Mechanical Engineering - Research Publications

Search Results

Now showing 1 - 8 of 8
  • Item
    Direct Numerical Simulation of Riblets Applied to Gas Turbine Compressor Blades at On- and Off-Design Incidences
    Kozul, M ; Nardini, M ; Przytarski, P ; Solomon, W ; Shabbir, A ; Sandberg, R (ASME, 2023-06-26)
    Any realizable increase in gas turbine efficiency has significant potential to reduce fuel burn and environmental impact. Streamwise micro-groove surfaces (‘riblets’) are well-known as a passive surface treatment to reduce drag, which may be useful in the context of increasing overall gas turbine efficiency. This paper presents the first direct numerical simulation of potentially performance-enhancing riblets on an axial flow high pressure compressor blade, where the micro-geometry of the riblets is fully resolved. The midspan section of a NACA6510 profile is considered at an engine-relevant true chord Reynolds number of 700,000 and Mach number 0.5 based on inlet conditions. Fixed triangular (or sawtooth) riblets are considered in the present numerical campaign. The current high-fidelity computational method permits the extraction of data such as the wall shear stress directly from the riblet surface. At the design incidence, the riblets tend to promote earlier transition to a turbulent flow over the suction side, yet significantly reduce the skin friction over the entire downstream chord to the trailing edge. The riblets reduce the viscous force over the blade by up to 18% at this nominal inflow incidence. Thus the current dataset permits new insight into the action of the riblets, since most studies of riblets on turbomachinery blades have been conducted experimentally where direct measurements of skin friction are not possible. The riblets are also able to reduce the skin friction over the high pressure compressor blade at off-design incidences, a promising result given axial flow compressors must cope with variable operating conditions.
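    As a rough illustration of the kind of post-processing such resolved wall data enables, the sketch below integrates a wall shear stress distribution along the blade surface to estimate a viscous-force reduction like the 18% quoted above. The arrays, the smooth/ribbed distributions, and the integration are illustrative assumptions, not the paper's actual data or pipeline.

    ```python
    # Minimal sketch: estimating a viscous-force reduction from wall shear stress
    # sampled along the blade surface. All values below are placeholders.
    import numpy as np

    def viscous_force(s, tau_w):
        """Trapezoidal integration of streamwise wall shear stress tau_w(s) [Pa]
        along the surface arc length s [m]; returns force per unit span [N/m]."""
        return float(np.sum(0.5 * (tau_w[1:] + tau_w[:-1]) * np.diff(s)))

    s = np.linspace(0.0, 0.1, 500)                # arc length along the blade [m]
    tau_smooth = 40.0 * np.exp(-5.0 * s) + 10.0   # placeholder smooth-wall distribution [Pa]
    tau_riblet = 0.85 * tau_smooth                # placeholder riblet-wall distribution [Pa]

    F_smooth = viscous_force(s, tau_smooth)
    F_riblet = viscous_force(s, tau_riblet)
    print(f"viscous force reduction: {100.0 * (1.0 - F_riblet / F_smooth):.1f}%")
    ```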
  • Item
    Heat Transfer Coefficient Estimation for Turbulent Boundary Layers
    Wang, S ; Xia, Y ; Abu Rowin, W ; Marusic, I ; Sandberg, R ; Chung, D ; Hutchins, N ; Tanimoto, K ; Oda, T (The University of Queensland, 2020-12-11)
    Convective heat transfer in rough wall-bounded turbulent flows is prevalent in many engineering applications, such as in gas turbines and heat exchangers. At present, engineers lack the design tools to accurately predict the convective heat transfer in the presence of non-smooth boundaries. Accordingly, a new turbulent boundary layer facility has been commissioned, where the temperature of an interchangeable test surface can be precisely controlled, and conductive heat losses are minimized. Using this facility, we can estimate the heat transfer coefficient (Stanton number, St), through measurement of the power supplied to the electrical heaters and also from measurements of the thermal and momentum boundary layers evolving over this surface. These methods have been initially investigated over a shorter smooth prototype heated surface and compared with existing St prediction models. Preliminary results suggest that we can accurately estimate St in this facility.
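    A minimal sketch of the heater-power route to the Stanton number described above, assuming St = q_w / (rho cp U_inf (T_w - T_inf)) with the convective flux taken as the supplied electrical power minus conductive losses per unit heated area; the symbol names and numbers are illustrative, not facility measurements.

    ```python
    # Hedged sketch of a Stanton number estimate from electrical heater power.
    def stanton_from_heater_power(P_heater, P_loss, area, T_wall, T_inf, rho, cp, U_inf):
        """St = q_w / (rho * cp * U_inf * (T_wall - T_inf)), with q_w the supplied
        electrical power minus conductive losses, per unit heated area."""
        q_w = (P_heater - P_loss) / area        # convective wall heat flux [W/m^2]
        h = q_w / (T_wall - T_inf)              # heat transfer coefficient [W/(m^2 K)]
        return h / (rho * cp * U_inf)           # Stanton number [-]

    # Illustrative numbers only (not measurements from the facility)
    St = stanton_from_heater_power(P_heater=500.0, P_loss=50.0, area=1.0,
                                   T_wall=318.0, T_inf=293.0,
                                   rho=1.2, cp=1005.0, U_inf=15.0)
    print(f"St ≈ {St:.4f}")
    ```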
  • Item
    Data-driven combustion modeling for a turbulent flame simulated with a computationally efficient solver
    Talei, M ; Ma, D ; Sandberg, R (ASME: The American Society of Mechanical Engineers, 2020)
    The use of machine learning (ML) for modeling is on the rise. In the age of big data, this technique has shown great potential to describe complex physical phenomena in the form of models. More recently, ML has frequently been used for turbulence modeling, while its use for combustion modeling is still emerging. Gene expression programming (GEP) is one class of ML that can be used as a tool for symbolic regression and thus improve existing algebraic models using high-fidelity data. Direct numerical simulation (DNS) is a powerful candidate for producing the required data for training GEP models and for validation. This paper therefore presents a highly efficient DNS solver known as HiPSTAR, originally developed for simulating non-reacting flows, in particular in the context of turbomachinery. This solver has been extended to simulate reacting flows. DNSs of two turbulent premixed jet flames with different Karlovitz numbers are performed to produce the required training data. GEP is then used to develop algebraic flame surface density models in the context of large-eddy simulation (LES). This work thus introduces new models that show excellent performance in predicting the flame surface density for premixed flames featuring different Karlovitz numbers.
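    The symbolic-regression idea behind GEP can be illustrated with a toy search over candidate algebraic flame surface density expressions scored against training data; the synthetic target, candidate building blocks, and random parameter search below are stand-ins for the actual GEP framework and DNS data.

    ```python
    # Toy illustration of symbolic regression for an FSD-style closure:
    # candidate algebraic expressions are scored against synthetic target data.
    import numpy as np
    import random

    rng = np.random.default_rng(0)
    c_bar = rng.uniform(0.05, 0.95, 200)                 # filtered progress variable (synthetic)
    sigma_ref = 4.0 * c_bar * (1.0 - c_bar)              # synthetic "high-fidelity" FSD target

    # Candidate building blocks an evolutionary search might combine
    candidates = {
        "c*(1-c)":       lambda c, a: a * c * (1.0 - c),
        "sqrt(c*(1-c))": lambda c, a: a * np.sqrt(c * (1.0 - c)),
        "c":             lambda c, a: a * c,
    }

    best = None
    for name, f in candidates.items():
        for _ in range(200):                             # crude parameter search
            a = random.uniform(0.0, 10.0)
            err = np.mean((f(c_bar, a) - sigma_ref) ** 2)
            if best is None or err < best[0]:
                best = (err, name, a)

    print(f"best expression: {best[2]:.2f} * {best[1]}  (MSE={best[0]:.3e})")
    ```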
  • Item
    LARGE EDDY SIMULATIONS OF HIGH ROSSBY NUMBER FLOW IN THE HIGH PRESSURE COMPRESSOR INTER-DISK CAVITY
    Saini, D ; Sandberg, R (ASME: The American Society of Mechanical Engineers, 2021)
    The focus of the present study is to understand the effect of Rayleigh number on a high Rossby number flow in a high pressure compressor (HPC) inter-disk cavity. These cavities form between the compressor disks of a gas turbine engine, and they are an integral part of the internal air cooling system. We perform highly resolved large eddy simulations for two Rayleigh numbers of 0.76 × 10^8 and 1.54 × 10^8 at a fixed Rossby number of 4.5 by solving the compressible Navier–Stokes equations. The results show a flow structure dominated by a toroidal vortex in the inner region of the cavity. In the outer region, the flow is observed to move radially outwards in Ekman layers formed on the side disks and to move radially inwards through the central core region of the cavity. An enhancement in the intensity of the radial flares is observed in the outer region of the cavity for the high Rayleigh number case, with no perceivable effect in the inner region. The near-shroud region is mostly dominated by the centrifugal buoyancy-induced flow, and the wall Nusselt number calculated at the shroud is in close agreement with that of centrifugal buoyancy-induced flow without an axial bore flow.
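    For orientation, the sketch below evaluates the two non-dimensional groups named above using one commonly quoted set of definitions for a rotating cavity with axial throughflow; the reference lengths, temperatures, and values are assumptions, not the parameters used in the paper.

    ```python
    # Hedged sketch of Rossby and centrifugal Rayleigh numbers for a rotating
    # cavity with axial throughflow; definitions and values are illustrative.
    def rossby_number(W_axial, omega, b):
        """Ro = W / (Omega * b): axial throughflow velocity over the rotational
        velocity scale (b = outer cavity radius)."""
        return W_axial / (omega * b)

    def centrifugal_rayleigh_number(omega, r_mean, b, dT, T_ref, nu, Pr):
        """Ra = Pr * (Omega^2 * r_mean) * beta * dT * b^3 / nu^2, with the
        centrifugal acceleration in place of gravity and beta = 1/T_ref for an
        ideal gas."""
        beta = 1.0 / T_ref
        return Pr * (omega**2 * r_mean) * beta * dT * b**3 / nu**2

    print(f"Ro ≈ {rossby_number(W_axial=225.0, omega=500.0, b=0.1):.1f}")
    print(f"Ra ≈ {centrifugal_rayleigh_number(500.0, 0.15, 0.1, 10.0, 400.0, 3e-5, 0.7):.2e}")
    ```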
  • Item
    MACHINE LEARNING FOR THE DEVELOPMENT OF DATA DRIVEN TURBULENCE CLOSURES IN COOLANT SYSTEMS
    Hammond, J ; Montomoli, F ; Pietropaoli, M ; Sandberg, R ; Michelassi, V (ASME: The American Society of Mechanical Engineers, 2020-06-22)
    This work shows the application of Gene Expression Programming to augment RANS turbulence closure modelling for flows through complex geometries designed for additive manufacturing, specifically for the design of optimised internal cooling channels in turbine blades. One of the challenges in internal coolant design is the heat transfer accuracy of the RANS formulation in comparison to higher fidelity methods, which are still not used in design on account of their computational cost. However, high fidelity data can be extremely valuable for improving current lower fidelity models, and this work shows the application of data driven approaches to develop turbulence closures for an internally ribbed duct. Different approaches are compared, and the results of the improved model are illustrated. The work shows the potential of using data driven models for accurate heat transfer predictions even in non-conventional configurations.
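    A GEP-trained closure of this kind is typically an explicit algebraic expression that can be evaluated pointwise; the sketch below shows the general shape of such an evaluation, an extra anisotropy built from a small tensor basis of the normalised strain and rotation tensors. The basis choice and coefficients are placeholders, not the closures trained in this work.

    ```python
    # Hedged sketch of evaluating a GEP-style algebraic anisotropy correction
    # from strain (S) and rotation (W) tensors; coefficients are placeholders.
    import numpy as np

    def extra_anisotropy(S, W):
        """a_ij = c1*T1 + c2*T2 + c3*T3 with a minimal tensor basis:
        T1 = S, T2 = S.W - W.S, T3 = S.S - (1/3) tr(S.S) I."""
        I = np.eye(3)
        T1 = S
        T2 = S @ W - W @ S
        T3 = S @ S - np.trace(S @ S) / 3.0 * I
        c1, c2, c3 = -0.1, 0.05, 0.02            # placeholder coefficients
        return c1 * T1 + c2 * T2 + c3 * T3

    # Example: simple shear, non-dimensionalised by a turbulence time scale
    S = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
    W = np.array([[0.0, 0.5, 0.0], [-0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
    print(extra_anisotropy(S, W))
    ```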
  • Item
    INTEGRATION OF MACHINE LEARNING AND COMPUTATIONAL FLUID DYNAMICS TO DEVELOP TURBULENCE MODELS FOR IMPROVED TURBINE WAKE MIXING PREDICTION
    Akolekar, HD ; Zhao, Y ; Sandberg, R ; Pacciani, R (ASME: The American Society of Mechanical Engineers, 2021)
    This paper presents the development of accurate turbulence closures for wake mixing prediction by integrating a machine-learning approach with Reynolds Averaged Navier-Stokes (RANS)-based computational fluid dynamics (CFD). The data-driven modelling framework is based on the gene expression programming (GEP) approach previously shown to generate non-linear RANS models with good accuracy. To further improve the performance and robustness of the data-driven closures, here we exploit the fact that GEP produces tangible models to integrate RANS into the closure development process. Specifically, rather than using a comparison of the GEP-based closure terms with a frozen high-fidelity dataset as the cost function, each GEP model is automatically implemented into a RANS solver and the subsequent calculation results are compared with reference data. By first using a canonical turbine wake with inlet conditions prescribed from high-fidelity data, we demonstrate that the CFD-driven machine-learning approach produces non-linear turbulence closures that are physically correct, i.e. they predict the right downstream wake development and maintain an accurate peak wake loss throughout the domain. We then extend our analysis to full turbine blade cases and show that the model development is sensitive to the training region due to the presence of deterministic unsteadiness in the near wake. Models developed including this region have artificially large diffusion coefficients that overcompensate for the vortex shedding that steady RANS cannot capture. In contrast, excluding the near wake region from the model development produces the correct physical model behavior, but predictive accuracy in the near wake remains unsatisfactory. We show that this can be remedied by using the physically consistent models in unsteady RANS, implying that the non-linear closure producing the best predictive accuracy depends on whether it will be deployed in RANS or unsteady RANS calculations. Overall, the models developed with the CFD-assisted machine-learning approach were found to be robust and to capture the correct physical behavior across different operating conditions.
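    The "CFD-driven" cost function described above can be sketched as a loop in which each candidate closure is run through a RANS calculation and scored against the reference wake profile; run_rans_with_model() and all data below are hypothetical stand-ins for the actual solver interface and training data.

    ```python
    # Hedged sketch of a CFD-in-the-loop cost function for GEP model training.
    import numpy as np

    def run_rans_with_model(candidate_model, y):
        """Hypothetical placeholder: implement the candidate closure in the RANS
        solver, run the case, and return the downstream wake loss profile."""
        return np.zeros_like(y)  # stand-in result

    def cfd_driven_cost(candidate_model, y_ref, wake_loss_ref):
        """RMS mismatch between the RANS-predicted and reference wake loss
        profiles for one candidate GEP closure."""
        wake_loss_rans = run_rans_with_model(candidate_model, y_ref)
        return float(np.sqrt(np.mean((wake_loss_rans - wake_loss_ref) ** 2)))

    # Usage: rank a (tiny) population of candidate closures by their CFD-driven cost
    y_ref = np.linspace(-0.5, 0.5, 101)                 # pitchwise coordinate (illustrative)
    wake_loss_ref = np.exp(-(y_ref / 0.1) ** 2)         # illustrative reference wake profile
    population = ["model_A", "model_B"]                 # stand-ins for GEP expressions
    costs = {m: cfd_driven_cost(m, y_ref, wake_loss_ref) for m in population}
    print(min(costs, key=costs.get))
    ```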
  • Item
    HIGH-FIDELITY SIMULATIONS OF A HIGH-PRESSURE TURBINE VANE SUBJECT TO LARGE DISTURBANCES: EFFECT OF EXIT MACH NUMBER ON LOSSES
    Zhao, Y ; Sandberg, R (ASME: The American Society of Mechanical Engineers, 2021)
    We report on a series of highly resolved large-eddy simulations of the LS89 high-pressure turbine (HPT) vane, varying the exit Mach number between Ma = 0.7 and 1.1. In order to accurately resolve the blade boundary layers and enforce pitchwise periodicity, we use, for the first time, an overset mesh method, which consists of an O-type grid around the blade overlapping with a background H-type grid. The simulations were conducted either with a synthetic inlet turbulence condition or including upstream bars. A quantitative comparison shows that the computationally more efficient synthetic method is able to reproduce the turbulence characteristics of the upstream bars. We further perform a detailed analysis of the flow fields, showing that varying the exit Mach number significantly changes the turbine efficiency by affecting the suction-side transition, blade boundary layer profiles, and wake mixing. In particular, the Ma = 1.1 case includes a strong shock that interacts with the trailing edge, causing an increased complexity of the flow field. We use our recently developed entropy loss analysis (Zhao and Sandberg, GT2019-90126) to decompose the overall loss into different source terms and identify the regions that dominate the loss generation. Comparing the different Ma cases, we conclude that the main mechanism for the extra loss generation in the Ma = 1.1 case is the shock-related strong pressure gradient interacting with the turbulent boundary layer and the wake, resulting in significant additional losses.
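    The loss decomposition referenced above rests on splitting the volumetric entropy generation rate into viscous and thermal contributions; the 1D sketch below illustrates that split with made-up near-wall profiles and constants, not the GT2019-90126 formulation or LS89 data.

    ```python
    # Hedged sketch: entropy generation split into viscous-dissipation and
    # thermal-gradient parts for an illustrative 1D near-wall profile.
    import numpy as np

    def entropy_generation(T, dudy, dTdy, mu, k):
        """S_visc = mu*(du/dy)^2 / T and S_therm = k*(dT/dy)^2 / T^2 (per unit volume)."""
        return mu * dudy**2 / T, k * dTdy**2 / T**2

    y = np.linspace(1e-4, 0.01, 200)                 # wall-normal coordinate [m]
    T = 300.0 + 20.0 * np.exp(-y / 2e-3)             # illustrative temperature profile [K]
    u = 50.0 * (1.0 - np.exp(-y / 1e-3))             # illustrative velocity profile [m/s]
    S_visc, S_therm = entropy_generation(T, np.gradient(u, y), np.gradient(T, y),
                                         mu=1.8e-5, k=0.026)
    # Uniform grid, so the spacing cancels in the ratio of integrated contributions
    print(f"viscous share of entropy generation: {S_visc.sum() / (S_visc + S_therm).sum():.2f}")
    ```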
  • Item
    UNSTEADY SIMULATIONS OF A TRAILING-EDGE SLOT USING MACHINE-LEARNT TURBULENCE STRESS AND HEAT-FLUX CLOSURES
    Lav, C ; Sandberg, R (ASME: The American Society of Mechanical Engineers, 2020-06-22)
    The trailing edge slot is a canonical representation of the pressure-side bleed flow encountered in high pressure turbines. Predicting the flow and temperature downstream of the slot exit remains challenging for RANS and URANS, with both significantly over-predicting the adiabatic wall effectiveness. This over-prediction is attributable to the incorrect mixing prediction in cases where vortex shedding is present. In the case of RANS, the modelling error is rooted in not properly accounting for the shedding scales, while in URANS the closures account for the shedding scales twice: once by resolving the shedding and again with the model for all the scales. Here, we present an approach which models only the stochastic scales that contribute to turbulence while resolving the scales that do not, i.e. scales considered as contributing to deterministic unsteadiness. The model for the stochastic scales is obtained through a data-driven machine learning algorithm, which produces a bespoke turbulence closure model from a high-fidelity dataset. We use the best closure for the anisotropy (at a blowing ratio of 1.26) obtained in the a priori study of Lav, Philip & Sandberg [A New Data-Driven Turbulence Model Framework for Unsteady Flows Applied to Wall-Jet and Wall-Wake Flows, 2019] and conduct compressible URANS calculations. In the first stage, the energy equation is solved utilising the standard gradient diffusion hypothesis for the heat-flux closure. In the second stage, we develop a bespoke heat-flux closure using the machine-learning approach for the stochastic heat-flux components only. Subsequently, calculations are performed using the machine-learnt closures for the heat flux and the anisotropy together. Finally, the generalisability of the developed closures is evaluated by testing them on additional blowing ratios of 0.86 and 1.07. The machine-learnt closures developed specifically for URANS calculations show significantly improved predictions for the adiabatic wall effectiveness across the different cases.
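    To make the two modelling stages concrete, the sketch below contrasts the standard gradient diffusion hypothesis with a generic anisotropy-weighted correction of the kind a machine-learnt heat-flux closure might take; the corrected form and every coefficient are placeholders, not the trained closures from this paper.

    ```python
    # Hedged sketch: standard GDH heat-flux closure versus a placeholder
    # machine-learnt correction; all numbers are illustrative.
    import numpy as np

    def heat_flux_gdh(nu_t, Pr_t, gradT):
        """GDH: u_i'T' = -(nu_t / Pr_t) * dT/dx_i."""
        return -(nu_t / Pr_t) * gradT

    def heat_flux_ml(nu_t, Pr_t, gradT, a_ij, c=0.3):
        """Placeholder corrected form: u_i'T' = -(nu_t/Pr_t) * (I + c*a_ij) dT/dx_j,
        where a_ij is a (modelled) anisotropy tensor and c a placeholder constant."""
        return -(nu_t / Pr_t) * (np.eye(3) + c * a_ij) @ gradT

    gradT = np.array([0.0, -200.0, 0.0])             # illustrative wall-normal gradient [K/m]
    a_ij = np.diag([0.1, -0.05, -0.05])              # illustrative anisotropy tensor
    print("GDH:", heat_flux_gdh(1e-3, 0.9, gradT))
    print("ML :", heat_flux_ml(1e-3, 0.9, gradT, a_ij))
    ```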