Mechanical Engineering - Theses
Investigation of Direct Combustion Noise in Turbulent Premixed Jet Flames Using Direct Numerical Simulations
Direct combustion noise plays a key role in initiating thermoacoustic instability in lean, premixed gas turbines. Moreover, many combustion devices produce a high level of noise while being subjected to increasingly stringent noise regulations. Therefore, achieving a better understanding of sound generation by premixed flames is crucial for designing safer and quieter combustion devices. With the advancement in High-Performance Computing (HPC), high-fidelity simulations such as Direct Numerical Simulation (DNS) have received increasing attention as a means to improve our fundamental understanding of turbulent flames. This thesis aims to study the mechanism of sound generation in turbulent premixed jet flames using DNS and state-of-the-art post-processing methods. A DNS dataset featuring sound generation by turbulent premixed flames with simple chemistry is first analysed. Using Spectral Proper Orthogonal Decomposition (SPOD), two types of flame coherent structures responsible for combustion noise are identified. The first type arises in the jet's shear layer, originates from the Kelvin-Helmholtz (K-H) instability, and indirectly produces sound through the deformation of the flame front. The second type is found near the jet centreline and is linked to small-scale, non-linear flame dynamics. Even though their energy content is lower than that of the K-H structures, they are an important feature in explaining the broadband nature of combustion noise. Then, a framework to identify the location and topology of annihilation events, and to study their generated sound, is presented. This study reveals that different topologies are similar in terms of the generated sound. In addition, a spectral analysis shows that flame annihilation is the physical mechanism by which the air-fuel ratio affects the radiated sound amplitude at high frequencies.
Finally, DNS datasets of turbulent premixed jet flames with a semi-global and a skeletal chemical mechanism are produced and analysed, to investigate the impact of chemical modelling on combustion noise. Large differences between the two cases are observed in the overall sound pressure level (OASPL) and on the high-frequency side of the acoustic spectrum. Analysis of the acoustic source term resulting from the heat release rate fluctuations demonstrates that the post-flame region contributes minimally to sound generation. Furthermore, the most exothermic reaction in each mechanism is by far the dominant source of heat release rate fluctuations, and hence of sound generation. It is observed that the OASPL discrepancy between the two chemical mechanisms arises from the differences in the peak amplitude of the heat release rate. Then, a modelling approach shows that the acoustic spectrum in the high-frequency range results from highly curved flamelets and can be estimated from the flame curvature statistics. This approach demonstrates that the high-frequency acoustic discrepancy arises from a more wrinkled flame in the case featuring the more complex chemistry. Overall, to accurately predict the sound generated by turbulent premixed flames, a reduced chemical mechanism needs to correctly capture the flame response to turbulence.
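The SPOD used to extract the coherent structures above can be illustrated with a short sketch. The following is a generic, minimal batch SPOD (Welch-style blocked FFTs followed by an eigendecomposition of the cross-spectral density at each frequency), not the thesis's actual post-processing code; the function name and default parameters are illustrative.

```python
import numpy as np

def spod(q, nfft=64, overlap=32, dt=1.0):
    """Batch SPOD of a mean-subtracted snapshot matrix q (n_space, n_time).

    Returns the FFT frequencies, the per-frequency modal energies (sorted
    descending) and the corresponding spatial modes."""
    step = nfft - overlap
    n_space, n_time = q.shape
    n_blk = (n_time - overlap) // step
    window = np.hanning(nfft)
    # Windowed FFT of each overlapping block (Welch-style segmentation).
    qhat = np.empty((n_space, nfft, n_blk), dtype=complex)
    for b in range(n_blk):
        seg = q[:, b * step:b * step + nfft] * window
        qhat[:, :, b] = np.fft.fft(seg, axis=1)
    freqs = np.fft.fftfreq(nfft, dt)
    energies = np.empty((nfft, n_blk))
    modes = np.empty((nfft, n_space, n_blk), dtype=complex)
    for f in range(nfft):
        X = qhat[:, f, :] / np.sqrt(n_blk)
        # Eigendecomposition of the small block-wise cross-spectral matrix.
        lam, theta = np.linalg.eigh(X.conj().T @ X)
        order = np.argsort(lam)[::-1]
        lam, theta = np.maximum(lam[order], 0.0), theta[:, order]
        energies[f] = lam
        modes[f] = X @ theta / np.sqrt(np.maximum(lam, 1e-30))
    return freqs, energies, modes
```

A coherent oscillation in the data then shows up as a dominant leading eigenvalue at its frequency, which is how the K-H and centreline structures are separated from broadband turbulence.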
Optimal Performance of a Wind Farm with and without Battery Energy Storage
The increasing penetration of wind in power systems can result in various challenges for system security and reliability, as well as wind farm investability. These challenges require further research to develop an understanding of the operation of a wind farm with and without the use of complementary technologies, such as energy storage, particularly when the wind turbines have Frequency Control Ancillary Services (FCAS) capability. This thesis therefore first presents a hierarchical, data-driven, reduced-order model of a wind farm, which accounts for the correlation between the wind turbines' power outputs. It finds that the cross-correlation between each turbine's power generation depends on the frequency of the wind disturbance and the distance between wind turbines, so that the cross-correlation can be related to the convective length scale in the incoming wind. An investigation of the number of wind turbines required to simulate the wind farm's power generation then indicates that there is an inherent trade-off between model accuracy and complexity. An optimisation model is then developed to investigate the optimal performance of the wind farm participating in the energy and FCAS markets with and without a battery storage system. This requires modelling of the battery costs along with the revenues in the energy and FCAS markets, as well as a simplified version of the Causer Pays Method used in the Australian National Electricity Market (NEM). The optimal performance of the Mt Mercer wind farm, located in Victoria, Australia, is then examined. It is found that the wind farm's participation in the FCAS markets can improve its financial performance. This analysis shows that the wind farm mainly tends to participate in the FCAS lower regulation market, due to the higher prices of this service and the absence of any requirement for curtailment before the start of a dispatch interval (pre-curtailment).
An investigation of the impact of wind generation forecast accuracy on the system performance also finds that with a better forecasting system, the total performance of the wind farm improves. Finally, the optimal integration of a lithium-ion battery into a wind farm is examined. Assessments find that the battery is not investable without substantial subsidies when the battery participates only in the energy market. However, it is also found that participation in the FCAS markets can significantly improve the battery's investability, but its financial viability is highly sensitive to FCAS prices. In addition, it is found that the introduction of wind farm frequency control capability reduces the optimal value and capacity of battery storage. Investigation of different wind generation forecasting systems identifies that the improvement of forecast accuracy also reduces the optimal battery value and capacity, as there are fewer opportunities for the battery to reduce the wind farm regulation payments.
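The kind of dispatch optimisation described above can be sketched as a small linear program. This is a deliberately simplified, energy-market-only arbitrage model with illustrative parameter values; the FCAS co-optimisation, Causer Pays charges and battery cost model of the thesis are omitted.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_dispatch(prices, P=1.0, E=2.0, eta=0.95, soc0=0.0):
    """Choose hourly charge c_t and discharge d_t (MW) to maximise energy
    market revenue for a battery of power limit P (MW), capacity E (MWh),
    one-way efficiency eta and initial state of charge soc0 (MWh)."""
    T = len(prices)
    p = np.asarray(prices, dtype=float)
    cost = np.concatenate([p, -p])            # minimise charging cost - revenue
    L = np.tril(np.ones((T, T)))              # running-sum operator
    # State of charge after hour k: soc0 + eta*(L c)_k - (L d)_k / eta
    A_soc = np.hstack([eta * L, -L / eta])
    A_ub = np.vstack([A_soc, -A_soc])         # keep 0 <= soc <= E every hour
    b_ub = np.concatenate([np.full(T, E - soc0), np.full(T, soc0)])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, P)] * (2 * T))
    charge, discharge = res.x[:T], res.x[T:]
    return charge, discharge, float(p @ (discharge - charge))
```

With prices of, say, [20, 10, 50, 40] $/MWh, the optimiser charges in the two cheap hours and discharges in the two expensive ones, with the round-trip efficiency limiting the recoverable energy.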
Dynamic clamp analysis of ion channel function
Ion channels regulate neuronal excitability by controlling the ion flow through the neuronal membrane. Therefore, neuronal ion channel dysfunction can directly impact the function of neurons in the brain and lead to numerous disorders such as epilepsy and autism. As such, characterizing ion channels can provide valuable insights into these disorders and facilitate treatment. Current ion channel characterization methods predominantly use the voltage clamp (VC) approach, which typically involves multiple time-consuming step protocols to capture ion channel dynamics corresponding to different membrane voltages. As such, these approaches neither recapitulate the natural behavior of the ion channel in the brain nor provide the means to directly investigate the relationship between these dynamics and neuronal excitability. This thesis addresses the limitations of current VC methods and implements the dynamic clamp (DC) approach to characterize ion channels. DC is a real-time closed-loop system that incorporates computational models of neuronal systems with real ion channels. It provides more biologically natural recordings and is capable of directly determining the impact of ion channel dynamics on neuronal excitability. I investigate two applications to characterize ion channels using DC in this thesis. First, ion channel kinetics are mathematically modelled based on DC recordings. Second, the ion channels are functionally characterized using features extracted from DC recordings. My approach to mathematically model the fast kinetics of ion channels requires only DC recordings of short duration. It utilizes efficient global optimization algorithms to estimate the model that best matches the recorded DC data. To further enhance the performance of the approach, I have identified an optimal DC stimulation strategy, and an extended optimization method is proposed for noisy DC recordings. This approach was more accurate than two existing VC-based methods.
The model derived from this approach could also predict firing patterns of experimental data with high accuracy. The DC-based approach was next extended to model slow kinetics together with fast kinetics. This involved a new DC stimulation strategy to record DC data and a three-step post-hoc optimization to estimate model parameters. The extended approach could estimate models that accurately predict action potential (AP) firing patterns that occur during longer/sustained stimulation and could recreate the AP firing seen in experimental data. This thesis also proposes a workflow to functionally characterize ion channel variants such as mutations using DC data. The workflow creates a two-dimensional map of the variants in which their positions correspond to their functional characteristics. Multiple variants of two major neuronal sodium channels, Nav1.1 and Nav1.2, were investigated and their two-dimensional mappings were determined. The results suggested a clear functional separation between variants, not only corroborating the findings of previous conventional functional studies but also providing new insights into variant functionality. The two applications of DC presented in this thesis demonstrate the potential of DC for ion channel characterization. Collectively, they provide crucial information on ion channel dynamics that will assist the development of effective treatments for neurological disorders involving mutant channels and will enable assessing the direct impact of pharmacological interventions on neuronal excitability.
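The closed-loop principle of DC, integrating a channel model against the measured membrane voltage and injecting the resulting current, can be sketched as follows. The one-gate Boltzmann model and all parameter values here are illustrative, not the fitted kinetics from the thesis.

```python
import numpy as np

def m_inf(V, v_half=-35.0, k=7.0):
    """Steady-state activation curve (Boltzmann); v_half and k are
    illustrative values, not fitted thesis parameters."""
    return 1.0 / (1.0 + np.exp(-(V - v_half) / k))

def dynamic_clamp_current(V_trace, dt, g_max=10.0, E_rev=60.0, tau=0.5):
    """Closed-loop DC sketch: integrate a one-gate channel model against the
    recorded membrane voltage and return the current (arbitrary units) that
    the DC system would inject at each time step."""
    m = m_inf(V_trace[0])                     # gate starts at steady state
    I = np.empty_like(V_trace)
    for i, V in enumerate(V_trace):
        m += dt * (m_inf(V) - m) / tau        # first-order gating kinetics
        I[i] = -g_max * m * (V - E_rev)       # model current fed back to cell
    return I
```

In a real DC rig the voltage sample, model update and current injection all happen within one hard real-time cycle; here the recorded trace simply stands in for the live measurement.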
Understanding loss mechanisms in turbomachinery to increase efficiency
Improvements to the design of any mechanical device are ultimately aimed at increasing its efficiency. Turbomachinery, more specifically the low pressure turbine (LPT) in aircraft engines, is no exception. In order to enhance its efficiency, the specific fuel consumption has to be reduced, which implies that the different loss mechanisms (mechanisms of entropy generation) in the LPT have to be studied. Conventional loss calculations consider a control volume around the blade and find the total loss in that region. However, there are a number of sources of loss around LPT blades, prominently the mixing out of incident wakes from the stator to the rotor. In order to design more efficient blades, knowing the total loss in the blade control volume is not enough; it is important to quantify the various sources of loss due to wake mixing. Denton has derived an analytical expression for losses due to mixing, which consists of three terms: losses from the trailing edge region, losses due to boundary layer effects, and losses due to blade blockage effects. This equation, however, is built upon a number of assumptions, such as steady, incompressible flow conditions, which are not realised in real flows. Denton does not define a distinct trailing edge region, nor methods to calculate the boundary layer thickness and base pressure. In order to enhance the applicability of Denton’s mixed-out loss equation, it is important to identify the dependence of Denton’s equation on these parameters, which is an important objective of this work. This work aims to improve the robustness of Denton’s equation by analysing the effect of not defining the trailing edge region properly on the total loss, and then proposes four criteria to define the trailing edge region. This analysis was conducted on an LPT blade with steady flow conditions, and it was found that the losses calculated with the aid of the trailing edge criteria lie within 5% of the loss in the blade control volume.
Sensitivity analyses of the total loss from Denton’s equation have been conducted using boundary layer thickness criteria, base pressure and non-uniform flow averaging techniques as input parameters. It was found that there are large variances in the total loss if the boundary layer thickness is not defined correctly, and therefore researchers need to be very careful and consistent in their selection of boundary layer thickness. Visual observation, along with the boundary layer thickness criteria identified in this work, will serve as a good method to determine the boundary layer thickness. Denton’s equation does not deal with unsteady dissipation effects. In order to analyse those effects, an analysis has been conducted on an unsteady flow field consisting of incoming wakes, which has been broken up into 20 quasi-steady phases. Denton’s losses were calculated for each of these phases in an attempt to understand the effects of unsteady flow on Denton’s equation. Based on the analysis conducted, it was concluded that better averaging techniques are required to average the losses from the 20 phases to quantify the losses due to unsteady effects. Overall, Denton’s analysis, which is dependent on a number of criteria, has the potential to give a good rough estimate of the loss sources. In the absence of more accurate methods, it is helpful for designers and, if conducted by the same user, allows for good qualitative comparison of different flow configurations. It is, however, difficult to quantify the exact amounts of the losses based on this method. For a quantitative comparison with total losses in the control volume, it is probably necessary to come up with new ways of quantifying the different sources. The report provides ideas for future analyses to further improve the understanding of loss mechanisms via Denton’s equation and other methods.
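For reference, the commonly quoted form of Denton’s three-term mixed-out trailing edge loss (the expression whose inputs are varied in the sensitivity studies above) is, in terms of the kinetic energy loss coefficient $\zeta$,

$$\zeta \;=\; \frac{-C_{pb}\,t}{w} \;+\; \frac{2\theta}{w} \;+\; \left(\frac{\delta^{*}+t}{w}\right)^{2},$$

where $C_{pb}$ is the base pressure coefficient, $t$ the trailing edge thickness, $\theta$ and $\delta^{*}$ the boundary layer momentum and displacement thicknesses at the trailing edge, and $w$ the passage throat width. The first term is the base pressure (trailing edge) contribution, the second the boundary layer contribution, and the third the blockage contribution; the base pressure, the boundary layer thicknesses and the effective trailing edge region are exactly the inputs the thesis identifies as under-specified.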
Computational coronary arterial fluid dynamics: from stenotic to rough-wall flow
Coronary artery disease (CAD) is a leading cause of mortality. CAD is usually caused by the build-up of plaque, clinically known as stenosis, which narrows the arteries and hence limits blood flow to the heart. It has been found that the development of stenosis is closely associated with local haemodynamics, which can be altered by changes in arterial shape, such as plaque formation or stenting. The objective of this study is to better understand the post-narrowing and in-stent haemodynamic environments to gain insights into the flow physics associated with stenosis development. Computational fluid dynamics (CFD) techniques are first used to simulate pulsatile blood flow in full-scale straight and curved stenotic coronary arteries under a physiological inlet velocity waveform. The flow-pressure relation in simplified flows through multiple sequential stenoses is then discussed. Lastly, investigations are carried out in simplified smooth models superposed with an egg-carton-type surface to mimic in-stent coronary arterial flow. The findings on flow characteristics can contribute towards the future prediction and diagnosis of coronary-related complications. Straight and curved arterial flows with three different degrees of stenosis are studied in both Newtonian and non-Newtonian fluids. The time-dependent inertial momentum is found to contribute to reverse flow development in the proximity of the post-stenotic region and to correlate negatively with the reverse flow size. Flow velocity and wall shear stress (WSS) in Newtonian and non-Newtonian fluids exhibit larger differences as the degree of stenosis increases. In the presence of curvature, low WSS is found to concentrate at the inner wall downstream of the stenosis, collocated with the reverse flow region. As stenosis severity progresses, the secondary flow morphology distal to the stenosis evolves into a double-paired vortex structure and promotes the growth of the reverse flow region.
For non-Newtonian flow, smaller reverse flow bubbles distal to the stenosis are observed, and the difference between Newtonian and non-Newtonian fluids is more pronounced in higher-degree stenosis cases. Overall, the haemodynamic behaviours downstream of a stenosis are affected simultaneously by the stenosis degree, the instantaneous inertial momentum and, if curvature is present, the secondary flow morphology. The relative location of low WSS (and reverse flow) becomes a potential trigger for the growth of stenosis. The correlation between the haemodynamics in the post-stenotic region and potential clinical complications implies the necessity of determining the severity of stenosis. Virtual Fractional Flow Reserve (vFFR), a computational technique calculating the pressure drop across a stenosis, is considered as an adjunct to invasively determined FFR, the current standard of clinical practice. The flow-pressure relation across multiple stenoses is analysed using both experimental and numerical approaches. A linear correlation between pressure drop and flow rate is found, irrespective of the number of stenoses. A negligible difference between steady and pulsatile flows is also observed. These conclusions may improve the clinical applicability of vFFR. In in-stent coronary arterial flow, both increasing the roughness height and decreasing the spacing reduce the shear rates (due to the increased proportion of pressure drag) near the trough of the roughness, and hence encourage reverse flow formation. In non-Newtonian fluids, elevated relative viscosities are pronounced near the trough of the roughness, while low viscosities are found around the peak of the roughness. This trend becomes more pronounced by increasing the roughness height or decreasing the wavelength. As a result, reverse flow is less likely to occur near the trough of the roughness in non-Newtonian fluids.
Comparing the time-averaged velocity and WSS obtained with different blood rheology models shows consistency from both qualitative and quantitative perspectives, suggesting that the rheology models are interchangeable in such simulations.
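A widely used way to represent shear-thinning blood rheology of the kind compared above is the Carreau model; the thesis does not state which specific models it used, so the sketch below uses the Carreau form with parameter values commonly quoted for human blood.

```python
def carreau_viscosity(gamma_dot, mu0=0.056, mu_inf=0.00345,
                      lam=3.313, n=0.3568):
    """Carreau viscosity (Pa.s) at shear rate gamma_dot (1/s).

    Defaults are commonly quoted fits for human blood (assumed values, not
    taken from the thesis): zero-shear viscosity mu0, infinite-shear
    viscosity mu_inf, relaxation time lam (s) and power-law index n."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)
```

At low shear rates the viscosity approaches mu0 and at high shear rates it approaches mu_inf, which is why non-Newtonian effects concentrate in the low-shear regions near the roughness troughs described above.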
Compressible turbulent wakes in constant area pressure gradients: simulation and modelling
Improving turbomachinery efficiency today is directly related to quantifying and reducing the various sources of losses. Of these, the wake mixing loss, resulting from wakes produced by the blade trailing edge, is of prime interest. These wakes, when developing spatially through the periodic constant area passage in the stator-rotor row, are exposed to pressure gradients which can impact the wake evolution and consequently the wake mixing loss. Since a study on the effect of the pressure gradient in isolation is not possible in an actual turbomachine stage, a canonical case study of a statistically two-dimensional turbulent wake is proposed to understand the underlying flow physics arising from the presence of pressure gradients. The usual canonical setup of subjecting a wake to pressure gradients is achieved by changing the passage area, i.e. if the downstream area decreases (increases) a favourable pressure gradient or FPG (adverse pressure gradient or APG) exists. However, as turbomachinery wakes develop in a constant area passage in the presence of pressure gradients, imposition of pressure gradients in the proposed canonical setup is through a ramped body force term to the momentum and total energy equations while the wake is allowed to develop spatially in a region of fixed width. Employing compressible high-fidelity simulations, the resultant mean velocity statistics, wake width, energy budgets and entropy generation rates are scrutinised to assess the effect of the pressure gradients, and where possible, the similarities and differences to the conventional case of variable area pressure gradients are discussed. The results show that the effect of a constant area pressure gradient on flow statistics is non-trivial, resulting from significant density changes. The pressure gradients also have an effect on the different energy budgets, which produces a gain for FPG and loss for APG in the mean kinetic energy. 
Consequently, the entropy generation rate, which is indirectly related to the wake mixing loss, diminishes and augments for the FPG and APG respectively, compared to the zero pressure gradient (ZPG). Additionally, the effect of different passage heights ($H$) relative to the wake half-width ($\delta$) is also studied, where it was observed that $\delta$, and hence the spreading, depends primarily on the wake-wake interaction for small $H$ and on the pressure gradients for larger $H$. While the understanding developed through the data generated by the high-fidelity simulations is invaluable, prediction of these flows is still a challenge with the existing low-fidelity tools such as URANS, which are still used for designing turbomachines in industry. The main issue with URANS is the poor underlying turbulence closure: the Boussinesq approximation. In recent years, turbulence modelling development has received a boost through the assimilation of machine-learning methods and the increasing availability of high-fidelity datasets. Thus, in the next phase of the project, the prediction of the wake flow using URANS is improved by developing a new turbulence closure using the high-fidelity data and a symbolic machine-learning algorithm: gene expression programming. The closure is obtained as part of a novel framework developed specifically for flows exhibiting organised unsteadiness, such as the vortex shedding in the wake. The framework, titled the data-driven stochastic closure simulation (DSCS), consists of three parts. First, using triple decomposition, the high-fidelity data is split into organised motion and stochastic turbulence. A data-driven machine-learning approach is then used to develop a closure only for the stochastic part of the turbulence. Finally, unsteady calculations are conducted, which resolve the organised structures and model the unresolved turbulence using the developed bespoke turbulence closure.
A demonstration of DSCS is presented using the canonical dataset of the ZPG wake generated previously. The obtained closure suggests lowered turbulent diffusion, which upon implementation shows a significant improvement in the mean velocity and Reynolds stress profiles compared with the standard turbulence closure. The developed closure is then evaluated on six different case studies: ZPG wakes at different Reynolds numbers and wakes in the presence of pressure gradients. The new closure consistently outperforms the standard closure in all the cases, indicating that the closure is not only reusable but also robust to changing flow conditions. Thus, the results and observations on the turbulent wake evolution in the presence of constant area pressure gradients, from both simulation and modelling standpoints, can serve as a guide in the design of turbomachinery, i.e. in predicting and minimising the loss produced by wake mixing.
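The triple decomposition that opens the DSCS framework can be illustrated on a one-dimensional signal: with a known shedding period, the field splits into a time mean, a phase-averaged organised part and a stochastic residue. A minimal sketch, assuming uniformly sampled data and an integer period in samples:

```python
import numpy as np

def triple_decompose(u, period):
    """Split a signal into mean, organised (phase-averaged) and stochastic
    parts: u = u_mean + u_tilde + u_prime."""
    n = (len(u) // period) * period           # keep whole shedding cycles
    u = u[:n]
    u_mean = u.mean()
    cycles = (u - u_mean).reshape(-1, period)
    u_tilde = np.tile(cycles.mean(axis=0), n // period)  # organised motion
    u_prime = u - u_mean - u_tilde                        # stochastic residue
    return u_mean, u_tilde, u_prime
```

In DSCS only the statistics of `u_prime` feed the machine-learnt closure, while the organised motion is resolved directly by the unsteady calculation.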
Turbulence Model Development and Implementation for Low Pressure Turbines using a Machine Learning Approach
The design of the gas turbine, which is the workhorse of the aviation industry, has reached a high degree of maturity, given that the first gas turbine flew in the late 1930s. Despite this, the industrial sector is looking towards harnessing even incremental points of efficiency with novel methods, which can translate to millions of dollars of savings and large reductions in carbon emissions. Current gas turbine design is primarily carried out using low-fidelity simulations due to their low cost and user-friendliness. However, these simulations lack the accuracy of high-fidelity simulations, largely due to the use of a linear stress-strain relation – the Boussinesq approximation. With increasing computing power, high-fidelity simulations are becoming increasingly commonplace but are still not feasible as an iterative industrial design tool. In order to bridge the gap between high and low-fidelity simulations, certain high-fidelity data sets can be harvested to extract meaningful physics-based insights with machine learning processes to improve the accuracy of iterative low-fidelity calculations. This thesis focuses on improving low-fidelity modelling strategies (Reynolds–Averaged Navier–Stokes (RANS)) for low pressure turbine (LPT) flows, by harnessing meaningful physics-based information from high-fidelity data using a machine learning approach – gene expression programming (GEP). Improvement in the accuracy of the existing linear stress-strain closure relations is sought by developing machine-learnt explicit algebraic Reynolds stress models (EARSM). Of the many physical phenomena that occur in an LPT, designers are very interested in being able to accurately model the wake mixing using RANS, as this phenomenon governs the stagnation pressure loss in a turbine and also because existing RANS-based turbulence models fail to accurately predict it.
Therefore, the goal of this thesis is to develop and implement non-linear EARSMs to enhance the wake mixing prediction in LPTs using GEP and high-fidelity data sets at realistic engine operating conditions. Firstly, an extensive analysis of the existing RANS-based turbulence models for LPTs with steady inflow conditions was conducted. None of these RANS models were able to accurately reproduce wake loss profiles based on high-fidelity data. However, the recently proposed k-v2-omega transition model was found to produce the best agreement with high-fidelity data in terms of blade loading and boundary layer behaviour, and was thus selected as the baseline model for turbulence closure development. Using different training regions for model development, the resulting closures were extensively analysed both in an a priori sense (without running any CFD) and while running CFD calculations. Importantly, to assess their robustness, the trained models were tested both on the cases they were trained on and on previously unseen testing cases with different flow features. The developed models improved the prediction of the Reynolds stresses, turbulent kinetic energy (TKE) production, wake loss profiles and wake maturity across all cases. The existing GEP framework was extended to include RANS feedback during the model development process. It was found that the models generated via this method allow greater flexibility to the user in terms of selecting metrics of direct interest. The models returned offer a higher degree of numerical stability and robustness across different flow conditions and even geometries. Models developed on the LPT were tested on a high pressure turbine case and vice versa, and some of the models were able to reduce the peak wake loss error by up to 90% over the Boussinesq approximation in this cross-validation study.
A zonal model development approach was proposed with the aim of enhancing the wake mixing prediction of unsteady RANS calculations for LPTs with unsteady inflow conditions. High-fidelity time-averaged and phase-lock averaged data at a realistic isentropic Reynolds number and two reduced frequencies, i.e. with discrete incoming wakes and with wake ‘fogging’, were used as reference data. This is the first known study to develop machine learning based turbulence models for unsteady flows, and also the first to use phase-lock averaged data for this purpose. Models developed via phase-lock averaged data were able to capture the effect of certain prominent physical phenomena in LPTs, such as wake-wake interactions, whereas models based on the time-averaged data could not. Correlations with the flow physics led to a set of models that can effectively enhance the wake mixing prediction across the entire LPT domain for both cases. Based on a newly developed error metric, the developed models reduced the a priori error over the Boussinesq approximation by 45% on average. Based on the analysis conducted in this work, a few best-practice guidelines have been proposed, which can offer future designers insight into the GEP-based model development process. Overall, this study showcases that GEP is a promising avenue for future RANS-based turbulence model development.
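The non-linear EARSMs discussed above express the Reynolds stress anisotropy as a linear combination of tensor basis functions of the mean strain and rotation rate tensors (Pope's basis), with GEP searching for the scalar coefficients. A minimal sketch using only the first three of the ten basis tensors and illustrative, untrained coefficients:

```python
import numpy as np

def earsm_anisotropy(S, W, G=(-0.09, 0.02, 0.01)):
    """Anisotropy tensor a_ij from the first three terms of Pope's tensor
    basis, given (non-dimensionalised) strain rate S and rotation rate W.
    The coefficients G are illustrative placeholders, not trained values."""
    I = np.eye(3)
    T1 = S                                       # linear (Boussinesq-like) term
    T2 = S @ W - W @ S                           # strain-rotation interaction
    T3 = S @ S - np.trace(S @ S) / 3.0 * I       # quadratic strain, deviatoric
    return G[0] * T1 + G[1] * T2 + G[2] * T3
```

Each basis tensor is symmetric and trace-free by construction, so any coefficient set GEP proposes yields a physically admissible (realisability aside) anisotropy tensor; the machine-learning step only has to discover scalar functions for the G coefficients.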
Geometric properties of streamlines in turbulent wall-flows
Streamline geometry is studied here in the context of turbulent wall flows. Complex but coherent motions form and rapidly evolve within wall-bounded turbulent flows. Research over the past two decades broadly indicates that the momentum transported across the flow derives from the dynamics underlying these coherent motions. This spatial organization, and its inherent connection to the dynamics, motivates the present research. The local streamline geometry, characterised by curvature $(\kappa)$ and torsion $(\tau)$, has an apparent connection to the dynamics of the flow. The present results indicate that these geometrical properties change significantly with wall-normal position. One part of this research is thus to relate the observed changes in the streamline geometry to the known structure and scaling behaviours of the mean momentum equation. Towards this aim, the curvature and torsion of the streamlines at each point in the volume of existing boundary layer and channel DNS datasets have been computed. The computation of $\kappa$ and $\tau$ arises from the local construction of the Frenet-Serret coordinate frame. The present methods for estimating $\kappa$ include components of curvature in the streamwise, wall-normal and spanwise directions. The analysis shows that even though the mean wall-normal velocity is zero (e.g., for channel flow), the wall-normal curvature component shows a notable positive peak close to the wall. This arises from the strong wallward flow followed by a weak movement of the streamlines away from the wall. The correlation coefficient and the conditional average of the wall-normal velocity corresponding to the wall-normal curvature reveal an anti-correlation between the two quantities. The probability density functions of the curvatures have been calculated at wall-normal locations of interest and compared with a power-law scaling with exponent $-4$ for both the total and fluctuating fields.
This scaling of curvature values describes the geometric features of the length scales that are smaller than the Kolmogorov scale. The onset of this scaling with wall distance has a potential connection to the three-dimensionalization of the vorticity field and the stagnation point structure in the inertial domain. In this region, the mean radius of curvature scales like the Taylor microscale. The probability density functions of the wall-normal curvature show that high-curvature regions similar to those in isotropic flow begin to appear outside the viscous wall layer. The standard deviation of torsion decreases with distance from the wall. The torsion-to-curvature ratio reveals the intensity of the out-of-plane motion of the streamlines relative to their in-plane bending. The joint PDF of curvature with velocity magnitude supports the notion that large curvature values correspond to the region near a stagnation point. Furthermore, the joint PDF results between curvature components reveal the orientation of the streamlines at different wall-normal locations. Overall, the curvature and torsion statistics examined thus far point to intriguing correlations with the four-layer structure associated with the known structure of the vorticity field in turbulent wall flows.
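The curvature and torsion follow from the Frenet-Serret relations applied to the streamline derivatives. A minimal sketch, assuming the first three derivatives of the streamline with respect to its parameter are available (in practice these are built from the velocity field and its gradients):

```python
import numpy as np

def curvature_torsion(r1, r2, r3):
    """Curvature kappa and torsion tau of a space curve from its first,
    second and third parametric derivatives (Frenet-Serret relations):
    kappa = |r' x r''| / |r'|^3,  tau = (r' x r'') . r''' / |r' x r''|^2."""
    c = np.cross(r1, r2)
    kappa = np.linalg.norm(c) / np.linalg.norm(r1) ** 3
    tau = np.dot(c, r3) / np.dot(c, c)
    return kappa, tau
```

As a sanity check, for the helix r(t) = (cos t, sin t, t) these formulas give the known constant values kappa = tau = 1/2.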
Methods for profiling heterogeneous sequencing data
Metagenomics, which utilises high-throughput DNA sequencing, is widely applied to study bacteria and viruses and their effects on their host environments. Metagenomics involves the collective sequencing of the genetic material of the species in an environmental sample, subsequently requiring robust methods to elucidate the characteristics of the species in the sample from the heterogeneous data. A key step in learning the taxonomic diversity of a metagenomic sample is binning. Binning refers to grouping the nucleotide sequences belonging to an individual or closely related species. Identification of appropriate features and machine learning methods is essential for binning a metagenome of many unknown genomes. A significant challenge in binning metagenomic sequences is to bin a sample of closely related species. The thesis addresses this challenge and proposes a new two-tiered workflow called Coverage and composition based binning of Metagenomes (CoMet) for binning assembled sequences (contigs) of a metagenomic sample. It is demonstrated that a combination of features, coupled with appropriate unsupervised learning methods, can improve the precision of binning while enabling the characterization of more species in a metagenome of species with similar genetic variants. Species richness is a key species diversity measure which corresponds to the number of species in an environmental sample. Estimating the species richness of a metagenome of viruses (i.e. a virome) based on reference data is challenging because of the limited amount of viral sequence data available in reference databases. A limitation of the methods that do not rely on reference sequence data for estimating species richness is the assumption of equal genome length for all the species in the sample. The thesis addresses this limitation by proposing a method to estimate species richness from a virome while accounting for the variability of the genome lengths of the species in the sample.
The proposed method enables inference of the genome length distribution from the metagenomic sequence data in addition to estimating species richness. RNA-Seq refers to a set of techniques that enable effective study of the transcriptome. One application of RNA-Seq is differential transcript usage (DTU) analysis, which infers differences in the expression of multiple transcripts (isoforms) of a gene across conditions from the sequencing data generated in an experiment. A key step in RNA-Seq data analysis is aligning the sequence reads to a reference sequence. SuperTranscripts are an alternative reference representation proposed mainly for analysing organisms with missing or incomplete reference sequences. The thesis explores the use of superTranscripts to test for DTU in organisms with good reference sequences and annotations. Three definitions of counting bins based on superTranscripts, which are then used to infer DTU in genes, are considered. Results with simulated fruit fly and human data demonstrate that superTranscripts enable DTU analysis in genes with better control of the False Discovery Rate (FDR) than standard methods, while not requiring prior estimation of isoform abundances. Analysis of real data demonstrates the effectiveness of using superTranscripts to visualise DTU in genes.
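As an illustration of the coverage-and-composition idea behind workflows such as CoMet, the sketch below clusters contigs on a combined feature vector of k-mer composition and standardised read coverage. It is a minimal stand-in, not the CoMet implementation: the 2-mer features, the 0.5 coverage weight and the tiny two-bin k-means routine are all simplifying assumptions made for this example.

```python
import numpy as np
from itertools import product

def kmer_freqs(seq, k=2):
    # normalised k-mer frequency vector (the composition feature)
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    idx = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in idx:
            v[idx[km]] += 1
    return v / max(v.sum(), 1)

def bin_contigs(seqs, coverages, iters=50):
    # combine composition and standardised coverage into one feature matrix
    X = np.array([kmer_freqs(s) for s in seqs])
    cov = np.asarray(coverages, float).reshape(-1, 1)
    cov = (cov - cov.mean()) / (cov.std() + 1e-9)
    X = np.hstack([X, 0.5 * cov])          # 0.5: assumed coverage weight
    # deterministic two-bin k-means: seed with the first contig and the
    # contig farthest from it, then iterate assign/update
    far = np.argmax(((X - X[0]) ** 2).sum(axis=1))
    centers = np.stack([X[0], X[far]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(2):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Contigs from an AT-rich, low-coverage genome and a GC-rich, high-coverage genome then fall into separate bins, even though neither feature alone is used; combining the two feature types is the essence of the two-tiered approach.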
Robust Object Manipulation for Fully-Actuated Robotic Hands
Object manipulation is the ability to rotate and translate an object held within a grasp. Humans exploit this ability to use tools and interact with the environment effectively. Over the past decades, robotics research has worked to translate object manipulation capabilities to robotic hands, with applications including autonomous manipulation, teleoperation in extreme environments, and prosthetics. Despite these advancements, robotic hand research has not yet progressed to handle the uncertainties found in the real world. Many existing grasp control methods for robotic hands require a priori information and high-fidelity sensors that are typically restricted to laboratory settings. The objective of this thesis is to develop robust means of object manipulation for robotic hands. The thesis focuses on the concept of tactile-based blind grasping to address robustness concerns in real-world applications. In tactile-based blind grasping, the robotic hand has access only to proprioceptive (joint angle) and tactile measurements; no a priori information about the object is known. This reflects real-world applications, such as prosthetics, where disturbances in the form of uncertain object models are part of everyday use. In this dissertation, novel object manipulation control methods are developed for robotic hands in tactile-based blind grasping. The first method ensures stability of the hand-object system about a desired object pose despite uncertain object weight, shape, center of mass, and contact locations. The second method extends the first, additionally ensuring that the contact points do not slip during the manipulation motion. The final method addresses all grasp conditions that must be satisfied, including slip, to ensure the grasp does not fail during manipulation. This final method is applicable not only to the control methods presented here, but to most manipulation controllers developed in the literature.
The proposed controllers are presented with associated stability guarantees and validated in simulation and hardware.
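The core robustness requirement, stabilising an object whose weight the controller never sees, can be illustrated with a deliberately simple one-degree-of-freedom analogue. The sketch below is not one of the thesis's controllers: it is a generic PID loop with assumed gains, showing only that integral action can absorb an unknown gravity load, a small hint at why formal stability guarantees under model uncertainty matter in the far harder multi-contact case.

```python
def settle_1dof(mass, target=0.1, dt=1e-3, horizon=5.0,
                kp=40.0, ki=60.0, kd=12.0):
    """Drive a held object (1 DOF, vertical) to `target` height.
    The controller never reads `mass`, mimicking the uncertainty of
    tactile-based blind grasping; kp/ki/kd are assumed toy gains."""
    g = 9.81
    x = v = integ = 0.0
    for _ in range(int(horizon / dt)):
        err = target - x
        integ += err * dt
        u = kp * err + ki * integ - kd * v   # PID force, no gravity feedforward
        a = (u - mass * g) / mass            # Newton: m*a = u - m*g
        v += a * dt
        x += v * dt
    return x
```

For masses across a reasonable range the integral term rejects the unmodelled weight and the position settles at the setpoint; the thesis's contribution is to provide such guarantees for the full hand-object system, including slip conditions, rather than for a scalar toy.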
Dynamic decision making within spatially-explicit systems subject to environmental uncertainty
Dynamic decision making under uncertainty provides managers with tools to make better-informed decisions that improve objectives such as profit, network throughput, scheduling efficiency, and resilience to extreme events. However, many systems both impact and are impacted by their surrounding environment, and businesses and governments must account for these interactions. This necessitates models that can dynamically control systems over time while accounting for environmental effects. Dynamic programming (DP) is a popular method for finding optimal solutions to multi-stage decision problems. It uses backward induction to recursively compute the effect of decisions at earlier stages on future outcomes, and therefore on the objective function. This principle has been extended to systems subject to uncertainty via ‘stochastic dynamic programming’ (SDP). SDP is similar to ‘stochastic programming’ in that it takes into account a range of possible future outcomes. In contrast to classical DP, SDP produces a decision policy rather than a fixed sequence of decisions over time, allowing decision makers to adapt optimally as uncertain variables are revealed. However, these approaches suffer from the ‘curse of dimensionality’ and can usually only tractably handle systems with a small number of states and controls, which limits their application in real-world scenarios. This is exacerbated by the fact that many environmental systems exhibit spatial variability, further adding to the dimensionality of the problem. Techniques such as ‘approximate dynamic programming’ (ADP) address this problem by introducing policy approximations that map the current system state and possible decisions to expected outcomes (Powell, 2014). Notable progress has been made in applications such as financial and real options, vehicle routing, and energy distribution.
This thesis builds upon these developments to evaluate flexibility in two application areas where it has not previously been used: road design through ecologically sensitive areas and dynamic relocation of aircraft to fight wildfires. In the process of analysing decisions, the thesis also develops application-specific approaches for dealing with high dimensionality in states and controls, as well as for nesting the evaluation approach within or around other optimisation techniques. The thesis first explores the benefits of explicitly incorporating animal movement and mortality models into optimal road path design with traffic flow. This contrasts with existing approaches that either ignore regions containing vulnerable species or avoid habitat completely. This work is then extended to a method for finding high-value roads that also accounts for the fact that road traffic can be optimally re-routed dynamically over time. The approaches introduced rely on several computational techniques that improve tractability: surrogate functions that remove the need to solve a stochastic dynamic program for every candidate road in the road design algorithm, and a novel state reduction technique. Finally, the thesis extends these techniques to managing aerial resources for fighting wildfires. It explores the benefit of dynamic decision making through two approaches: Model Predictive Control and SDP. Through these approaches, the thesis shows that expected fire damage over a day may be reduced by allowing aircraft to be dynamically relocated.
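To make the backward-induction recursion concrete, the sketch below solves a small finite-horizon stochastic dynamic program over a randomly generated Markov decision process. It is purely illustrative (the transition and reward arrays are synthetic, not a road-design or wildfire model), but it shows both the backup and why state-space size drives the cost: each stage sweeps every state-action-successor triple.

```python
import numpy as np

def sdp_backward_induction(P, R, horizon):
    """P[a, s, s2]: transition probabilities under action a;
    R[s, a]: immediate reward. Returns the optimal policy (one action
    per stage and state) and the stage-0 value function."""
    n_states = R.shape[0]
    V = np.zeros(n_states)                        # terminal value
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        # Q[s, a] = R[s, a] + E[V(s')] -- one backup per stage
        Q = R + np.einsum('asp,p->sa', P, V)
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V

# synthetic MDP: 5 states, 2 actions (assumed sizes for the example)
rng = np.random.default_rng(0)
P = rng.random((2, 5, 5))
P /= P.sum(axis=2, keepdims=True)                 # rows sum to 1
R = rng.random((5, 2))
policy, V = sdp_backward_induction(P, R, horizon=4)
```

Because SDP returns a policy, an action for every stage and state, the decision maker can adapt as the uncertain state is revealed. Each backup costs on the order of |S|² · |A| operations, which is the curse of dimensionality that the surrogate-function and state-reduction techniques above are designed to mitigate.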
Sensor and actuator selection for feedback control of fluid flows
The present thesis concerns linear estimation and control for two fluid flows, with a particular focus on the placement of sensors and actuators. In the first part of the thesis, we study the complex Ginzburg-Landau equation, a simple model for spatially developing flows such as jets, wakes and cavities. (This equation can be seen as a low-dimensional substitute for the Navier-Stokes equations.) The specific focus is on the extent to which estimation and control are (i) fundamentally difficult and (ii) limited by having only a single sensor and a single actuator. To answer these questions, we study three problems. First, we consider the optimal estimation problem, in which a single sensor is used to estimate the entire flow field (without any control). Second, we consider the full-information control problem, in which the whole flow field is known but only a single actuator is available for control. Third, we consider the overall input-output control problem, in which only a single sensor is available for measurement and only a single actuator is available for control. By considering optimal sensor placement, optimal actuator placement, or both, while varying the stability of the system, we make fundamental placement trade-offs clear. We discuss the implications for effective feedback control with a single sensor and a single actuator and compare the results to previous placement studies. In the second part of the thesis, we consider an incompressible turbulent channel flow at a friction Reynolds number of Re$_\tau = 2000$. A linear Navier-Stokes operator is formed about the turbulent mean and augmented with an eddy viscosity. Velocity perturbations are then generated by stochastically forcing this linear Navier-Stokes operator. The objective is to estimate and control these perturbations.
Estimation and control perform best for the largest scales, which (i) are high in energy when stochastically forced, (ii) exhibit large transient growth and (iii) are coherent over large wall-normal distances. We determine the sensor and actuator locations for which estimation and control are most effective by considering two arrangements: (i) placing them at the wall; and (ii) placing them some distance off the wall. Finally, it is shown that a control arrangement with a well-placed sensor and actuator performs comparably to either measuring the flow everywhere (while actuating it at a single wall height) or actuating it everywhere (while measuring it at a single wall height). In this way, we gain insight, at low computational cost, into how specific scales of turbulence are most effectively estimated and controlled.
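In the linear setting, the full-information control problem described above (whole state known, one actuator) is a linear-quadratic regulator problem. The sketch below is a generic illustration on an assumed two-state discrete-time system, not a discretisation of the Ginzburg-Landau or channel-flow operators: a fixed-point iteration on the discrete Riccati equation yields a gain that stabilises an unstable mode reachable only through a weak coupling to the actuated state, loosely mirroring how actuator placement constrains control.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    # value iteration on the discrete algebraic Riccati equation
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)   # feedback gain
        P = Q + A.T @ P @ (A - B @ K)               # Riccati backup
    return K

# assumed toy system: unstable first state, actuator enters the second,
# and the two are linked only by the weak 0.1 coupling term
A = np.array([[1.05, 0.1],
              [0.0, 0.97]])
B = np.array([[0.0],
              [1.0]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
rho = max(abs(np.linalg.eigvals(A - B @ K)))        # closed-loop spectral radius
```

Open loop, the spectral radius is 1.05 (unstable); with the computed gain it drops below 1. Setting the 0.1 coupling to zero makes the unstable mode uncontrollable, so no gain can stabilise it, the simplest caricature of a badly placed actuator.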