Mechanical Engineering - Theses
Online Personalisation of Human-Prosthetic Interfaces
Upper-limb loss affects over 541,000 people in the US, and over 3,500 new amputations are reported each year in countries such as Italy and the UK. The daily lives of people living with upper-limb loss are severely impacted, as the arm is a human's principal means of interaction with the environment. Moreover, the limitations of current human-prosthetic interfaces mean that prosthesis users rely on compensatory motion to achieve activities of daily living, which may result in overuse injuries. This is because prosthesis users can only control the degrees of freedom of the prosthesis sequentially. To address this challenge, the prosthetics community has investigated motion-based human-prosthetic interfaces, which use the motion of the residual limb to determine the motion of the prosthesis. Typically, this relationship between the residual limb and the prosthesis is established from the motion of able-bodied individuals. However, applying it to prosthesis users has been a challenge due to individual differences in motor behaviour and amputation physiology. It has therefore been identified in the literature that kinematic synergy-based human-prosthetic interfaces (HPIs) need to be personalised to their users. The scope of the research presented in this thesis is to provide a framework for autonomously personalising human-prosthetic interfaces. The proposed framework is based on a data-driven optimisation approach, and the contributions of this thesis are as follows. First, the feasibility of using online optimisation methods in motion-based human-prosthetic interfaces is demonstrated experimentally. Second, the features of motor preference and motor adaptation in human motor behaviour, which affect the performance of a task with a motion-based prosthesis, are experimentally observed and characterised in a grey-box model.
Third, an online personalisation algorithm for human-prosthetic interfaces was developed based on the Fast Extremum Seeking algorithm. The algorithm uses the grey-box model of human motor preference and adaptation to inform the design of its components. An alternative model-based method for motion-based human-prosthetic interface personalisation is also proposed, in which user-specific kinematic information is employed. This novel "task-space synergy" incorporates task information in the formulation of kinematic synergy-based human-prosthetic interfaces. The method uses desired hand-path information, a kinematic model of the human-prosthesis arm, and the motion of the residual limb to determine the motion of the prosthesis joints.
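The online-optimisation idea underlying this personalisation can be illustrated with a minimal perturbation-based extremum seeking loop. This is a generic textbook sketch, not the thesis's Fast Extremum Seeking algorithm or its grey-box model: the scalar interface parameter, the quadratic task cost and all tuning constants below are hypothetical.

```python
import math

def extremum_seeking(cost, theta0, a=0.2, omega=2.0, gain=1.0,
                     dt=0.05, steps=5000):
    """Perturbation-based extremum seeking: dither the parameter,
    demodulate the measured cost, and descend the estimated gradient."""
    theta = theta0
    j_bar = cost(theta0)                  # slow estimate of the DC cost level
    for k in range(steps):
        t = k * dt
        dither = math.sin(omega * t)
        j = cost(theta + a * dither)      # measured task cost (with dither)
        j_bar += 0.05 * (j - j_bar)       # washout filter removes the DC part
        grad_est = (j - j_bar) * dither   # demodulation ~ (a/2) * dJ/dtheta
        theta -= gain * grad_est * dt     # gradient-descent update
    return theta

# Hypothetical scalar "interface parameter" whose ideal value is 1.5;
# the loop finds it using only cost measurements, no model of the user
best = extremum_seeking(lambda th: (th - 1.5) ** 2, theta0=0.0)
```

Because the update uses only measured cost values, the same loop structure works when the "cost" is a human-in-the-loop task performance metric, which is what makes extremum seeking attractive for online personalisation.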
A Study of Compressed Natural Gas Fuelling in a Downsized and Boosted, Multi-Cylinder, Direct Injection Spark-Ignition Engine
Improvement in the fuel economy of the internal combustion engine is imperative given increasingly strict fleet-average carbon dioxide (CO2) emissions limits. Compressed natural gas (CNG) has emerged as a promising automotive fuel to meet this demand due to its favorable fuel properties. However, today's CNG-fuelled engines with port fuel injection technology have inferior peak performance compared to gasoline direct injection (GDI) engines, primarily due to lower volumetric efficiency. CNG direct injection technology has the potential to overcome these limitations. Therefore, this study investigates the performance of a 4-cylinder, downsized and boosted, spark-ignited production engine under directly injected compressed natural gas (DI CNG) and GDI operation. This work first examines the impact of various injection timing strategies with DI CNG operation; start of injection (SOI) timings in the range of 240 to 280 deg bTDCFire are shown to be optimal for engine performance. A comparison of part-load performance with CNG and gasoline then shows similar fuel efficiency under stoichiometric conditions. DI CNG operation with fuel injection after intake valve closure (IVC) also achieves the same peak torque as GDI operation at low engine speeds, with substantial fuel economy benefits. The impact of charge dilution with both exhaust gas recirculation (EGR) and excess air is then examined for DI CNG and GDI operation. For equivalent dilution levels with CNG fuelling, air dilution demonstrates higher brake thermal efficiency (BTE) than stoichiometric EGR dilution. DI CNG and GDI exhibit similar engine performance at a given level of air dilution, although higher BTE is observed with GDI than with DI CNG at a similar EGR rate. Premixed turbulent combustion simulations are then performed for both fuels under stoichiometric, EGR-diluted, and lean conditions.
The flame quenching theory of Bradley is then used to establish a relationship between the modelled onset of flame quenching and increased coefficient of variation (COV) of indicated mean effective pressure (IMEP) and unburned hydrocarbon (UHC) emissions for both fuels and both charge dilution strategies. This provides physical insight into the role of charge dilution in combustion and engine performance. Finally, this work shows that a DI CNG engine utilizing stoichiometric, EGR-diluted, and lean-burn operation consistently demonstrates lower engine-out total CO2-equivalent emissions than the baseline stoichiometric GDI engine over a range of operating conditions. Furthermore, several forms of advanced DI CNG engine operation are examined, including internal EGR, multi-pulse injection, and the combined use of air and EGR dilution. The latter is shown to avoid high engine-out NOx emissions at some operating conditions, and may be superior to the sole use of air or EGR dilution.
Hierarchical Economic Model Predictive Control of an Isolated Microgrid
Isolated microgrids are small power systems which are electrically isolated from the main electricity grid. They have existed for many decades in mine sites, remote communities and other locations where connection to major electricity networks is not feasible. Historically, fossil fuel generators (typically diesel) have been the primary source of power in these systems. However, in recent years, solar PV and wind, coupled with energy storage, have been included in many isolated microgrid designs. While the inclusion of these technologies has the potential to significantly reduce both the cost of supplying electricity and greenhouse gas emissions, improved energy management and control strategies are required to realise their full potential. Economic model predictive control (EMPC) is one method that is well suited to isolated microgrid control. EMPC-based microgrid energy management systems (EMSs) have been shown to provide performance improvements relative to conventional methods. However, no centralised EMPC-based primary control algorithms have previously been proposed for isolated microgrids. Such an approach has the potential to further reduce operating costs by responding to transient events, such as solar array shading, in a more economically efficient way than existing methods. This thesis therefore investigates the development and application of a two-layer EMPC framework for isolated microgrids, in which both the primary control layer and the energy management system utilise EMPC. Any EMPC algorithm suitable for microgrid primary control should provide guarantees of closed-loop stability, since the primary control layer is responsible for managing the dynamic behaviour of the microgrid. However, existing EMPC formulations are not well suited to the microgrid primary control problem.
Therefore, a novel EMPC formulation suitable for a class of problems that includes microgrid primary control is developed in this thesis and proven to guarantee closed-loop stability. The developed control framework is experimentally demonstrated as a controller for an isolated microgrid using a test-bed designed and manufactured as part of this work. The test-bed replicates a typical off-grid residential dwelling and comprises an AC-coupled lead-acid battery bank, a gasoline-fuelled generator, a simulated solar PV system and a dynamic electrical load. A detailed model of the test-bed is developed and experimentally validated, and non-dimensional time-scale analysis is used to simplify the model for use in the proposed two-layer EMPC-based microgrid controller. The controller is shown to successfully facilitate the continuous supply of electricity and to ensure all operational constraints are satisfied for a range of realistic solar and load conditions. The developed controller is compared to two alternative algorithms: one typical of microgrids deployed in the field, and another representative of current state-of-the-art methods, which optimise performance only in the EMS and not in the primary control layer. Control system performance is experimentally compared over both a 5-minute and a 10-hour period, while the experimentally validated model is used to compare performance over a full year. The results in this thesis indicate that the proposed two-layer EMPC algorithm can reduce operating costs and CO2 emissions by 5-10% relative to conventional, rule-based controllers, and by 10-15% if improved solar and demand forecasts are available. Most of these benefits are realised in the EMS layer, since the proposed EMPC algorithm achieved reductions of only up to 5% compared with current state-of-the-art methods.
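The receding-horizon idea at the heart of EMPC can be sketched in a few lines. The toy economic dispatch below is not the thesis's formulation: it simply enumerates generator setpoints over a short horizon, lets a battery absorb the power balance, applies only the first move and then re-plans. The fuel price, capacities and one-hour time steps are all hypothetical.

```python
from itertools import product

FUEL_COST = 0.5                      # $/kWh of generator energy (hypothetical)
GEN_LEVELS = [0.0, 1.0, 2.0, 3.0]    # feasible generator setpoints, kW
CAP = 5.0                            # battery capacity, kWh
DT = 1.0                             # hours per step
HORIZON = 3                          # prediction horizon, steps

def plan(soc, solar, load):
    """Cheapest feasible generator schedule over the horizon; the
    battery state of charge absorbs the residual power balance."""
    best_cost, best_seq = float("inf"), None
    for seq in product(GEN_LEVELS, repeat=HORIZON):
        s, cost, feasible = soc, 0.0, True
        for g, pv, demand in zip(seq, solar, load):
            s += (g + pv - demand) * DT      # battery stores the surplus
            if not 0.0 <= s <= CAP:
                feasible = False
                break
            cost += FUEL_COST * g * DT       # economic (fuel) cost
        if feasible and cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

def run(soc, solar, load):
    """Receding horizon: apply only the first move, then re-plan."""
    applied = []
    for t in range(len(load) - HORIZON + 1):
        g = plan(soc, solar[t:t + HORIZON], load[t:t + HORIZON])[0]
        soc += (g + solar[t] - load[t]) * DT
        applied.append(g)
    return applied
```

With a nearly flat battery and no solar, this schedules the generator just-in-time rather than at full power; a real EMPC layer makes the same kind of economic trade-off against fuel curves, forecasts and dynamic constraints.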
Optimization of Sustainable Residential Heating and Cooling Systems
Increased attention has been given to energy-efficient, renewable energy systems for Heating, Ventilation and Air Conditioning (HVAC) in buildings, as these often account for 40% or more of total building energy consumption. Among them, Ground Source Heat Pumps (GSHPs) are becoming increasingly attractive due to their reliability, low environmental impact and high efficiency compared to conventional HVAC systems. However, their uptake has been limited by the high initial cost of drilling the boreholes used to exchange heat with the ground via HDPE (high-density polyethylene) pipes. In addition, their performance may decline over long operating horizons if the annual heating and cooling loads are severely unbalanced. Hybrid ground source heat pump systems have therefore been proposed as an effective alternative that can mitigate these challenges and improve overall system performance. Hybrid systems offset some percentage of the demand with a supplemental heat source or sink. Solar thermal collectors or conventional resistive heaters can be used as supplementary heat sources, forming a hybrid ground source heat pump system for heating-dominant climates. However, finding optimal design parameters for these systems is crucial to minimize the total life-cycle cost and improve overall system performance. In addition, due to their high initial cost, it is also important to conduct a feasibility study considering the full life-cycle cost in comparison to conventional systems. Furthermore, the effect of local climatic conditions and economic structures on system design and performance needs to be evaluated and understood in order to select the most economical HVAC system for a given geographical location. Implementing an intelligent control strategy can further improve system performance by delivering the demanded energy efficiently.
A significant percentage of the operational cost can be avoided by integrating peak and off-peak electricity prices into the controller. In addition, studies have shown that substantial cost and energy savings can be achieved by incorporating weather and occupancy predictions into the controller. However, due to the uncertain nature of these variables, an effective controller must consider the uncertainties in the system dynamics. This thesis explores optimisation of the system design for heating-dominant climates while assessing its feasibility relative to conventional systems. The results suggest that optimally designed hybrid GSHP systems can achieve significant cost savings (up to 32%) compared to conventional heating and cooling systems. In addition, efficiency improvements in the operation of hybrid GSHP systems are investigated to overcome the barriers associated with these systems and to make them a cost-effective, attractive technology for building heating and cooling. The study demonstrates considerable operational cost reductions from incorporating uncertainty into the HVAC controller.
The significance of research and motor octane numbers to anti-knock performance and fuel efficiency of modern spark-ignition engines
Improving fuel efficiency and reducing CO2 emissions are the primary targets of spark-ignition (SI) engine development. Realizing these targets is limited by an abnormal combustion phenomenon known as engine knock, which depends on both the fuel's anti-knock properties and the engine's thermodynamic conditions. A fuel's knock resistance is conventionally quantified by the Research Octane Number (RON) and Motor Octane Number (MON), which are measured using the Cooperative Fuel Research (CFR) engine under standardized conditions. Whereas higher RON and MON generally mean higher knock resistance, the relevance of the two octane numbers to knock resistance in modern SI engines has changed, largely due to in-cylinder conditions that differ from those in CFR engines. The Octane Index, OI = (1-K)*RON + K*MON, has been found to be a more suitable indicator of knock resistance in modern engines. The K factor in the OI model weights the relative contributions of RON and MON to the fuel's actual knock resistance and depends primarily on engine design and operating conditions. Quantifying the K factor is therefore of central importance to understanding knock in modern SI engines. This work investigates the significance of RON and MON to modern engine combustion using the Octane Index model. It first evaluates the methods for determining OI and K reported in the literature and identifies that the method that matches the anti-knock performance of primary reference fuels (PRFs) with that of the fuel of interest produces accurate results. This method does not require specially blended fuel sets or assume arbitrary correlations between OI and knock-limited performance. A novel fuel-blending system, capable of supplying PRF mixtures of varying octane number (0 to 100) to the engine on the fly, is developed in this work to implement this method. K values are then determined over the operating map of a 4-cylinder 2L Ford EcoBoost engine with a standard EPA certification gasoline (RON 91.6).
The K values vary widely (-1 to 1.1) and are negative at most knock-limited conditions tested. The experimental data are further analyzed with GT-Power simulations to investigate the relation between in-cylinder end-gas states and K values. This reveals that the variation of K with engine operating conditions is primarily driven by the unburned gas temperature in the later stage of combustion, just before the onset of autoignition. The engine K-maps are then applied to determine the K-distributions in several standard drive cycles, where the engine is fitted to a mid-sized passenger vehicle in conventional, full hybrid and plug-in hybrid powertrain configurations. For all drive cycles, a significant fraction of engine operating time and fuel consumption occurs at conditions of positive K. However, with the conventional powertrain, the knock-induced fuel efficiency losses primarily occur at conditions where K is near zero or negative. With deeper degrees of electrification, hybridized powertrains are more knock-limited, and the fuel efficiency losses due to knock mainly occur at conditions of more negative K. Further analysis is conducted to quantify the impact of RON and MON on the knock-limited fuel efficiency losses. For all drive cycles and powertrains studied, increasing RON has a strong positive effect on fuel efficiency over a drive cycle, while increasing MON has a neutral or modestly negative effect.
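The Octane Index relation is simple enough to evaluate directly. The sketch below just applies OI = (1 - K)*RON + K*MON; the RON of 91.6 is the certification gasoline value quoted above, while the MON of 83 is a made-up illustrative number, not data from this work.

```python
def octane_index(ron, mon, k):
    """OI = (1 - K)*RON + K*MON. With sensitivity S = RON - MON, this
    is OI = RON - K*S: negative K rewards high-sensitivity fuels."""
    return (1.0 - k) * ron + k * mon

ron, mon = 91.6, 83.0                   # MON value is hypothetical
oi_mon = octane_index(ron, mon, 1.0)    # K = 1 recovers MON
oi_ron = octane_index(ron, mon, 0.0)    # K = 0 recovers RON
oi_neg = octane_index(ron, mon, -0.5)   # negative K pushes OI above RON
```

The last line illustrates why the negative K values found here favour raising RON: at K < 0 a high-sensitivity fuel behaves as if it had an octane rating above its RON.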
Direct numerical simulation of flame-wall interaction and flame-cooling air interaction
The interaction of a flame with a relatively cold combustor wall, with or without cooling air jets, i.e. flame-wall interaction (FWI) and flame-cooling air interaction (FCAI), influences emissions and fuel consumption. In particular, with the current trend towards increasing power density in energy-producing systems, these phenomena become even more important in the new generation of modern gas turbines. As a result, a full understanding of FWI and FCAI and their impact on emissions is a topic of interest. In this thesis, a preheated, premixed methane/air flame is studied in the context of FWI and FCAI using direct numerical simulation (DNS). First, two-dimensional (2D) DNSs are performed to study the impact of unsteady, laminar flame-wall interaction on flame dynamics, wall heat transfer and near-wall CO emissions. The flame is excited by imposing velocity perturbations on the inlet flow at several forcing frequencies. The flame dynamics over a forcing cycle are investigated for low, intermediate and high forcing frequencies. The significance of low-activation-energy radical recombination reactions near the wall is also analysed; these reactions contribute about 50% of the overall heat release rate at the wall at the quenching instant. An investigation of the near-wall CO transport mechanisms reveals that near-wall CO transport close to the flame tip is dominated by convection and diffusion. Second, a parametric study of FCAI is performed using 2D DNSs of forced laminar flames. The effects of injecting coolant jets through the wall on the flame dynamics, the near-wall CO and the wall heat flux are explored. The forcing frequency, the coolant mass flux, the position of the cooling hole and the coolant type are varied in this analysis. Several factors, including dilution of the flame tip by the coolant and variations in the trajectory of the cooling jet, are found to affect the flame and CO behaviours.
Furthermore, a modelling framework is proposed to predict near-wall CO due to FCAI based on one-dimensional, unstrained, freely propagating laminar flame simulations. Third, analyses of FWI and FCAI under turbulent flow conditions are performed in a three-dimensional computational domain. Under FWI conditions, vorticity-induced flame structures are found to affect the wall heat flux and CO at the wall. Under both FWI and FCAI conditions, the CO characteristics are investigated using the thermochemical states of CO. Finally, the model proposed to predict near-wall CO due to FCAI in the 2D flames is evaluated under turbulent flow conditions and shows promising results.
Healthy patellofemoral kinematics and contact forces during functional activities
A better understanding of normal knee function is critical to the treatment of knee disorders. Limited data are available on knee biomechanics during functional activities such as walking, particularly in relation to the articulation of the patella. Three aims were formulated to address this gap in knowledge: 1) analyse the kinematics of the patellofemoral joint during functional activities, 2) determine the regions of cartilage contact in the patellofemoral and tibiofemoral joints and their relationship to cartilage thickness, and 3) calculate the distribution of medial-lateral contact loads in the patellofemoral and tibiofemoral joints during level walking. These aims were achieved by first accurately measuring the three-dimensional kinematics of the patellofemoral joint as healthy young people performed six activities: level walking, downhill walking, stair descent, stair ascent, open-chain knee flexion, and standing. These data were examined to identify notable kinematic characteristics of the patella during ambulatory activities and to determine how the motion of the patella and the tibiofemoral flexion angle are linked (i.e., coupled). Cartilage models were created of each participant's knee in order to determine the region of cartilage contact for each activity performed, and to identify correlations between cartilage contact and cartilage thickness. Finally, musculoskeletal models with full six-degree-of-freedom patellofemoral and tibiofemoral joints were created, used to calculate the medial-lateral contact loads at the knee during level walking, and validated against the measured kinematic data. These procedures revealed important findings. Patellar flexion and anterior translation were coupled with, and linearly related to, the tibiofemoral flexion angle.
Medial shift and superior translation were likewise coupled to tibiofemoral flexion, and both displayed notable characteristics across all ambulatory activities: the patella shifted laterally at low tibiofemoral flexion angles and underwent rapid superior translation just prior to heel strike. Based on the activities tested here, the patellofemoral joint can effectively be modelled as a one-degree-of-freedom joint. The centroid of cartilage contact for both joints appears to be determined by the tibiofemoral flexion angle, and hence by geometry, rather than by activity. Patellofemoral contact was concentrated on the lateral side of both the patella and the femur. In each pair of contacting regions within the knee, one side of the pair exhibited a positive relationship between cartilage thickness and contact (i.e., the medial and lateral tibial plateaus and the patella), while the other exhibited a weak or non-existent relationship (i.e., the medial and lateral femoral condyles in the tibiofemoral joint and the femur in the patellofemoral joint). The patellofemoral joint displayed two peaks in contact force during level walking, one in early stance and one in swing phase, both at approximately 0.55 times body weight. Most of the patellofemoral contact force was transmitted through the lateral facet of the patella. The posterior component of the hamstring muscle force contributed to the load transmitted to the patellar facets. These findings may assist with the diagnosis and treatment of many common knee disorders and will provide a useful source of information for future investigations into the knee.
Assembly line sequencing for product-mix
This thesis is concerned with the sequencing of various models of a product when these are manufactured on one assembly line using product-mix. A simplified model of the assembly line is postulated. Four heuristic algorithms are developed which aim at minimizing assembly line length while avoiding operator interference. Two of these algorithms are used in a factorial experiment to determine the relationship between assembly line length and five factors. These factors are characteristics of the production requirement and workload balance. From the experimental results, empirical equations are developed which are a useful aid in the design of new assembly lines or the balancing of existing assembly lines. The experimental results are also analyzed to determine a range of sequencing problems for which near optimal sequences can be expected using the two algorithms.
A Framework for Multidimensional Analysis and Development of Numerical Schemes
Partial differential equations are found throughout engineering and the sciences. Under complex initial and boundary conditions, most of these equations do not have analytical solutions and therefore require numerical methods. In the context of this thesis, the goal is to examine the governing equations of fluid mechanics (Euler and Navier-Stokes), which require both spatial and temporal discretization. Under the effects of numerical differencing, the numerical solution is subject to both dispersion and dissipation error. These errors can be identified and analyzed through spectral analysis methods. The analysis of numerical schemes under a coupled spatial-temporal framework in one-dimensional wavespace is well understood. However, the extension of these methods to multidimensional wavespace, and the spectral properties of a hybrid finite difference/Fourier spectral spatial discretization method in multidimensional space, are not well understood. Furthermore, the extension of this multidimensional analysis framework to non-linear shock-capturing schemes has not been attempted before. This dissertation introduces a generic method for the spectral analysis of linear and non-linear finite difference schemes in multidimensional wavenumber space. The aim is to understand the properties of the coupled system for a series of representative spatial and temporal schemes. Theoretical predictions are then compared with numerical solutions based on model equations such as the advection, linearized Euler and linearized Navier-Stokes equations. Finally, this framework is used to develop a spectrally optimized hybrid shock-capturing scheme which switches between a linear and a non-linear scheme. Various canonical numerical examples were conducted in order to compare the spectral properties of the new scheme with those of existing numerical schemes.
For the one-dimensional linearized Euler equation, it was shown that the dispersion relation belonging to the largest eigenvalue provides the limiting criterion for the stability limit as well as the onset of dispersion error. When the linear spectral analysis method is extended to two-dimensional wavespace, the dispersion and dissipation properties of the coupled schemes become a function of both the reduced wavenumber and the wave propagation angle. When the two-dimensional linear spectral analysis method is extended to the two-dimensional linearized compressible Navier-Stokes equations (LCNSE), viscous and acoustic effects are taken into account in addition to convection effects. The addition of the acoustic term to the dispersion relation couples the resolution characteristics such that the group velocities in either spatial direction become a function of the wavenumbers in both spatial directions. The two-dimensional spectral analysis method was then extended to non-linear finite difference schemes based on a quasi-linear assumption, in which the contributions of the harmonic modes (generated by the non-linear differentiation) are neglected during the calculation of the modified wavenumber of the spatial scheme. Using the semi-discretized dispersion relations of the two-dimensional advection and linearized Euler equations, the dispersion and dissipation properties of a non-linear scheme in two-dimensional wavespace can be quantified. Using this framework, a non-linear scheme, HYB-MDCD-TENO6, was developed based on the principle that the linear part of the scheme can be optimized for minimum dispersion and dissipation error, while the non-linear part of the scheme is activated only in the vicinity of sharp gradients.
Through a series of numerical experiments, it was found that the hybrid scheme optimized on the linearized Euler equation tends to give slightly better results than the one optimized on the advection equation in some of the numerical experiments. In all cases, the HYB-MDCD-TENO6 scheme was found to provide better resolution than the existing baseline TENO and WENO-JS schemes for the same grid size.
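The basic building block of such a spectral analysis, the modified wavenumber, is straightforward to compute. The sketch below is a generic one-dimensional example for standard second- and fourth-order central differences, not the thesis's multidimensional or non-linear machinery: applying a stencil to a Fourier mode yields a modified wavenumber k'h whose real part measures dispersion error and whose imaginary part measures dissipation.

```python
import cmath

def modified_wavenumber(kh, stencil):
    """k'h for du/dx ~ (1/h) * sum_j a_j * u_{i+j}, evaluated at the
    reduced wavenumber kh. Exact differentiation would return kh."""
    return -1j * sum(a * cmath.exp(1j * j * kh) for j, a in stencil.items())

CD2 = {-1: -1/2, 1: 1/2}                      # 2nd-order central difference
CD4 = {-2: 1/12, -1: -2/3, 1: 2/3, 2: -1/12}  # 4th-order central difference

kh = 1.0
err2 = abs(modified_wavenumber(kh, CD2).real - kh)  # dispersion error, 2nd order
err4 = abs(modified_wavenumber(kh, CD4).real - kh)  # smaller for the 4th order
```

Central stencils are antisymmetric, so the imaginary (dissipative) part vanishes and the error is purely dispersive; upwind-biased and shock-capturing stencils add a non-zero imaginary part, which is the kind of property the multidimensional framework in this work quantifies as a function of wavenumber and propagation angle.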
Optimisation in open-pit mine planning
Mining has played an important role in the growth of civilisation; migration and economic and industrial revolutions have been based on the availability of mineral resources. Open-pit mines are surface excavations created to extract valuable material located below the surface, and open-pit mining is commonly used for extracting near-surface metallic and non-metallic ore deposits. Large-scale ore deposits can benefit from the economies of scale offered by the open-pit mining process. Open-pit mines are excavated in phases, often referred to as pushbacks or cutbacks, which are mineable units designed to maximise the financial return from the mine. Strategic mine planning is the process that quantifies the economic value of a mining project over the life of the mine. It aims to answer three main questions regarding an ore deposit: 1) what portion of the orebody is both economically and technically feasible to extract; 2) when to extract that portion, i.e. what is the mining sequence and production schedule; and 3) where to process the extracted material. Due to the complexity of the decisions involved in mine planning, standard industry practice is to subdivide the main problem into interrelated sub-problems that have been studied in the literature as independent problems. Over the last 60 years, optimisation algorithms and computers have been used to assist the design of open-pit mines. However, existing models used to provide guidelines for open-pit designs avoid the complexity of modelling the practical and operational conditions inherent in the extraction process. Consequently, most of the output from those models requires significant manual intervention.
This thesis addresses a key problem in strategic open-pit mine planning: finding the set of pushbacks, their production schedule and the allocation of the material to appropriate destinations that together maximise the discounted cash flows while satisfying the practical and operational conditions of pushback design. Different mathematical techniques are explored to formally include these practical and operational conditions in mathematical models that can be solved to optimality.
Ion-specific and pore-size effects on electrochemical performance of graphene-based electrodes and machine learning-assisted device-level design
Electrochemical energy storage devices (EESDs), such as supercapacitors and batteries, are important players in the energy field, with the ability to store energy and supply continuous electrical power. A key challenge for EESDs lies in the trade-off between energy density and power delivery. The performance of EESDs depends mainly on the electrolyte and on the nanopore size of the electrodes. However, due to the highly complex interplay of electrolyte properties (such as ion type and concentration) and electrode properties (such as pore size, surface charge and electrode thickness), designing EESDs with high energy density and fast power delivery is very challenging. This thesis aims to address some of the challenging issues in this field. Taking advantage of nanoporous graphene-based electrodes with relatively simple and tunable structures, the research reported herein studies the effects of ion type and slit-pore size on electrochemical performance, and develops a new method for the efficient design of energy storage devices. This thesis presents three research works on energy storage. The first identifies ion-specific effects on electrical double layer (EDL) capacitance, especially co-ion effects on EDL performance. Through a comprehensive study of monovalent ions, such as H+, Li+, Na+, K+, Cs+, BMIM+ and Cl-, in 10 nm, 1 nm and 0.7 nm slit-pores, the results indicate that the intrinsic ion surface adsorption effect plays an important role in determining the EDL capacitance. The second investigates interfacial redox reactions under nanoconfinement, in which the electrochemical reaction rates of nanoconfined I- and Zn2+ ions were systematically investigated.
The results show that ions in a 1 nm slit-pore exhibit a higher electrochemical reaction rate than in a 10 nm slit-pore, which could provide valuable clues for establishing new electrochemical theories related to nanoconfinement. Lastly, machine learning is used to establish a comprehensive quantitative relationship between the capacitance and structure of supercapacitors, helping to accelerate the optimal design of graphene-based EESDs at the device level for practical applications. In summary, the discussion of ion-type effects on EDL capacitance under different degrees of nanoconfinement, and of slit-pore size effects on charge transfer across the interface in graphene-based electrodes, could improve our understanding of the charge storage mechanism of nanoporous electrodes. In addition, the successful demonstration of machine learning for efficient device-level design of supercapacitors could stimulate similar design ideas in other research fields, including electrocatalysis, capacitive deionization and nanofluidic devices.
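The capacitance-structure mapping can be illustrated with the simplest possible "machine learning" model, an ordinary least-squares fit. The sketch below regresses capacitance against a single structural variable using invented toy numbers; the thesis's actual models, features and data are not reproduced here.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Synthetic toy data: capacitance (F/g) vs. electrode thickness (um);
# values are illustrative only, not measurements from this work
thickness = [5.0, 10.0, 20.0, 40.0]
capacitance = [120.0, 115.0, 105.0, 85.0]
slope, intercept = fit_line(thickness, capacitance)
```

A real device-level design study would use many structural descriptors and a non-linear model, but the train-then-predict workflow is the same: fit the capacitance-structure relationship, then search the structural parameters for the best predicted device.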
Assessment of scanning-stereo-PIV techniques for turbulent flows
In this thesis, the performance of the scanning-stereo-particle image velocimetry (PIV) technique for three-dimensional (3D) measurement of turbulent flows is studied. Scanning-PIV is an increasingly popular tool for volumetric measurements owing to its ability to handle high particle seeding densities while using only two cameras. The 3D velocity field is computed from the scanning-PIV data using two different methods. In the first (referred to here as scan-stack), the standard (single-plane) stereoscopic-PIV (SPIV) technique is applied to the images at each scan position and the resulting planes are stacked together (e.g. Hori and Sakakibara, 2004; Partridge et al., 2019). In the second (referred to as scan-tomo), the images at all scan positions are used at once, and particle reconstruction techniques similar to those of tomographic PIV (TPIV) are employed (e.g. Lawson and Dawson, 2014). There is not enough evidence in the literature as to which of these two methods is optimal for a scanning-PIV configuration, especially since their processing algorithms are entirely different. Hence, in the present study, the performance of the two methods is examined extensively using numerical simulations of scanning-PIV, complemented by physical experiments. The results inform future experiments of the optimal scanning method for a chosen set of parameters, namely the camera frequency, volume scan-time, and depth of the measurement volume. Numerical simulations of scanning-PIV are conducted using synthetic particle images generated from direct numerical simulation (DNS) data (del Alamo et al., 2004) of a fully developed turbulent channel flow at a friction Reynolds number of 1000. The study covers 30 < Ls/ηk < 150 with a fixed spacing between successive scan positions, where Ls and ηk are the laser-sheet thickness and the Kolmogorov length scale, respectively.
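The scan-stack method described above can be sketched in outline: each scan position is evaluated independently with a planar SPIV algorithm, and the resulting two-dimensional, three-component (2D3C) fields are stacked along the scan direction to form a volume. The sketch below is a structural illustration only; the SPIV evaluation itself is represented by a placeholder, and the grid dimensions are assumed.

```python
import numpy as np

def spiv_plane(img_pair):
    """Placeholder for a standard single-plane stereoscopic-PIV evaluation
    at one scan position. A real implementation would cross-correlate the
    stereo image pair and reconstruct three velocity components; here a
    zero field of assumed size (ny x nx x 3) stands in."""
    ny, nx = 32, 48
    return np.zeros((ny, nx, 3))

def scan_stack(image_pairs_per_position):
    """Evaluate each scan position independently, then stack the planar
    2D3C fields along the depth (scan) axis to form a 3D3C volume."""
    planes = [spiv_plane(pair) for pair in image_pairs_per_position]
    return np.stack(planes, axis=0)  # shape: nz x ny x nx x 3

# Ten scan positions yield a volume with ten planes along the scan axis
volume = scan_stack([None] * 10)
```

The scan-tomo alternative would instead feed all scan positions into a single tomographic particle reconstruction, which is why the two pipelines are not directly interchangeable.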
Using the DNS as the reference input, error maps of the turbulent kinetic energy and divergence are generated for both methods. The maps reveal that scan-stack is more accurate for Ls/ηk < 65, while scan-tomo performs better for Ls/ηk > 100; the two methods are comparable in the intermediate range of Ls/ηk. A comparison based on the topological quantities of the velocity gradient and strain-rate tensors also follows the trends of the error maps. Furthermore, these observations from the numerical simulations are validated by conducting scanning-PIV experiments in a confined water tank with an oscillating grid generating nominally isotropic turbulence. The accuracy of the methods discussed above is also influenced by experimental conditions such as the alignment between the laser sheet and the calibration target. In this dissertation, an algorithm is proposed for correcting misalignment artefacts in the scan-tomo method. The algorithm is based on the correction principles used for (single-plane) SPIV and generates misalignment-free calibration images that can be used for the volume-calibration mapping. Existing misalignment-correction schemes for volumetric PIV rely on sophisticated concepts such as those used in particle tracking velocimetry (Wieneke, 2008, 2018) or TPIV (Lawson and Dawson, 2014). Since the proposed algorithm uses concepts from SPIV, it is simpler to implement than the existing methods. Using a test-case simulation, the corrected calibration image is compared with an independently generated misalignment-free image. The root-mean-square difference (0.26 pixels) in the marker positions between these two images indicates that the proposed scheme works effectively. Finally, the misalignment aspects are studied for (single-plane) SPIV as well.
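The 0.26-pixel figure quoted above is a root-mean-square distance between matched calibration-marker positions in the two images. A minimal sketch of that metric, assuming the markers have already been detected and matched as (x, y) pixel coordinates:

```python
import numpy as np

def rms_marker_difference(markers_a, markers_b):
    """Root-mean-square distance, in pixels, between matched calibration
    markers in two images. Inputs are (N, 2) arrays of (x, y) positions,
    assumed to be in corresponding order; detection/matching is outside
    the scope of this sketch."""
    d = np.asarray(markers_a, float) - np.asarray(markers_b, float)
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))
```

A value well below one pixel, as reported, indicates the corrected calibration image is nearly indistinguishable from the independently generated misalignment-free reference.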
This study focuses on quantifying the effects of misalignment on the statistics of wall-bounded turbulent flows, as understanding of SPIV errors for wall-turbulence is limited in the literature. The study is performed both numerically, using the turbulent channel flow data of del Alamo et al. (2004), and experimentally, by conducting SPIV measurements in a turbulent channel flow at a matched Reynolds number. The study reveals that, compared with streamwise-wall-normal plane measurements, the statistics of streamwise-spanwise plane measurements are more sensitive to misalignment, meaning that a misalignment correction is essential in this case. Furthermore, the wall-normal variance is found to be the most affected by misalignment, while the streamwise variance and the streamwise mean velocity are relatively insensitive.