Infrastructure Engineering - Theses

Now showing 1 - 10 of 164
  • Item
    Investigating the benefits of considering the payload spectra of freight vehicles on pavement costs based on weigh-in-motion data
    Ren, Jing (2017)
    Truck traffic is a crucial factor contributing to pavement damage. Urbanization and globalization promote higher levels of daily consumption of goods, thus increasing the derived demand for freight transport. In some countries, such as Australia, there is a trend towards using larger vehicles, which has raised road authorities' concerns about their effect on pavements, given the shortage of pavement maintenance and rehabilitation funding. It is therefore important to have a comprehensive understanding of the Australian road freight market and to optimize the allocation of freight among different types of trucks to reduce total pavement damage. The weigh-in-motion (WIM) system, which measures and records detailed information on vehicles operating on the road, was the data source for this study; the data were provided by the State Road Authority of Victoria (VicRoads). This thesis proposed a prototype filtering strategy for the WIM database to improve its accuracy. It also investigated the efficiency of freight transport by comparing the effects of six-axle semi-trailers and nine-axle B-doubles on pavement performance when carrying various payloads. Mathematical models were developed to help decision makers distribute the road freight task more efficiently and minimize the pavement damage induced by freight vehicles. A simplified pavement performance prediction model was used as a basis for determining future pavement maintenance and rehabilitation schedules and thus for comparing long-term pavement treatment costs under different traffic loading scenarios. The outcomes of the research showed that decreasing the percentage of empty trucks, changing the proportion of freight carried by B-doubles, and optimizing payload distributions would considerably reduce overall pavement damage. In addition, improving the allocation of freight among trucks would yield significant savings in pavement maintenance and rehabilitation costs over the pavement service life.
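The abstract does not give the thesis's damage model; as a rough illustration of how axle loads relate to pavement wear in studies of this kind, the sketch below applies the generalised fourth-power law to hypothetical axle-group loads for a six-axle semi-trailer and a nine-axle B-double and compares damage per payload tonne. All loads, reference loads and payloads are assumed values, not figures from the thesis.

```python
# Illustrative sketch only: the generalised fourth-power law relating axle
# loads to pavement wear, often used as a first approximation in studies
# like this one. All axle-group loads and reference loads are hypothetical.

def equivalent_standard_axles(axle_group_loads_t, reference_loads_t, exponent=4.0):
    """Approximate pavement damage of one vehicle pass in ESA,
    summing (load / reference load)^exponent over its axle groups."""
    return sum((load / ref) ** exponent
               for load, ref in zip(axle_group_loads_t, reference_loads_t))

# Hypothetical axle-group loads (tonnes) for a laden six-axle semi-trailer:
# steer, tandem drive, tri-axle trailer group.
semi_loads = [6.0, 16.5, 20.0]
semi_refs  = [5.4, 13.8, 18.5]   # assumed reference loads per group type

# Hypothetical nine-axle B-double: steer, tandem drive, two tri-axle groups.
bdouble_loads = [6.0, 16.5, 20.0, 20.0]
bdouble_refs  = [5.4, 13.8, 18.5, 18.5]

semi_esa = equivalent_standard_axles(semi_loads, semi_refs)
bdouble_esa = equivalent_standard_axles(bdouble_loads, bdouble_refs)

semi_payload, bdouble_payload = 26.0, 42.0   # assumed payloads in tonnes
print(f"semi-trailer: {semi_esa / semi_payload:.3f} ESA per payload tonne")
print(f"B-double:     {bdouble_esa / bdouble_payload:.3f} ESA per payload tonne")
```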
  • Item
    Answering queries for near places
    Wang, Hao (2017)
    Communication between people conveniently uses qualitative spatial terms, as shown by the high frequency of vague spatial prepositions such as 'near' in natural language corpora. The automatic interpretation of these terms, however, suffers from the challenge of capturing the conversational context in which such prepositions are used. This research presents an experimental approach that solicits impressions of 'near' to identify the distance measures that best approximate it (nuanced by the type of referent and by contrast sets). The presented model computes topological distances to sets of possible answers, allowing a ranking of what is near in a context-aware manner. Context is introduced through contrast sets. The research compares the performance of topological distance, network distance, Euclidean distance, Manhattan distance, number of intersections, number of turns, and cumulative direction change. The aim of this comparison is to test whether a metric distance or topological distance is closer to human cognition, challenging the well-known paradigm of 'topology first, metric second'. The comparison results from our experiments show that topological distance appears to be closer to human perception of nearness than other distance measures only at larger scales, while a metric distance (Euclidean or Manhattan distance) is closer to how people perceive nearness at smaller scales. People with different senses of direction show no obvious differences in their preferences among the seven distance measures with regard to nearness. This research caters for the interpretation of 'near' with granular and local context, and provides a cognitively inspired method to answer near-queries automatically. The findings apply to urban environments and may need further verification in less structured environments.
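As a minimal illustration of two of the metric measures compared in this study, the sketch below ranks a hypothetical contrast set of candidate places by Euclidean and Manhattan distance from an anchor location. The coordinates and place names are invented; the thesis's context-aware model is not reproduced here.

```python
# Minimal sketch (not the thesis's model): ranking candidate places by two of
# the metric distance measures compared in the study. Coordinates are
# hypothetical projected (x, y) positions in metres.
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

anchor = (0.0, 0.0)                      # location the query is asked from
candidates = {                           # hypothetical contrast set
    "cafe":     (120.0, 40.0),
    "pharmacy": (60.0, 220.0),
    "station":  (300.0, 10.0),
}

for name, dist in (("Euclidean", euclidean), ("Manhattan", manhattan)):
    ranking = sorted(candidates, key=lambda c: dist(anchor, candidates[c]))
    print(name, "ranking of 'near':", ranking)
```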
  • Item
    Estimation of root-zone soil moisture using thermal infrared data
    Akuraju, Venkata Radha (2017)
    This thesis focuses on Root-Zone Soil Moisture (RZSM) estimation using Thermal Infrared (TIR) observations. RZSM plays an important role in hydrological modelling and agricultural applications. Conventional point-based measurements, such as gravimetric and TDR measurements, may not be useful for agricultural applications seeking to understand the spatial and temporal behaviour of soil moisture. Microwave remote sensing is a useful tool for retrieving soil moisture information at large scales, but its retrievals are limited to surface soil moisture and sparse vegetation conditions. Thermal infrared remote sensing is an alternative approach that can predict soil moisture down to the root zone, even under dense vegetation, and at high spatial and temporal resolutions. Since optical and thermal observations are linked to the soil water status of deeper layers, developing a model to estimate RZSM is particularly important for hydrological modelling. To predict root-zone soil moisture using TIR observations, it is necessary to understand the interactions between surface fluxes and soil moisture. This research builds on understanding the links between evapotranspiration (ET) derived from TIR data and surface-to-root-zone soil moisture in a dryland wheat field at the Dookie experimental site, Victoria, Australia. In the first step of this research, a hydro-meteorological dataset was collected for three cropping seasons. By monitoring two cropping seasons, it is shown that there exists a strong relationship between ET and soil moisture in water-limited conditions. The relationship between ET and RZSM is highly conditional on net radiation, crop growth stage and rainfall distribution. More appropriate linkages between ET and the available water fraction were found by incorporating root depth and density simulated with the Agricultural Production Systems sIMulator (APSIM) model. A new model based on the Crop Water Stress Index (CWSI), using theoretical limits obtained from canopy temperature and air temperature, is developed that accounts for the impacts of root depth variation and growth stage. The sensitivity of CWSI to RZSM in two cropping seasons is explored and compared with another cropping season. Cross-validation results demonstrate that the linear model can predict RZSM with average errors of 3.9% and 5.3% in different cropping seasons. The proposed method is also applied to another root-zone soil moisture dataset collected during the 2002-04 cropping seasons at a cornfield within the Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) site in the U.S. Validation results showed that the model produces reasonable RZSM estimates except during periods of high rainfall within cropping seasons. Overall, this research demonstrates the links between surface fluxes/TIR observations and root-zone soil moisture. These close links contribute towards reliable root-zone soil moisture estimation at large scales using thermal infrared observations.
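For readers unfamiliar with the index, the sketch below implements the classical Crop Water Stress Index that the thesis builds on, CWSI = ((Tc - Ta) - LL) / (UL - LL); the root-depth and growth-stage adjustments described in the abstract are not included, and the temperatures and baseline limits are hypothetical.

```python
# Sketch of the classical Crop Water Stress Index form that the thesis builds
# on; the root-depth and growth-stage adjustments described in the abstract
# are not included, and all numbers below are hypothetical.

def cwsi(canopy_temp_c, air_temp_c, lower_baseline_c, upper_limit_c):
    """CWSI = ((Tc - Ta) - LL) / (UL - LL), clipped to [0, 1].
    0 -> well-watered (no stress), 1 -> fully water-stressed."""
    dT = canopy_temp_c - air_temp_c
    index = (dT - lower_baseline_c) / (upper_limit_c - lower_baseline_c)
    return max(0.0, min(1.0, index))

# Hypothetical midday observation over a wheat canopy.
print(cwsi(canopy_temp_c=29.5, air_temp_c=27.0,
           lower_baseline_c=-2.0, upper_limit_c=5.0))   # ~0.64
```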
  • Item
    Participants’ power asymmetry in public infrastructure projects
    Zarei, Hamzeh (2017)
    Many large infrastructure projects around the world significantly exceed their budgets and take longer than expected to complete. The cost overruns and delays in such projects cause significant economic and social challenges around the world and in Australia. This research focuses on seven large infrastructure projects in Victoria, Australia, to better understand why these projects fail to deliver what government and the public expected. This thesis answers two questions: "why did the projects fail to meet expectations?" and "how could this be avoided in future?" Explanations of infrastructure project delivery failure have been offered by many studies, some decades old. These include, among other things, exaggerated benefits, overlooked risks, and unrealistic assumptions promising benefits that fail to materialise. Such explanations do not account for why large infrastructure projects continue to fail. Large infrastructure projects are complex and involve many stakeholders, including central agencies, delivery agencies, government departments, construction companies and contractors. Based on a thorough analysis of a Parliamentary inquiry by the Public Accounts and Estimates Committee, the research identified the notion of power as an important factor in the investigated infrastructure projects. The analysis found that the interplay between the stakeholders involved in a project is affected by the distribution of power among them. A new concept of informal authority is postulated to provide a consistent explanation of how a delivery agency's self-interest, in the presence of an asymmetric distribution of power, may result in project failure. The research concludes that power asymmetry is a critical factor in the success of public infrastructure projects and makes suggestions for its management and control that would improve project outcomes.
  • Item
    A bio-inspired composite system for protecting critical structural components from extreme loads
    Ghazlan, Abdallah (2017)
    Accidental and deliberate loads on civil and military structures continue to cause severe damage worldwide, along with catastrophic losses of human life. The significance of this problem is evidenced by high government spending on counter-terrorism, which has steadily increased over the past decade to several billion dollars. There is little evidence on the efficiency of such expenditures, which highlights the need for scientific intervention. This study seeks a solution from nature, which has optimised its structure over millions of years of evolution to survive extreme loads that arise from the harsh conditions of its environment. The attractive feature of natural structures is that they tend to minimise their weight by employing highly brittle minerals during their assembly but the overall structure boasts a fracture toughness that is several orders of magnitude greater. This makes natural structures highly attractive to the protective structural engineering discipline. The overarching aim of this research project is to develop a lightweight bio-inspired composite system (BCS) for protecting critical structural elements from extreme loads by identifying and mimicking the key strengthening and toughening mechanisms facilitated by the structural features of nacreous seashells. This aim spans three disciplines, namely extreme loading, biomimicry and computational geometry. Firstly, a comprehensive literature review is conducted on these three key areas to understand: 1) the physics of blast loading and shockwave dynamics; 2) the key toughening mechanisms that are employed in the armour system of nacreous seashells to protect their soft tissues from extreme loads; and 3) computational geometry techniques that can be employed to mimic several key features of these natural structures and translate them to protective structural systems. Building on this knowledge, a computational framework is developed to generate nacre-mimetic composite structures in a format that is recognised by computer aided design (CAD) and finite element programs. This framework can automatically construct and manipulate the geometry of the nacre-mimetic composite structure, which saves significant time by automating the modelling and manufacturing process. The framework utilises geometry manipulation and meshing functionalities that are already implemented in popular software packages, and implements additional subroutines where specialised functions are not available. For example, a specialised subroutine is required to automatically insert cohesive elements between polygonal bricks to model the nacre-like mortar. The geometry was developed to be transferrable to a CAD format such that the nacre-mimetic structure can be manufactured using rapid prototyping technologies such as 3D printing or laser cutting. Several analytical models were subsequently developed at the unit cell level to gain preliminary insight into the parameters responsible for the superior load transfer efficiency found in nacre. By extending current analytical studies conducted by researchers in the area of biomimicry, which mainly investigated the behaviour of nacre’s brick and mortar structure under planar tension, a shear lag approach was employed, which assumes that the tensile force applied to the bricks is transferred via shear through the mortar. 
As such, parametric studies were conducted to investigate the significance of the interfacial geometry and the overlap length between the bricks in adjacent layers, with the objective of quantifying the effects of these features on the energy-absorbing capacity and load transfer efficiency of the composite structure. This preliminary study showed that the waviness of the interface improves the shear transfer efficiency in the mortar and maximises the load-sharing efficiency between the bricks and the mortar. Building on this, a more comprehensive numerical model was developed to mimic nacre's polygonal brick and mortar structure more closely and account for fracture in the mortar, which has been observed experimentally. Voronoi diagrams, which are well known in computational geometry, were employed to automatically generate different nacre-mimetic polygonal structures. This facilitated several key parametric studies for understanding the behaviour of the nacre-mimetic composite under quasi-static loading, whereby the numerical model was validated using experimental data available in the literature. These studies showed that the constitutive behaviour of the ductile mortar was responsible for the high toughness of nacre, accounting for the hardening phase observed in the experimental stress-strain curve. This result was contrasted with the abovementioned simplified unit cell model, which indicated that the waviness of the interface and the overlap length between adjacent bricks played a key role. The shape of the bricks was also found to significantly influence crack deflection and arrest in the mortar. The Voronoi approach was again employed to investigate the dynamic behaviour and failure modes of a bio-mimetic polygonal brick and mortar panel under blast loading using finite element modelling techniques. Several parametric studies were conducted to establish the influence of different geometric and material parameters on the damage and load distribution in the composite. This model utilised the automated geometric construction and manipulation capabilities of the computational framework mentioned earlier. It was found that an increased number of layers in the brick and mortar structure increased the energy dissipated throughout the composite, far exceeding that of a monolithic panel of equal mass. A bio-inspired polygonal brick and mortar composite structure was then manufactured from medium density fibreboard (MDF) bonded with a ductile polyurethane adhesive. The single-edge-notched tension (SENT) specimen utilised several key features found in nacre, namely the polygonal brick and mortar structure, reinforcing bridges between the bricks and the soft ductile adhesive bonding. The MDF panel was manufactured using the automated computational framework mentioned previously, and its behaviour was investigated under quasi-static loading. Compared to a brittle monolithic MDF specimen of equal mass, the nacre-mimetic composite showed significant improvements in ductility and energy absorption. This was achieved by deflecting cracks away from the brittle bricks and into the ductile polyurethane mortar, which had not fractured by the conclusion of the test. This demonstrates the high potential of the approach for protective structures: brittle non-structural materials that are abundant in structures, such as ceramic tiles, can be toughened to protect critical structural elements from extreme loads.
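A minimal sketch of the Voronoi step described above: generating a nacre-like polygonal 'brick' layout from jittered seed points with scipy. Panel dimensions, seed spacing and jitter are assumed values, and the cohesive 'mortar' insertion and CAD export stages of the thesis's framework are not shown.

```python
# Jittered-grid seeds tessellated with a Voronoi diagram to form irregular,
# roughly even polygonal "bricks"; all dimensions are assumed values.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
panel_w, panel_h, spacing, jitter = 100.0, 60.0, 10.0, 2.5   # mm, hypothetical

xs = np.arange(spacing / 2, panel_w, spacing)
ys = np.arange(spacing / 2, panel_h, spacing)
seeds = np.array([(x, y) for x in xs for y in ys])
seeds = seeds + rng.uniform(-jitter, jitter, seeds.shape)

vor = Voronoi(seeds)

# Each bounded Voronoi region is one candidate "brick" polygon; a fuller
# pipeline would export these to CAD and wrap them with cohesive "mortar".
bricks = [vor.vertices[region] for region in vor.regions
          if region and -1 not in region]
print(f"{len(bricks)} bounded brick polygons generated")
```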
  • Item
    Towards improved rainfall-runoff modelling in changing climatic conditions
    Fowler, Keirnan (2017)
    Rainfall-runoff models are useful tools in water resource planning under climate change. They are commonly used to quantify the impact of changes in climatic variables, such as rainfall, on water availability for human consumption or environmental needs. Many parts of the world are likely to see changes in future climate, and some regions are projected to be substantially drier, possibly with threatened water resources. Given the importance of water to the economy, environment, geopolitical stability and social wellbeing, reliable tools for understanding future water availability are vital. However, the literature suggests that the current generation of rainfall-runoff models is not reliable when applied in changing climatic conditions. Simulations of historic case studies such as the Millennium Drought in South East Australia indicate that models often perform poorly, underestimating the sensitivity of runoff to a given change in precipitation. Many hydrologists have assumed that these deficiencies are due to the model structures themselves - that is, the underlying model equations. However, it is possible that the explanation is broader, and can only be understood via holistic approaches that examine the entire modelling process. This research, presented in four parts, aims to understand and improve various elements of this process. Part 1 investigates whether poor model performance is due to insufficient model calibration and evaluation techniques. An approach based on Pareto optimality is used to explore trade-offs between model performance in different climatic conditions. Five conceptual rainfall-runoff model structures are tested in 86 catchments in Australia. Comparison of Pareto results with a commonly used calibration method reveals that the latter often misses potentially promising parameter sets within a given model structure, giving a false negative impression of the capabilities of the model. This suggests that existing model structures may be more capable under changing climatic conditions than previously thought. The aim of Part 1 is to critically assess commonly used methods of model calibration and evaluation, rather than to develop an alternative calibration strategy. The results indicate that caution is needed when interpreting the results of differential split sample tests. Having demonstrated deficiencies in commonly used calibration methods, Parts 2 and 3 examine alternative calibration strategies. The aim is to identify calibration metrics capable of finding parameter sets with robust performance, even if climatic conditions change compared to the calibration period. Part 2 follows a three-step process to identify which metrics (if any) can select robust parameter sets using pre-change data only. The three steps are: randomly generating a large ensemble of parameter sets; identifying parameter sets in the ensemble that provide robust simulations both before and after a change (drying) in climatic conditions; and calculating multiple performance metrics for each ensemble member. Traditional objective functions are trialled, along with less common indices such as the degree of replication of observed hydrologic signatures.
The most promising metrics are then tested more rigorously in Part 3, which uses guided search algorithms selected in accordance with metric type (objective function or hydrologic signature), including: calibration by matching of hydrologic signatures (using the DREAM-ABC algorithm), optimisation of global objective functions (using the CMA-ES algorithm), and hybrid approaches blending global objective functions with signatures (using the Pareto approach AMALGAM). The results indicate considerable scope for improved calibration, relative to commonly used approaches. Metrics that consider dynamics over a variety of timescales (e.g. annual, not just daily) are more promising, as are objective functions using the sum of absolute errors rather than the sum of squared errors. The key recommendations of Parts 2 and 3 are to avoid 'least squares' approaches (such as optimising the NSE, RMSE and similar approaches like the KGE) and to adopt the sum of absolute errors and/or metrics considering a variety of timescales wherever simulations of a drying climate are required. Parts 1-3 confirm the importance of calibration methods when modelling under changing climates. This raises the question: in what circumstances should the focus be on improving calibration methods versus improving model structures, or alternatively on other issues such as poor data quality? Although recent literature has presented various tools for model evaluation - usually variants of the Differential Split Sample Test (DSST) - there is less focus on such questions. Thus, a modeller whose model has failed the DSST is largely without guidance as to next steps. Part 4 provides guidance for this question within a framework based on Pareto optimality. Similarly to Part 1, modelling objectives are set over multiple historic periods with contrasting climatic conditions. The framework allows cases of DSST failure to be categorised as either: (a) cases of model structural failure, where no parameter set in a model structure can meet all modelling objectives in all periods, indicating the need for structural changes or improved data; or (b) cases where modelling objectives are attainable by the model structure, but the DSST calibration method failed to find the right parameter set(s). The framework outlines separate steps to follow for each of the above categories. Many steps in the framework can be populated by existing sensitivity analysis techniques, but new techniques are designed for some steps, such as the diagnosis of structural inadequacies by analysis of 'drift' in hydrologic signature error as climatic conditions change. The framework is demonstrated using a case study from Australia and the IHACRES model structure. Limitations of inferring future hydrologic processes from historic data are also discussed. This research underscores the joint importance of model structures and calibration methods when modelling changing climatic conditions, providing practical guidance for holistic improvement of the modelling process. By enabling more credible runoff projections, it is hoped that this research will lead to more robust decisions that safeguard the future of water resources for people and our planet.
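A minimal sketch of two of the calibration metrics discussed above: the Nash-Sutcliffe Efficiency (a 'least squares' style metric) and the sum of absolute errors favoured in Parts 2-3 for drying climates. The observed and simulated flow series are hypothetical; this is not the thesis's calibration code.

```python
# Two common calibration metrics computed on hypothetical daily runoff series.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def sum_abs_error(obs, sim):
    """Sum of absolute errors: lower is better, less dominated by peaks."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sum(np.abs(obs - sim))

observed  = [12.0, 9.5, 7.1, 30.2, 18.4, 6.0]   # hypothetical runoff (mm/day)
simulated = [10.8, 9.9, 6.5, 24.0, 20.1, 5.2]

print("NSE:", round(nse(observed, simulated), 3))
print("Sum of absolute errors:", round(sum_abs_error(observed, simulated), 2))
```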
  • Item
    Witnesses of events in social media
    Truelove, Marie (2017)
    Social networks now rank amongst the world's most popular websites. Academics and industry alike recognise the opportunities provided by the vast quantities of user-generated content. Opportunistically harvesting information to derive event intelligence is now actively sought by numerous applications, including emergency management and journalism, and pursued by research fields including crisis informatics and new event detection. However, these opportunities come with numerous recognised challenges. Micro-blogging streams are characterised as noisy, and relevant information for any task is typically a fraction of what is posted. Additionally, the possibility of ambiguous, misleading, or fake content about events erodes trust in the information that is retrieved. Consequently, this research is motivated to pursue micro-blogs that are witness accounts of events, and the micro-bloggers who post them. A literature review of social media research related to event witnessing identifies that witness accounts are sought by numerous domains. Their existence can be used to confirm an event's occurrence, and the information they contain can improve situation awareness during an emergency and provide credibility to breaking news stories. However, the review also identifies varying definitions of witnessing concepts and gaps in knowledge and solutions. Lacking are fundamental definitions of witnessing evidence and counter-evidence that distinguish inferences by observation, experience, and proximity, in multi-modal forms. Current location inference approaches for micro-blogs and micro-bloggers are resolved at scales inadequate to infer human observation or experience. Additionally, lags in research progress are identified: for example, micro-blog image category classification lags behind micro-blog text classification, and exploration of micro-bloggers' posting histories for contextual understanding lags behind the analysis of individual micro-blogs. The hypothesis of this research is that micro-bloggers who are witnesses of events can be identified by evidence contained in their micro-blogs. To test this hypothesis, an incremental experimental approach is adopted comprising three stages, each building on the foundation achieved by the previous. Each stage balances the pragmatic requirement of automation with in-depth understanding gained by detailed human analysis of case study events with varying characteristics. The first stage seeks to identify and define inferential evidence of event witnessing in micro-blogs, including witness accounts. The second stage demonstrates the automatic extraction of witnessing evidence and counter-evidence from micro-blog content. The final stage demonstrates the combination of extracted evidence and counter-evidence to identify micro-bloggers who are likely witnesses and to test the consistency and certainty of this identification. Experimental outcomes include advanced original models of text, image and geotag evidence that support inferences of witnessing, and counter-evidence to test the status of potential witnesses. The use of counter-evidence to identify conflict in a micro-blogger's posting history represents a new approach to information assessment for this purpose, supporting the interrogation of evidence for many applications.
The combination of evidence from geotags, images, and text across a micro-blogger's complete posting history is demonstrated to support the identification of more evidence and potential witnesses than baseline methods that consider individual geotag or text content only. This helps alleviate the sparsity of relevant information. The demonstration of automatic evidence extraction includes a new application of image category classification using the bag-of-words procedure. Experiments make use of case studies from a range of event types, which contributes towards the generalisation of the models. The knowledge gained enables the introduction of a framework of processes for identifying potential witnesses of events by the evidence they post to social media.
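As a hedged illustration of the kind of evidence classification the stages above describe (the thesis applies bag-of-words to image categories; the swapped-in example below uses text instead), the sketch trains a simple bag-of-words classifier to separate witness-style micro-blog text from other posts. The tiny training set and labels are entirely hypothetical.

```python
# Minimal sketch (not the thesis's pipeline): a bag-of-words text classifier
# for witness-style posts. Training examples and labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_posts = [
    "I can see smoke from my window, the whole street is blocked",
    "just felt the ground shake, everyone ran outside",
    "thoughts with everyone affected by the fires today",
    "news report says the earthquake hit at 3pm",
]
labels = ["witness", "witness", "non-witness", "non-witness"]

# Bag-of-words (unigrams and bigrams) feeding a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(train_posts, labels)

print(model.predict(["I can smell the smoke from here"]))
```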
  • Item
    Evaluation of innovation in building, construction, and infrastructure projects
    Maghsoudi, Soroush (2017)
    Innovation is at the heart of today's competitive economy. A nation's infrastructure industry plays a crucial role in its economic development, and understanding innovation and its impact is very important in the current turbulent economic environment. Innovation and infrastructure are both regarded as key factors for increasing productivity and becoming more competitive internationally; however, innovation is difficult to quantify, particularly in advance. Development of robust techniques to predict the impact of innovation, whilst elusive, remains a priority for most governments and organizations. Innovation is a complex and multifaceted phenomenon. The aim of this study is to develop a theoretical framework and a tool to assist building and construction industry practitioners in the evaluation of innovation and its outcomes. To achieve this purpose, a systematic literature review was conducted and a set of twelve case studies was then analysed. From the literature review and case studies, a framework to evaluate innovation was developed. Grounded theory was used to apply the framework to current building projects undertaken by the author. The literature review revealed fragmented definitions of innovation and no consensus on innovation measurement. This thesis develops a comprehensive framework with a holistic perspective, in order to consider as many interacting elements of the innovation process as possible. This resulted in the development of a tool to measure the impact of innovation in nine domestic and commercial building projects in Melbourne, Australia. This area of research is new and under-researched in the context of infrastructure and building projects.
  • Item
    Context-aware and time-aware indoor evacuation
    Zhao, Haifeng (2017)
    Upon the sudden occurrence of disasters such as fires, earthquakes, or floods, evacuation is the first and foremost need to get people out of the disaster area. Context awareness and time awareness are of significant importance, especially for a life-critical activity such as evacuation. Current indoor evacuation relies predominantly on stationary exit signage and emergency maps to provide recommended escape routes. However, such signage and maps are static and do not reflect changes in the environment. Consequently, the recommended routes may have been blocked by the disaster. An evacuee who follows such a route will have to seek alternative routes, wasting time or, in the worst case, failing to evacuate. Full awareness of the indoor structure as well as the real-time risk in the building to be evacuated is expected to be beneficial. To achieve full awareness of the real-time situation of the environment, the omnipresent sensors in modern buildings and pervasively used mobile devices can potentially be utilized. By integrating sensors and the building structure, real-time risk information about the environment can be monitored and evacuees can be kept updated with this information. Personalized evacuation routes based on the real-time situation of the environment can then be provided to each evacuee via their mobile devices. A framework integrating sensing and routing is provided in this thesis. As the first contribution of this thesis, this framework has been investigated via simulation and experiment; results indicate that centralized evacuation facilitated by full situation awareness is capable of saving more lives than evacuation without such a framework. Taking fire as an example, the evolution of a fire disaster is a spatio-temporal process, and its impact on the evacuation route graph is also spatially and temporally evolving. The real-time conditions of the environment at a specific time instant are just a snapshot of its changing conditions. An evacuation route based on the real-time conditions at the current time instant is only guaranteed to be safe at that instant; it may be blocked in the next moment, in which case a new evacuation route based on real-time information needs to be computed. Changes to a planned evacuation route waste time and, in the worst case, cause the evacuation to fail. Taking the temporal dimension of situation awareness into consideration, an optimal evacuation route should guarantee passability not only at the current moment but also in the near future. This thesis therefore integrates timing, and tests whether foresight is beneficial for evacuations. The second contribution of this thesis is to verify that integrating timing with prediction generally improves evacuation performance, and that the improvement depends on the accuracy of the prediction. Centralized evacuation systems are highly efficient because of their global situational awareness; however, such systems share at least three shortcomings. First, central infrastructure may not exist in an arbitrary building. Second, the communication channels between the central infrastructure and the sensors, or between the central infrastructure and the mobile devices, may be blocked due to failure or damage of the central infrastructure.
Third, such centralized evacuation systems are building-specific, so the central infrastructure as well as the settings of the mobile devices are not seamlessly transferable to another building. Decentralized evacuation has been shown to be effective in the absence of any central infrastructure or in case the centralized evacuation system fails. Decentralized evacuation is also superior in its scalability and robustness against failures of central infrastructure. To investigate decentralized evacuation, it is assumed that no central infrastructure is available. Evacuees are assumed to have full awareness of the environment before the disaster but to rely only on self-exploration and peer-to-peer communication via a (hand-held or head-mounted) mobile device once the disaster happens. Without real-time updating from a central infrastructure, situation awareness of the environment is prone to becoming out of date. Decision making in a possibly changed environment, without awareness of when that knowledge was last updated, is problematic. This thesis contributes to situational awareness by developing a time-aware routing model, fading memory, for decision making in dynamically changing environments. Fading memory values not only the knowledge that has been acquired but also the time when that knowledge was last updated, trusting recently explored knowledge more and knowledge explored long ago less. This thesis tests this model; experimental results indicate that this mechanism generally benefits evacuation performance. In addition to the unreliability of out-of-date knowledge, what makes decentralized evacuation more challenging is that people are sometimes required to evacuate a place they are unfamiliar with, having only incomplete awareness of the environment before the event. Decentralized evacuation in unfamiliar environments is challenging in that it involves not only a critical evaluation of the acquired knowledge but also an exploration of the unknown environment if no evacuation route can be derived from existing knowledge. Relaxing the constraint of full awareness of the environment before the disaster is of significant value because the mobile device then relies on no infrastructure and can be seamlessly transferred to arbitrary environments. A decentralized evacuation paradigm with incomplete prior knowledge has been developed, and a fading memory model for evacuation with incomplete prior knowledge has been verified to be beneficial for decentralized evacuation, which constitutes the fourth contribution of this thesis. In a decentralized evacuation paradigm, evacuees are guided by smartphones that acquire environmental knowledge and risk information via exploration and via knowledge sharing through peer-to-peer communication. Peer-to-peer communication, however, relies on the chance that people come within communication range of each other, and this chance can be low. To bridge between people who are not in the same place at the same time, this thesis then suggests information depositories at strategic locations that collect the knowledge acquired by the smartphones of passing evacuees, maintain this information, and convey it to other evacuees passing by. Experiments implementing these depositories in an indoor environment show that integrating depositories improves evacuation performance: it enhances risk awareness and consequently increases the chance that people survive and reduces their evacuation time.
For evacuation during dynamic events, deploying depositories at staircases has been shown to be more effective than deploying them in corridors. Overall, this thesis contributes to both centralized and decentralized evacuation from context-awareness and time-awareness perspectives. The main research method is agent-based simulation of the complex evacuation process under different evacuation strategies, so as to analyze and compare system behavior, while leaving aside any study of human behavior. The strategies investigated include whether context awareness generated by integrating sensor graphs and the route graph benefits evacuation outcomes, whether prediction benefits evacuation outcomes, whether trusting aged knowledge less leads to better decision making in dynamic environments, and whether information depositories benefit evacuation outcomes. These strategies have been verified here to be effective for evacuation; they also have valuable implications for a broad range of activities in dynamic environments.
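A minimal sketch of the 'fading memory' idea described above, under assumed parameters: confidence in remembered knowledge decays with the time since it was last observed, so stale observations carry less weight in route decisions. The half-life, confidences and corridor names are hypothetical, not the thesis's calibrated model.

```python
# Sketch of a recency-weighted ("fading memory") trust function;
# parameters and corridor states below are assumed.

def faded_confidence(observed_confidence, minutes_since_update, half_life_min=5.0):
    """Down-weight a remembered observation by its age (assumed half-life)."""
    return observed_confidence * 0.5 ** (minutes_since_update / half_life_min)

# Hypothetical memory: edge -> (believed passable?, confidence, minutes since last seen)
memory = {
    "corridor_A": (True, 0.9, 1.0),    # seen clear a minute ago
    "corridor_B": (True, 0.9, 12.0),   # seen clear, but 12 minutes ago
}

for edge, (passable, confidence, age_min) in memory.items():
    weight = faded_confidence(confidence, age_min) if passable else 0.0
    print(f"{edge}: trusted passable with weight {weight:.3f}")
```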
  • Item
    Cooperative localisation of unmanned aerial vehicles using low cost sensors
    Goel, Salil (2017)
    The reliance on location and location-based services in everyday life is undergoing tremendous growth as society progresses towards an increasingly connected world. Location awareness plays an important role in many applications, such as navigation, mapping, exploration, emergency response, surveillance, and search and rescue, and forms an integral component of almost all modern technologies, including connected vehicles, Intelligent Transport Systems, Unmanned Aerial Vehicles (UAVs), the Internet of Things (IoT) and smart cities. UAVs are increasingly being used in the above-mentioned applications as well as in other domains such as agriculture and insurance. The use of UAVs in any of these applications is contingent on precise and continuous localisation of the UAV platform. To date, GNSS has been the primary source of a precise localisation solution. However, the performance of GNSS depends on the availability of clear outdoor environments and degrades substantially in occluded environments such as urban canyons and forests. A new positioning paradigm, termed 'Cooperative Localisation', is emerging that utilises cooperation and information sharing among UAVs as well as existing infrastructure and other platforms (or nodes) for localisation. Information sharing among nodes can help overcome some of these challenges, including precise and continuous positioning in difficult environments such as urban canyons and forests. Further, cooperative localisation may improve positioning accuracy and is required for the deployment of UAV swarms in various applications. Although the advantages of cooperative localisation are apparent, the performance of a cooperative localisation system for a swarm of UAVs, the impact of its various components on that performance, and its advantages and limitations have not been evaluated under real-world conditions. This research develops the mathematical framework and a prototype of a cooperative localisation system for a swarm of UAVs using GNSS, inertial and Ultra-Wide Band (UWB) sensors, and performs an extensive performance analysis using multiple real-world experiments. Notable developments achieved in this research include the design and development of a new cooperative localisation prototype for a UAV swarm network, a general framework for cooperative localisation in heterogeneous and homogeneous cooperative networks using a centralised architecture, and the development and evaluation of a new distributed EKF-based estimation algorithm that is less computationally expensive than existing algorithms. Following a critical analysis of the existing literature to identify the research gaps, the details of the developed prototype are presented. Further, a performance analysis of the on-board sensors is performed to establish the performance parameters needed for information fusion. This is followed by the development of a general mathematical framework for cooperative localisation in centralised and distributed architectures for both heterogeneous and homogeneous networks. This framework is used to perform a sensitivity analysis, establish performance bounds and study the impact of various factors on the overall performance of the proposed system. Finally, the developed prototype is evaluated experimentally in real-world environments.
From these experiments, it is demonstrated for the first time that a cooperative UAV localisation system based on low-cost sensors is capable of achieving localisation accuracy of the order of 3-5 m in partially GNSS-denied environments when communication among UAVs is consistent. Through these experiments, the effect of communication quality on the localisation accuracy of UAVs is demonstrated, and it is found that consistent communication helps maintain the localisation accuracy at about 3-4 m in partially GNSS-denied environments. Furthermore, the experiments demonstrate that cooperative localisation can improve positioning accuracy even in environments where GNSS is available. It is found from theoretical analysis that the effect of the loss of GNSS measurements on the localisation performance of a node in a swarm can be minimised by altering the network geometry. A complete analysis of the limitations of the developed system and some suggestions for future work are also presented. Through this research, the performance of a cooperative UAV network is evaluated in real-world environments and its limitations are highlighted.
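As a rough illustration of the sensor-fusion building block behind such a system, the sketch below performs one extended Kalman filter (EKF) update that fuses a UWB range measurement to a node at a known position into a 2-D position estimate. It is not the thesis's distributed algorithm, and all positions, variances and measurements are hypothetical.

```python
# One EKF measurement update for a UWB range to a known anchor/node position.
# All numbers are hypothetical; process prediction and 3-D states are omitted.
import numpy as np

def ekf_range_update(x, P, anchor, z_range, range_var):
    """x: 2-D position estimate, P: 2x2 covariance, anchor: known node
    position, z_range: measured distance, range_var: measurement variance."""
    diff = x - anchor
    predicted_range = np.linalg.norm(diff)
    H = (diff / predicted_range).reshape(1, 2)   # Jacobian of h(x) = ||x - anchor||
    S = H @ P @ H.T + range_var                  # innovation covariance
    K = P @ H.T / S                              # Kalman gain (2x1)
    x_new = x + (K * (z_range - predicted_range)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x = np.array([10.0, 5.0])            # prior position estimate (m)
P = np.diag([4.0, 4.0])              # prior uncertainty
anchor = np.array([0.0, 0.0])        # cooperating node with well-known position
x, P = ekf_range_update(x, P, anchor, z_range=10.5, range_var=0.09)
print(x, np.diag(P))
```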