Infrastructure Engineering - Theses
Bio-inspired cross-laminated timber for protective structural applications
Major blast events occur annually in regions around the world. Accordingly, building codes, design standards and structural design recommendations are of paramount importance to protect occupants and property against unpredictable blast events. Cross-laminated timber (CLT) has recently emerged as a sustainable and lightweight engineered wood product. CLT offers several advantages as a construction material, in terms of both mechanical properties and environmental protection, including a high stiffness-to-weight ratio, high two-way stiffness, and a low embodied carbon footprint. The increasing use of CLT in structural members, combined with emerging threats, highlights the importance of improving its resilience to blast loads; studying the performance of CLT under blast loading is therefore essential for protecting critical structural elements. CLT possesses a lamellar structure similar to that of marine seashells such as conch shells. A conch shell is primarily composed of brittle minerals (over 99% aragonite) but boasts a high fracture toughness due to its unique lamellar structure. Taking inspiration from the striking resemblance between the lamellar structures of the conch shell and CLT, this research aims to develop an innovative bio-inspired CLT structure with superior resilience to blast loading. Specifically, three main research areas are reviewed: blast loading, bio-inspired armour systems and cross-laminated timber. A comprehensive review of these topics highlights the significance of protective structures against blast loading, the toughening mechanisms of biological armour systems, and the need to enhance the performance of CLT under blast loading. The review emphasises the lack of studies on improving the toughness and resilience of CLT in an explosion.
Moreover, the striking resemblance between CLT and biological structures such as the conch shell offers innovative solutions for increasing the toughness of CLT through bio-mimicking techniques. With this knowledge, the feasibility of mimicking the micro-architecture of the conch shell at a larger scale to enhance the toughness of conch-like CLT is investigated. Programmable 3D printing instructions were used to control the 3D printer and fabricate tough conch-like prototypes. The prototypes were tested under single-edge notched tension to investigate their fracture behaviour. A numerical model was then developed and validated using these experimental data and an analytical solution. The model was employed to examine the toughening mechanisms in the innovative proof-of-concept conch-like structure. A parametric study was also conducted to investigate the effect of different parameters on the toughening behaviour of the conch-like prototypes. A finite element (FE) model was proposed to simulate the behaviour of CLT under both quasi-static and dynamic loadings. The FE model was validated using experimental results and subsequently employed to simulate the bio-inspired CLT panel under both quasi-static and blast loads. An analytical solution was also proposed to capture the behaviour of CLT panels under blast loading and to validate the FE model. This validated FE model was used to conduct a numerical study on the performance of bio-inspired CLT under blast loading, in which the lamellar arrangement of the conch shell structure was mimicked to improve the toughness of a conch-inspired CLT panel subjected to blast loading. Several key features of the conch shell were mimicked to enhance the toughness of CLT panels, namely the lamellar arrangement and the interlocking mechanisms. These bio-inspired CLT panels were investigated through numerical simulations of four-point bending tests.
As such, several design recommendations are provided to enhance the performance of the conch-inspired CLT, including changing the cross-section of timber planks in the middle layer of a CLT panel, introducing carbon fibre composite layers for ductility improvement, using pins to enhance interlocking mechanisms, and adjusting the mechanical properties of the bonding adhesive. The bio-inspired CLT panel was shown to exhibit several performance benefits over its benchmark counterpart, namely increased stiffness, strength and toughness. Finally, the conclusions of this research project and directions for future work are provided.
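The stiffness, strength and toughness comparisons above rest on load-displacement curves from bending tests; toughness in this sense is the energy absorbed, i.e. the area under that curve. A minimal sketch using hypothetical four-point bending data and trapezoidal integration (not the thesis's actual data or post-processing pipeline):

```python
def toughness(displacements, loads):
    """Energy absorbed: area under the load-displacement curve,
    computed with the trapezoidal rule."""
    area = 0.0
    for i in range(1, len(displacements)):
        dx = displacements[i] - displacements[i - 1]
        area += 0.5 * (loads[i] + loads[i - 1]) * dx
    return area

# Hypothetical data: displacement (mm), load (kN)
disp = [0.0, 1.0, 2.0, 3.0, 4.0]
load = [0.0, 2.0, 3.5, 4.0, 3.0]
print(toughness(disp, load))  # absorbed energy in kN.mm
```

A tougher bio-inspired panel would show a larger area under its curve than the benchmark panel for the same test.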
Traffic State Estimation and Traffic Signal Control Optimization in a Connected Transport Network
Urbanization and population growth intensify the problems associated with traffic congestion in metropolitan areas all over the world. Researchers therefore continually seek innovative solutions to enhance the performance of transport systems in terms of safety, mobility, and environmental sustainability, and the optimal design of signal control parameters in response to real-time traffic congestion has been the subject of extensive research for many years. State-of-the-art traffic signal control methods mainly use data from infrastructure-based sensors such as loop detectors and video cameras. However, these sensors are mainly spot detectors, able only to sense the presence of vehicles at specific points in the network, and thus unable to provide an overall picture of traffic conditions across the whole network. As a result, infrastructure-based signal controllers are not fully adaptive, and the data acquired from these sensors are only applied to make minor changes to predesigned signal plans. Furthermore, infrastructure-based sensors suffer from drawbacks such as high installation and maintenance costs, inaccuracy, and a high failure rate. This research is motivated by recent advancements in communication technologies and intelligent transportation system applications. These technologies make it possible for vehicles equipped with onboard units (OBUs) to exchange information such as position, speed, and acceleration/deceleration with other equipped vehicles and with roadside units (RSUs). The data collected by the RSUs can then be used to capture the real-time spatial traffic situation in the network, based on which traffic signal controllers can make smarter and more informed decisions.
Given the enriched data obtained from connected vehicles (CVs), the traffic signal control problem can be formulated using data-driven and mathematical methods to provide an optimal signal control plan. Reviewing the literature on signal control strategies in a CV system, three main research gaps are identified and addressed in this research. First, most of the current literature on traffic signal control in a connected vehicle system is tailored to conditions in which all or the majority of vehicles are connected; the generalisability of such work is problematic, since these algorithms cannot work appropriately when there is a mixture of ordinary and connected vehicles. Second, although a considerable amount of literature has been published on traffic signal control in a connected vehicle environment during the last decade, only a few studies operate at the network level and consider coordination between intersections; most algorithms are designed for a single intersection without considering the interaction between adjacent intersections. Third, the majority of existing network-wide signal control algorithms suffer from computational complexity that prevents them from being implemented in real time. To address the first gap, this research develops data-driven estimation methods that estimate traffic states from the data of a limited number of connected vehicles in mixed traffic of connected and ordinary vehicles. To address the second gap, a rolling horizon optimization strategy is developed to determine the optimal signal plans of all intersections for the next time step, given the current traffic situation estimated from connected vehicles. The third gap is addressed by introducing a network decomposition algorithm that reduces the computational complexity of the optimization problem so that it is real-time implementable.
This study contributes to the literature in the following areas. Data-driven traffic state estimation algorithms are proposed to estimate traffic conditions even when only a limited number of vehicles in a transport network are connected (say at least 30%). The traffic state estimation algorithms in this research take an aggregated approach and do not record vehicle trajectories in any form, so the privacy of drivers of connected vehicles is protected. Connected vehicle data is the only required input for the estimation methods, and the proposed algorithms do not require information from any infrastructure-based sensors. The flow estimation algorithm is also extended to fuse data from connected vehicles and Bluetooth sensors to provide accurate traffic estimates in situations with very low market penetration rates of connected vehicles. A rolling horizon optimization strategy is applied to determine the optimal timing plans of all traffic signals in a network of intersections. A network decomposition algorithm is introduced to split the network into several smaller subnetworks and convert the centralized signal control optimization problem into a semi-centralized approach. The suggested semi-centralized control strategy has a significantly reduced computational time compared with its centralized counterpart, making the model applicable for real-time implementation. The integration of the estimation and optimization algorithms results in better performance of the proposed traffic signal plan (in terms of mobility indices such as travel time, number of stops, average speed, queue length, and emissions) compared with a base-case actuated coordinated signal plan when the penetration rate of connected vehicles is 30% or more.
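The core idea of aggregated, privacy-preserving state estimation from partially connected traffic can be illustrated with a toy example: counts of connected vehicles in an interval are scaled by an assumed market penetration rate, without storing any individual trajectory. The function name and scaling rule below are illustrative assumptions, not the thesis's actual estimator:

```python
def estimate_flow(cv_count, penetration_rate):
    """Estimate total flow (vehicles per interval) by scaling the
    observed connected-vehicle count by the assumed market
    penetration rate. Illustrative sketch only: real estimators
    must also handle sampling noise in the observed counts."""
    if not 0.0 < penetration_rate <= 1.0:
        raise ValueError("penetration rate must be in (0, 1]")
    return cv_count / penetration_rate

# 12 connected vehicles observed in one interval, assumed 30% penetration
print(estimate_flow(12, 0.30))  # estimated total vehicles in the interval
```

Only the aggregate count enters the estimate, which is why no driver-level trajectory data needs to be recorded.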
Ternary spatial relations for error detection in map databases
The quality of data in spatial databases greatly affects the performance of location-based applications that rely on maps, such as emergency dispatch, land and property ownership registration, and delivery services. The negative effects of such dirty map data may range from minor inconveniences to life-threatening events. Data cleaning usually consists of two steps: error detection and error rectification. It is a demanding and lengthy process that requires the manual intervention of data experts, particularly for complex situations involving the consistency of relationships between multiple objects. This thesis presents computational methods developed to automate the detection of errors in map databases and ease the demand for human resources in error detection. These methods are intrinsic, i.e., they depend only on the data being analysed, without the need for a reference dataset. Two models for ternary spatial relations were developed to enable analyses not possible with existing binary spatial relations. First, the Refined Topological relations model for Line objects (RTL) examines whether the core line object is connected to its surrounding objects on both or only one of its ends; this distinction is particularly important in networks, where connectedness determines the function of the object. Second, the Ray Intersection Model (RIM) casts rays between two peripheral objects and uses the intersection sets between these rays and the core object to model its relation to the peripheral objects, providing a basis for reasoning about the core object being between the peripheral objects. Both models have been computationally implemented and demonstrated on error detection tasks in OpenStreetMap. Case studies on data for the State of Victoria, Australia demonstrate that the methods developed in this research effectively detect errors that could not previously be identified automatically.
This research contributes to automated spatial data cleaning and quality assurance, including reducing experts' workload by effectively identifying potential errors.
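The ray-casting idea behind RIM can be sketched in a drastically simplified form: reduce each peripheral object to a point, cast the segment ("ray") joining them, and test whether it intersects the core object. This toy version, with hypothetical function names, only conveys the geometric intuition of "betweenness", not the full intersection-set model of the thesis:

```python
def _ccw(a, b, c):
    """Signed area test: >0 if a-b-c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def between(core_seg, periph_a, periph_b):
    """Toy RIM-style check: cast the ray joining the two peripheral
    points and test whether it intersects the core object."""
    return segments_intersect(periph_a, periph_b, core_seg[0], core_seg[1])

# A vertical core segment lying between two peripheral points
print(between(((0, -1), (0, 1)), (-1, 0), (1, 0)))  # True
```

In the thesis's model, multiple rays and their full intersection sets with the core object are used, which supports richer reasoning than this single-ray test.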
Place-related question answering: From questions to relevant answers
In everyday communication, people talk about space by referring to places. While the common-sense notion of place is understandable to humans, formalising place in a computational model remains challenging. Strong context dependency, diverse metaphorical uses, indeterminacy of boundaries, and vernacular reference use are major challenges in making place knowledge digestible for computers. This research aims to utilise domain knowledge to study place-related questions and their corresponding answers, and to develop models and methods to answer such questions. In the context of place-related question answering, this study investigates what people expect computers to understand about places, and how these place-related questions are answered in human-generated responses. First, a place model is designed for the question answering purpose using collective domain knowledge extracted from the literature. The model is then used to characterise the platial information in place-related questions and their human-generated answers. Next, natural language questions are translated to GeoSPARQL queries to enable spatial analysis for answering place-related questions. Finally, templates for answering where-questions are proposed to generate relevant responses similar to human-generated answers. The results of this study show that domain knowledge can improve current methods of place-related question answering. Using domain knowledge, an encoding method is devised that can characterise large question answering corpora with minimal supervision; the encoding results are used to identify descriptive patterns inside the questions and answers. A novel approach is then designed using domain knowledge and an object-based conceptualisation of place to translate natural language questions to GeoSPARQL queries.
The novelty of the approach lies mainly in (1) using domain knowledge and avoiding the reinvention of new terms, and (2) utilising first-order logic (FOL) statements as an intermediate representation, which can later be translated not only to GeoSPARQL but to any other formal query language with minimal effort. The method is tested using the Geospatial Gold Standard dataset, and the results show significant improvements in extracting information and translating questions to queries in comparison to state-of-the-art approaches. Finally, the relevance of answers to where-questions is investigated using templates of generic information (i.e., type, scale and prominence). The results show that generic representations can be used to characterise answers in a few frequent patterns and to study the relevance of answers to the questions. Moreover, the extracted knowledge can be captured by sequence prediction methods in a machine-digestible manner. The results of this study can be used to test the relevance of machine-generated responses or to generate automatic responses similar to human-generated answers. Overall, this thesis contributes to the domain of geographic question answering with a focus on geographic places. The results can be used in question answering systems to analyse and classify questions, generate queries and formulate relevant responses. They also show the importance of domain knowledge in improving the performance of existing question answering systems and provide useful insights into human answering behaviour.
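To convey the flavour of translating a where-question into a GeoSPARQL query, the sketch below fills a single hand-written template with a place name. The template, prefixes aside, is an illustrative assumption: the thesis derives queries via an FOL intermediate representation, not a fixed string template, and the `:Region` class and `rdfs:label` lookup are hypothetical schema choices:

```python
def where_query(place_name):
    """Illustrative template mapping a where-question ("Where is X?")
    to a GeoSPARQL query asking which region contains the named place.
    Not the thesis's actual pipeline; schema terms are hypothetical."""
    return f"""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
SELECT ?region WHERE {{
  ?place rdfs:label "{place_name}" ;
         geo:hasGeometry/geo:asWKT ?pWkt .
  ?region a :Region ;
          geo:hasGeometry/geo:asWKT ?rWkt .
  FILTER(geof:sfWithin(?pWkt, ?rWkt))
}}"""

print(where_query("Melbourne"))
```

The advantage of the FOL intermediate step described above is precisely that such target-language strings become a final, mechanical translation rather than the core of the method.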
Crowd Dynamic Modeling and Simulation
The ability to accurately model and simulate the interactions between pedestrians and their environment is a matter of interest in the crowd dynamics field. A primary objective is to optimise the design of entry and exit points and thus provide safe passage in crowded venues such as schools, theatres, mosques, airports, railway stations, concert halls and football stadiums. Understanding the dynamics of crowd behaviour is therefore important for improving the safety of crowds. People’s movements are affected by interactions with other individuals and with the environment. The interactions between humans and physical objects are of particular concern in crowd movement, especially during an emergency, and require further study. Pedestrian simulation has been recognised as a tool that provides a robust framework for understanding crowd dynamics in a complex environment and for predicting crowd density during an extreme event. However, for pedestrian simulations to produce reliable outputs, they must be calibrated using reliable experimental data. Investigating the effects of factors such as pedestrian competition levels under normal and emergency conditions, and crowd density, on the behaviour of pedestrians is therefore an important topic. In this study, we performed experiments focusing on the interaction of crowds with their surrounding physical environment; specifically, we observed how pedestrians avoid obstructions in a compound indoor environment at different speed levels (low–high) and density levels (low–high). This research aimed to study the effect of various obstacle sizes (1.2 m, 2.4 m, 3.6 m and 4.8 m) on human behaviour (walking and running) at particular density levels (or flow rates). Several factors that affect the movement of pedestrians around objects were studied using macro- and micro-level approaches.
The results were then utilised to enhance a pedestrian simulation model developed at the University of Melbourne over the past 10 years. The outcome of this study was used to investigate the obstacles' positions, the exit locations, and the placement of obstacles around the exit to improve the movement of crowds under normal and emergency conditions.
Development of innovative non-destructive testing techniques for structural health monitoring of bridges
Bridges are a critical component of transportation networks. The long-term maintenance of bridges represents around 30% of the financial value of transport infrastructure. As a bridge ages, it requires adequate maintenance against deterioration to ensure safety and serviceability. Periodic inspections for regular condition monitoring are therefore vital for timely implementation of maintenance strategies. The purpose of this doctoral research is to reduce the burden of data collection for bridge management systems (BMS) by developing innovative non-destructive testing (NDT) techniques that can quickly check bridge elements for damage. This study mainly focused on the bridge concrete deck and bearings. Current bridge deterioration models are based on the condition state of the individual elements of the bridge. Bridge condition data is conventionally collected through visual and physical inspection of the respective elements, which is labour intensive and expensive in terms of both time and money. Furthermore, these techniques are subjective and unable to detect concealed subsurface defects, which can be more expensive to repair later if not detected in time. For effective bridge condition rating, integrating NDT techniques with conventional visual inspections could overcome the deficiencies associated with conventional inspection methods. The largest portion of bridge maintenance expenditure goes to the deck alone. Subsurface delamination in concrete bridge decks is a widespread problem caused by corrosion of the reinforcement in the deck. Infrared thermography (IRT) has been identified as an effective NDT technique that can remotely scan concrete bridge members for subsurface delamination. However, several uncertainties and deficiencies remain in using IRT for efficient bridge inspections.
These uncertainties are highlighted and potential advancements in knowledge are proposed in the current study. Since IRT uses the thermal profile of the concrete surface to identify subsurface damage, the surrounding environmental parameters have a significant impact on IRT results. Specifically, this study focused on investigating the optimum environmental conditions for IRT application on bridge decks exposed to direct solar radiation, the application of IRT to bridge members that are not exposed to direct solar radiation, and the quantitative characterisation of subsurface defects using IRT. The research outcomes will contribute to determining the optimum IRT inspection time for different defects (e.g. size and depth) under various weather conditions, in particular for defects in bridge members where IRT is difficult to implement (e.g. members not directly exposed to solar radiation). The bridge bearings provide a resting surface for the superstructure. Repetitive traffic loading and harsh environmental conditions can cause significant deterioration of the mechanical stiffness of bridge bearings, which may affect the performance of the overall bridge structural system. In this study, a remote radar-based NDT technique (IBIS-FS) was proposed for condition assessment of bridge bearings. By establishing the relationship between the fundamental frequency of the bridge superstructure and the support conditions, a simplified analytical approach in conjunction with IBIS-FS radar bridge inspection is proposed for effectively determining the current mechanical stiffness of bridge bearings.
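The principle behind IRT delamination detection is that a subsurface void changes the surface thermal profile, so defective areas appear as local temperature anomalies. A drastically simplified stand-in for this idea, with a hypothetical contrast threshold, is to flag pixels that deviate from the image mean by more than a fixed amount:

```python
def flag_delamination(temps, threshold=0.6):
    """Flag pixels whose temperature deviates from the image mean
    by more than `threshold` degrees C. A simplified stand-in for
    IRT anomaly detection; the threshold value is illustrative and
    would in practice depend on defect depth and weather conditions."""
    mean = sum(sum(row) for row in temps) / sum(len(row) for row in temps)
    return [[abs(t - mean) > threshold for t in row] for row in temps]

# Hypothetical 3x3 thermal image (deg C); the warm centre pixel,
# mimicking a shallow delamination under solar heating, is flagged.
image = [[20.0, 20.1, 19.9],
         [20.0, 21.5, 20.0],
         [19.9, 20.1, 20.0]]
print(flag_delamination(image))
```

The study's focus on optimum inspection times reflects exactly this dependence: the usable contrast between sound and delaminated concrete varies with heating conditions and defect geometry.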
Improving the performance of facade systems
The facade system is one of the most important components of a building. Beyond aesthetics, it provides overall protection against weather and regulates thermal performance. Failures in facade systems can be costly. Current cladding materials have several limitations related to durability, maintenance, corrosion, high thermal conductivity, high embodied energy, flammability, breathability, expansion and contraction, water ingress, cracking, and weight. Any of these can lead to facade failure and considerable financial loss, and pose a safety risk to occupants. Thus, when developing new technologies, materials, and facade systems, facade failures and their related costs are critical considerations. Two potential issues within facade systems that could adversely affect performance are corrosion of steel components and the fire performance of cladding. Approximately twenty percent of the world’s annual steel production is lost to corrosion. In Australia, corrosion may cost up to $32 billion per annum, which is more than $1500 for every person in Australia each year. Two million fires are reported in Europe annually, and 70,000 people are hospitalised in Europe each year due to severe injuries caused by fire. Forty-two percent of building fires start on the exterior wall surface, and the rest are related to items inside the facade system, which also spread the fire. Based on a comprehensive literature review on corrosion of steel, combined with moisture transfer simulations using the Wärme und Feuchte instationär (WUFI) software, the risk of moisture penetration and the potential corrosion of steel in a rain-screen facade system are found to be small. Detailed practical guidance for designing and specifying steel components against corrosion is presented.
With regard to improving cladding fire performance, this thesis focused on the development and fabrication of a new type of cladding material, a 3D glass fibre reinforced polymer (GFRP) nanocomposite, with improved thermal stability, fire performance, and tensile properties. 3D GFRP nanocomposite samples were fabricated with 5% and 10% of sepiolite (Sep) and sepiolite-phosphate (SepP), and 5%, 10%, and 15% of ammonium polyphosphate (APP) flame retardant. The synthesis of SepP, the dispersion of nanoparticles in the polymer, and the fabrication process were studied. The materials were characterised using scanning electron microscopy (SEM), helium ion microscopy (HIM), transmission electron microscopy (TEM), thermogravimetric analysis (TGA), and X-ray diffraction analysis (XRD). The thermal stability, fire behaviour, and tensile properties of the 3D GFRP nanocomposite were studied via TGA, cone calorimeter tests, and tensile tests, respectively. TGA results showed that the optimum amount of additive for improving thermal stability and decomposition temperature is 15% flame retardant. According to the cone tests, increasing the APP flame retardant percentage (between 0 and 15%) remarkably improved the fire reaction properties of the 3D GFRP nanocomposite regardless of the presence of Sep/SepP nanoparticles. The effect of APP flame retardant in improving the fire performance of the 3D GFRP nanocomposite is remarkably higher than that of Sep fibres and SepP nanoparticles. Among Sep, SepP, and APP, the APP flame retardant is better at improving thermal and fire reaction properties, while Sep fibres are better at improving the tensile properties of the 3D GFRP nanocomposite. Furthermore, the Sep samples showed higher ultimate strength (6%-30%) and strain (2%-39%) than the SepP samples. Higher percentages of Sep/SepP nanoparticles (10%) also showed better tensile properties than lower percentages (5%).
The cone calorimeter test results indicate that the 3D GFRP nanocomposite is a prospective cladding material that can benefit the construction industry. With more, properly instrumented, full-scale facade system tests in the future and optimisation of manufacturing, a more robust approach (e.g. using computational fluid dynamics modelling) needs to be developed that allows results from bench-scale tests, such as cone calorimeter tests, to be used to infer the fire performance of facades with alternative cladding materials in full-scale tests.
Interaction between consolidation and lubrication of articular cartilage
Articular cartilage is a biological bearing in the diarthrodial joints of vertebrate animals, with remarkable lubrication performance that outperforms the best manufactured bearings. However, debate between competing cartilage lubrication theories (weeping and boosted) remains fierce. This thesis focuses on the interactive effects between cartilage consolidation and lubrication by developing a coupled cartilage contact model. The contact interface (surface roughness and polymers) was modelled by a poroelastic system on top of the cartilage tissue. The results favoured the weeping lubrication theory.
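Consolidation of a fluid-saturated porous layer is classically described by a pore-pressure diffusion equation. The sketch below is a textbook one-dimensional explicit finite-difference solver for dp/dt = cv * d2p/dz2 with drained boundaries, illustrating the kind of pore-pressure dissipation that underlies consolidation; it is not the thesis's coupled poroelastic contact model, and the parameter values are illustrative:

```python
def consolidate(pressures, cv, dz, dt, steps):
    """Explicit finite-difference solution of the 1D consolidation
    (pore-pressure diffusion) equation dp/dt = cv * d2p/dz2 with
    drained (p = 0) boundaries. Textbook sketch, not the thesis's
    coupled cartilage contact model."""
    r = cv * dt / dz**2
    assert r <= 0.5, "explicit scheme stability limit"
    p = list(pressures)
    for _ in range(steps):
        p = [0.0] + [p[i] + r * (p[i + 1] - 2 * p[i] + p[i - 1])
                     for i in range(1, len(p) - 1)] + [0.0]
    return p

# Uniform initial excess pore pressure dissipates toward the drained faces
p0 = [0.0] + [1.0] * 5 + [0.0]
print(consolidate(p0, cv=1.0, dz=1.0, dt=0.25, steps=20))
```

In the weeping picture, it is this pressurised interstitial fluid, exuded at the loaded surface as the tissue consolidates, that supports load and lubricates the contact.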
Understanding anastomosed landscapes through satellite and Indigenous eyes: A Nguku-Cooper Creek case study
The Nguku/KaRirra-Cooper Creek/Wilson River Confluence of the Kati Thanda (Lake Eyre) Basin, like many dryland water resources and their associated ecologies, is increasingly under pressure from human activity and climate change. Sustainable water management requires quantitative monitoring of what is happening and when, why and how the effects are occurring, and who or what is causing change (positive or negative). This is especially difficult for such a temporary and heavily anastomosed river reach, due to its extreme natural variability, multi-year climate cycles, poorly tracked and slowly responding ecology, sparse instrumentation, and problematic access. Technical data about the Confluence can be sourced from measurements and satellite imagery, including 30 years of Landsat spectral data, verified during field expeditions. But technical data, with its limited duration, sampling frequency and extent, can only tell part of the story. A complementary source of information about the Confluence is the human lived experience, in the form of cultural stories that communities tell about their environment. The Indigenous Environmental Knowledge of the Wangkumara people of the KaRirra-Wilson River covers all parts of the Confluence hydrological cycle and interrelates it with cultural, historic, and ecological information. European accounts of exploration, re-naming, and settlement of the Nguku/KaRirra-Cooper Creek/Wilson River region from the 1800s to the present day are accessible through contemporaneous writings, maps, and archives. Interviews with long-term residents provide information about more recent events. Archaeological studies further underpin knowledge that may hark back centuries or longer.
This thesis develops a Worldview Methodology to address some of the major ethical and methodological challenges academic researchers face when accessing social and particularly Indigenous knowledge, given the different systems of knowledge management and control, and to promote appropriate use of Indigenous and other social knowledge in this contemporary hydrological study. The complicated Confluence landscape is systematised using Landscape Units, and its convergent/divergent drainage network is ordered by Extended Stream Order/Magnitude. Surface status is classified at pixel level using a three-way Water/Bare/Vegetated method, reflecting the significance of vegetation in tracking moisture. At feature level, waterholes are quantitatively assigned Permanent/Intermittent/Ephemeral classifications. And at landscape level, Ribbon Plots illustrate spatial and statistical water presence over time along selected paths or transects. Using these tools to combine technical data, fieldwork, and Indigenous and social knowledge, this thesis tells a quantified cultural story of long-term water behaviour at the Nguku/KaRirra-Cooper Creek/Wilson River Confluence. It investigates three claims by current Confluence residents that flow behaviour has changed as a result of construction work, plus one hydrological examination of a Wangkumara Story, quantifying how the journey of the ancestral spirit Marnpi the Bronzewing Pigeon across a difficult arid landscape identifies persistent waterholes and other ecological features. The examples in this thesis show how the interpretation of technical data can be improved by integration with the human lived experience via cultural stories. More broadly, the principles and methods can be applied to any multiple-channel, intermittent, or dryland system, allowing more informed and inclusive environmental management. Ultimately, it shows that social knowledge and Indigenous Environmental Knowledge are relevant to well-informed environmental management.
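A three-way surface classification of the kind described above is commonly built from spectral indices such as NDVI and NDWI computed from Landsat-style green, red and near-infrared reflectances. The sketch below uses illustrative thresholds and a generic index-based rule; it conveys the flavour of a pixel-level Water/Bare/Vegetated decision, not the thesis's actual classification method:

```python
def classify_pixel(green, red, nir, ndvi_t=0.3, ndwi_t=0.0):
    """Classify a pixel as Water, Vegetated, or Bare from surface
    reflectances using NDVI and NDWI with illustrative thresholds.
    A simplified reading of a three-way classification, not the
    thesis's calibrated method."""
    ndvi = (nir - red) / (nir + red)    # vegetation index
    ndwi = (green - nir) / (green + nir)  # water index (McFeeters form)
    if ndwi > ndwi_t:
        return "Water"
    if ndvi > ndvi_t:
        return "Vegetated"
    return "Bare"

# Hypothetical reflectance triples (green, red, nir)
print(classify_pixel(green=0.30, red=0.25, nir=0.05))  # Water
print(classify_pixel(green=0.08, red=0.06, nir=0.40))  # Vegetated
print(classify_pixel(green=0.20, red=0.22, nir=0.25))  # Bare
```

Aggregating such per-pixel labels over time along a transect is what makes feature-level Permanent/Intermittent/Ephemeral assignments and landscape-level Ribbon Plots possible.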
Spalling of concrete exposed to fire
The increasing use of concrete in the building industry is related to its favourable characteristics, such as durability, incombustibility, and low cost. However, fire accidents around the world illustrate that concrete becomes vulnerable when exposed to a fire with a high heating rate, such as a hydrocarbon fire. This condition intensifies the occurrence of a phenomenon called “spalling”. Spalling, one of the most detrimental effects, is defined as a thermal instability in concrete exposed to fire. Many experimental observations report that when a concrete member is exposed to fire, microcracks are generated both within and on the surface of the concrete. As the temperature increases, the cracks develop further and pieces of concrete dislodge from the exposed surface. This exposes the steel reinforcement to high temperature and reduces the concrete cross-section, resulting in capacity loss of the structural elements and sometimes even eventual failure of the entire structure. Despite the abundance of research on the fire performance of concrete members, researchers have not yet reached consensus on the real physics of fire spalling and the relative contribution of its influencing factors. Therefore, no unique and comprehensive guidelines have been published for the prevention of fire spalling in concrete structures. The erratic nature of spalling, the inhomogeneous structure of concrete, and the numerous interdependent influencing factors result in a weak understanding of fire spalling of concrete. The unclear mechanism of spalling in concrete structures subjected to fire increases the challenge for designers in engineering applications of high-strength concrete (HSC), which carries a high risk of fire spalling and is increasingly used in new concrete structures such as high-rise buildings and tunnels.
Based on this research gap, this study aims to present a comprehensive investigation of the effect of influencing factors on fire spalling of concrete materials, to achieve a clear view of the mechanism of spalling in both normal-strength concrete (NSC) and high-strength concrete (HSC). The effect of the cooling phase on the fire performance of concrete structures is investigated through experimental and numerical analysis. The concrete structures are cooled down to room temperature after being subjected to fire; the condition and residual mechanical properties of the cooled structure are important for its future use. The specification of the cooling-down process, including the duration of heating and the cooling rate, is also investigated with respect to the fire performance of concrete elements. The fire performance of concrete members is investigated at macro scale through experimental studies and microstructural analyses using Volume of Permeable Voids (VPV), Optical Microscope (OM), and Scanning Electron Microscope (SEM) tests. The results indicate that higher density causes higher pore pressure inside HSC, which leads to higher porosity and the generation of more cracks inside the concrete. The internal cracks reduce the stability of the HSC structure and make this type of concrete more vulnerable to fire spalling. Then, a thermo-mechanical finite element analysis (FEA) is conducted on reinforced concrete (RC) walls using Abaqus software, version 6.3. A previous experimental study is used to validate the numerical results and to conduct a parametric study. The results illustrate that the compressive strength of concrete is one of the main factors governing its fire performance. Results also indicate that, unlike in NSC, thermal stress does not have a significant effect on the fire spalling of HSC structures.
In addition to the compressive strength, the heating rate and the specification of steel rebars have a considerable influence on the fire performance of concrete. To extend the study of the fire spalling behaviour of concrete members, a thermo-hydromechanical FEA is then conducted to cover the effect of pore pressure, in addition to thermal and mechanical stresses, on the fire spalling of concrete walls. The results highlight the difference in the spalling mechanisms of HSC and NSC. In HSC, the effect of pore pressure is more significant than the effect of thermal stress; in NSC, thermal stress plays the more important role in fire spalling. Given this difference, mitigation methods based on reducing the thermal gradient are necessary for NSC structures, whereas for HSC members, given the significant effect of porosity and pore pressure, methods that reduce or redistribute the pore pressure generated at high temperatures need to be considered in establishing a more applicable guideline for each type of concrete.
Collapse Behaviour of Prefabricated Modular Buildings Under Seismic Conditions
Modular construction is becoming increasingly popular due to its advantages, which outweigh its disadvantages to a great extent. With the continuous growth of modular construction, the industry is moving towards pure modular construction, which requires minimal work onsite. Pure modular buildings require no external lateral force resisting systems (LFRS), which can add significant time and cost. Modular buildings are very different from conventional structures in construction nature: a modular structure can be considered an alternating arrangement of modules and connections. Hence, a modular building is discrete in terms of stiffness and strength distribution along the building height, whereas that of a conventional building tends to be continuous. However, despite these differences, modular structures are designed according to conventional building standards, which fail to take the inherent structural differences between modular and conventional structures into account. Furthermore, most of the connections used in modular structures resemble the bolted connections used in conventional construction. Bolted connections cannot provide sufficient ductility to a structure because of the brittle nature of their failure. As a module is usually stronger than the inter-modular connections, failure is expected to occur at the connections while the rest of the building remains in the elastic state. Hence, if this most critical connection is a bolted connection, or any other connection lacking sufficient energy-dissipating capacity, then under a seismic event exceeding the design limit the structure will undergo a brittle failure initiated by the failure of its connections.
Moreover, since the connections of a single storey form a zone of high inelasticity concentration, failure of one connection may trigger the failure of other connections of the same storey in an unzipping manner across that storey. This produces an abrupt change in the damage state of the structure, with adverse effects on its serviceability. Furthermore, an unzipping of connections across one storey may leave a colossal mass of structure, comprising a stack of modules, standing freely and liable to overturn and collapse under a further increase in ground motion intensity. Pure modular structures are not fully realised in current practice. However, since the current practice in modular construction relies on conventional building design standards, techniques and practices, none of these hinders the transformation to fully modular construction. There are no rules or regulations in current practice requiring external lateral force resisting systems (LFRS) for structures built in low-to-moderate seismic regions. Hence, if a structure without any external LFRS can still satisfy the code requirements, it can be built. Given the enormous time and cost savings from eliminating any external LFRS, designers might soon opt for pure modular structures while still designing them as per conventional design standards. Some researchers have already investigated this transition. However, the risk involved, as discussed above, needs to be studied so that additional design considerations can be developed for pure modular structures, rather than merely adhering to conventional code requirements. Even though the highlighted risk arises under higher seismic intensities (beyond design limits) resulting from destructive intraplate earthquakes, which are considered infrequent, the location of such future intraplate earthquakes cannot be predicted.
As pure modular buildings become widespread, they will start replacing conventional structures throughout the world, and the chance of such a building being severely affected by an earthquake increases. The main aim of this study is to highlight the potential risk involved in pure modular buildings built with conventionally available bolted connections when they undergo dynamic events exceeding their design intensity. A mid-rise (ten-storey) pure modular structure designed to Australian design standards for Melbourne conditions was chosen as the prototype building. Since the study aims at highlighting the risk involved when such buildings are designed to conventional building design standards without any special consideration, the study was conducted in comparison with a conventional steel frame analogous in physical dimensions to the chosen modular building. The study involved developing high-fidelity finite element (FE) models of the modular and conventional structures to study their feasibility and adherence to code requirements. Endurance time excitation functions (ETEFs) were employed to study the seismic performance of the structures, especially under increasing seismic intensities. These ETEFs were developed as an alternative to the conventional incremental dynamic analysis methods used to study the seismic response of structures, more specifically their collapse behaviour. The study modelled the ultimate performance of the inter-modular connections between adjacent storeys, as well as the near-collapse response of the building as a whole, using numerical simulations along with shaking table experiments for validation. The simulations were scoped at investigating the behaviour of the building in a damaged state up to the onset of wholesale collapse.
Based on the collapse response observed in the FE models and during the shaking table experiments, further numerical and experimental models were developed to simulate and study in depth the collapse response of the modular structure. The study began with an investigation of the feasibility of the prototype modular and conventional steel buildings in terms of code requirements. Modal analysis, static pushover analysis and nonlinear dynamic analysis within the code-specified limits proved that both structures performed satisfactorily within the design limits. However, the modular structure failed catastrophically once the design limit was exceeded, while the conventional steel frame continued to fail progressively. The discrete nature of the construction was highlighted during the analysis of the prototype models, where the connections started giving way while the modules remained undamaged. Analysis of the experimental and numerical models, together with observations from the prototype FE models, revealed that failure in the modular structure initiated from the edge connections of the first storey. Failure of the edge connections caused a sudden increase in the stresses in the internal connections and an abrupt change in the global damage of the structure. The sudden increase in connection forces led to failure of the internal connections of the first storey, and hence of all the connections of that storey in an unzipping manner. This left the stack of modules above the first storey as a free-standing block, which experienced sliding, rocking, and overturning under the remainder of the ground motion.
The free-standing block thus formed (referred to as the control model henceforth) was analysed separately on the shaking table under the same stretch of ground motion corresponding to the free-standing phase of the modular building superstructure; it exhibited pure rocking without overturning. This observation was attributed to the absence, in the control model, of the energy input from the failure of the connections just before the start of the free-standing phase, which sets the initial conditions for the free-standing motion of the superstructure. A numerical model of the control model, validated using the experimental results from the shaking table analysis, was employed to verify this conclusion through further analysis. In addition, the ground displacement was checked against the half-width of the control model. The ground movement was far smaller than the half-width, supporting the conclusion that the energy released from the failure of the connections just before the free-standing motion was the sole cause of the overturning. Moreover, the validated numerical model was employed to understand and elaborate on the factors exacerbating and mitigating overturning of the modular building superstructure. In contrast to the catastrophic failure observed in the modular structure, the conventional structure, which comprised a ductile braced frame, demonstrated failure initiation and progression through the bracing members, which are not a primary component of the structural performance. Even though the prototype modular structure contained braces inside the modules, the lateral loads resisted by the braces were ultimately transferred to the inter-modular connections.
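The comparison of ground displacement against the block half-width reflects classical rigid-block rocking mechanics. A minimal sketch of the two checks involved, assuming the standard Housner rigid-block criterion and with all dimensions invented for illustration (the thesis's actual block geometry is not given in the abstract):

```python
G = 9.81  # gravitational acceleration, m/s^2

def rocking_initiation_accel(half_width_b, half_height_h):
    """Housner rigid-block criterion: a free-standing rigid block starts
    rocking when the horizontal base acceleration exceeds g * b / h."""
    return G * half_width_b / half_height_h

def static_tipping_possible(ground_disp, half_width_b):
    """Quasi-static check mirroring the thesis's argument: base displacement
    alone cannot tip the block unless it exceeds the half-width, so a
    smaller displacement points to another energy source for overturning."""
    return abs(ground_disp) >= half_width_b
```

For example, a ground movement of 0.3 m against a 1.0 m half-width fails the static tipping check, consistent with the conclusion that the connection-failure energy input, not the ground motion itself, drove the overturning.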
Moreover, since the braces inside the upper and lower modules do not intersect at a single working point of the connection (because of the gap between the modules), the connections experience an unbalanced loading from the lateral loads transferred from the upper and lower modules. This does not happen in a conventional structure because of its continuous nature. Hence, in the conventional steel frame, the braces, which have a higher ductile capacity, gave way in sequence, starting from the lower storeys and moving gradually up the structure. This distributed failure of ductile components resulted in progressive failure of the conventional steel frame even beyond the design limits. The hazardous nature of the collapse of pure modular structures, as highlighted in this study, addresses a critical gap in modular building research. The need for this study arises explicitly from current practices in modular construction, which involve the use of conventional bolted connections as inter-modular connections and the use of conventional building design codes to design modular structures. As demonstrated through this study, conventional design codes can ensure safe performance of conventional structures even beyond the design limits; adherence to the same codes did not guarantee safe performance of the modular structure beyond the design limits. Furthermore, this study brought out the unique nature of the failure initiation, progression and collapse involved in modular structures due to the discrete nature of their construction. This component of modular structures' response has been missing from the research on modular structures, specifically under dynamic events.
The collapse process, studied in depth in this research, can be utilised to improve the safety of the response of modular structures under dynamic loadings and to develop a safer pure modular system. Based on the findings of this study on the collapse response of conventional and modular structures, recommendations are presented for future researchers and designers to follow in designing pure modular structures. These recommendations are made by critically analysing and comparing the performance of the modular and conventional structures and by addressing what is missing in the response of modular structures that makes it catastrophic.
The physical internet for city logistics
The marketplace for city logistics is being shaped by the explosive growth of freight jobs and the higher time-sensitivity requirements driven by e-commerce. There are more freight vehicles moving in urban areas now than ever before, operated primarily by either professional logistics companies or crowdsourced delivery contractors. This urban freight boom generates higher levels of traffic congestion, adverse environmental impacts and other related effects that reduce the overall quality of urban life. There is a need for innovative and effective solutions in city logistics that optimize freight transportation systems within urban areas, especially to account for the fast-growing market for on-demand and/or instant delivery jobs. The Physical Internet (PI) is a novel concept for freight transportation and city logistics that can potentially address this need. In the PI, freight carriers work in an open, shared and collaborative system with interconnected operational networks. Although the conceptual framework and functional design for the PI and some of its components have been developed progressively in recent years, practical studies focused on implementation of the PI that engage different stakeholders and address their individual objectives are lacking. This thesis investigates the implementation of the PI for reshaping city logistics, focusing on the roles and objectives of multiple stakeholders and the technological capabilities needed to bring the concept into practice.
Addressing both tactical-level and operational-level issues, this thesis presents: (1) an auction-based open trading system that enables dynamic optimization of the allocation/reallocation of freight jobs amongst different stakeholders, each with multiple objectives; this involves a multi-agent modelling approach to study the interaction of multiple self-interested agents in a complex environment; (2) the use of widespread parcel lockers or community stores as transhipment hubs (PI-hubs) to enable flexible transhipment and to interconnect multiple freight carriers; the PI-hub concept is supported by an auction-based open trading system with flexible transhipment and a solution to the transhipment-based routing problem for the interconnected network, a contribution to the Pickup and Delivery Problem with Transhipment and Time Windows; (3) a reinforcement learning (RL) enabled dynamic bidding strategy for freight carriers in the auction-based freight transportation procurement platform, to improve carriers' decision making and actions in a stochastic and uncertain environment; and (4) an open trading system in which the auction platform operator is also considered a self-interested agent. Multiple stakeholders, including freight carriers, freight shippers and the auction platform operator, make decentralized decisions based on their own objectives: freight shippers aim to minimise freight costs, while freight carriers and the auction platform operator aim to maximise their own profit. Deep Q-Network based RL has been designed and used for multiple stakeholders to optimize their behaviour in a dynamic management environment. Multiple implementation scenarios have been simulated, and their results analysed and discussed based on a hypothetical network in Metropolitan Melbourne.
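The RL-enabled bidding idea can be illustrated with a deliberately simplified stand-in: tabular Q-learning in place of the thesis's Deep Q-Network, with the carrier states, bid levels, costs and competitor model below all invented for the sketch:

```python
import random

# Minimal tabular Q-learning stand-in for a DQN bidding strategy.
# States, bid levels, costs and the auction model are illustrative only.
BID_LEVELS = [8, 10, 12, 14]        # candidate bid prices for a freight job
STATES = ["low_load", "high_load"]  # carrier's current workload

def train_bidder(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in BID_LEVELS}
    for _ in range(episodes):
        state = rng.choice(STATES)
        if rng.random() < eps:                       # explore
            bid = rng.choice(BID_LEVELS)
        else:                                        # exploit current estimate
            bid = max(BID_LEVELS, key=lambda a: Q[(state, a)])
        # Stylised reverse auction: the lowest bid wins the job;
        # the competing bid is drawn uniformly from [9, 13].
        competitor = rng.uniform(9, 13)
        cost = 6 if state == "low_load" else 11      # marginal service cost
        reward = (bid - cost) if bid < competitor else 0.0
        # One-step update; each auction is a terminal episode, so no
        # bootstrapped next-state term is needed.
        Q[(state, bid)] += alpha * (reward - Q[(state, bid)])
    return Q
```

The learned table comes to favour low bids when the carrier's marginal cost is low and higher bids when it is high; in the thesis's setting a deep network replaces the table so that much richer state descriptions of the stochastic market can be handled.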
In conclusion, online auctions, parcel lockers and RL contribute to bringing the PI concept into practice for city logistics. The two-stage online auction framework enables an open and shared city logistics system. A parcel-locker-enabled interconnected network can further interconnect and optimize logistics operations. Reinforcement learning is an intelligent method for improving stakeholders' dynamic decisions in logistics management. Moreover, the Deep Q-Network demonstrated better learning efficiency than some other learning approaches in the uncertain and fluctuating environment of city logistics.