Infrastructure Engineering - Theses

Now showing 1 - 9 of 9
  • Item
    A framework for micro level assessment and 3D visualisation of flood damage to a building
    AMIREBRAHIMI, SAM ( 2016)
    Flood Damage Assessment (FDA) is a key component of the flood risk management process. By highlighting the potential consequences of floods, FDA enables evidence-based risk management through the selection of optimal risk reduction measures in the community. FDA is generally performed at three main scales, namely Macro, Meso and Micro. For assessing potential flood damage at these levels, various categories of vulnerable elements (e.g. roads, people, buildings) are accounted for. Among these elements, buildings are the most notable and are considered in nearly all current FDA methods due to their significance to the economy. In addition, with the increasing risk of floods due to climate change, attention to improving the flood resilience of buildings is growing. This leads to the need for a more profound understanding of fluid-structure interaction and for assessing the potential damage and risk to a building from floods in the early design and planning stages. Among the FDA methods, only Micro-level assessments can provide a separate analysis for each building, in contrast to the aggregated land-use inputs of Macro and Meso models. However, current micro-level FDA models cannot account for the distinct characteristics of each building and its unique behaviour against floods, and are therefore associated with high uncertainty. Additionally, current models account only for either damage from flood loads or damage resulting from floodwater coming into contact with water-sensitive components; this leads to incomplete outputs and further increases the uncertainty of the results. Moreover, existing FDA models mostly focus on the quantitative assessment of damage and do not communicate the mode or type of damage to important decision makers (e.g. designers and engineers), which inhibits the optimal selection of measures for reducing risk to buildings. While the needs of larger-scale applications are well satisfied by existing FDA methods, the highlighted limitations hinder the use of these methods for effectively assessing damage and risk where individual buildings are the focus of the analysis. To address these limitations of previous models, this multidisciplinary research adopted the Design Science Research Methodology to develop an FDA framework that allows detailed micro-level assessment and 3D visualisation of flood damage to a building according to its unique characteristics and behaviour against floods. The processes proposed in the framework were designed in detail according to well-established theories in a number of related domains. Moreover, by developing a new BIM-GIS integration method, rich inputs about a building and flood parameters could be provided to the framework, effectively overcoming the data input limitations of current FDA models. The framework was realised through the development of a prototype system on the basis of the proposed guidelines. A dual evaluation of the framework, using internal validity checking as well as a case study, underlined the feasibility of implementation and the effective application of the framework to solving real-world problems. The benefits of the proposed framework for the assessment and communication of flood damage at the building level were also highlighted to a variety of users. The framework can be employed as a complementary approach to current FDA models for improving the resilience of the community towards floods and their adverse impacts.
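
A minimal sketch of the component-level, depth-based damage idea described in this abstract, under assumed inputs: the components, elevations, capacities and damage rules below are hypothetical illustrations, not the thesis's actual BIM-derived model or fluid-structure analysis.

```python
# Illustrative only: classifies building components as damaged either by
# hydrostatic load or by contact with floodwater, given a flood depth.
# All components, thresholds and rules here are invented for the sketch.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    bottom_elevation_m: float   # height of component base above floor level
    water_sensitive: bool       # damaged on contact (e.g. plasterboard)
    load_capacity_kpa: float    # assumed lateral load capacity

def assess(components, flood_depth_m, water_density=1000.0, g=9.81):
    """Return a per-component damage mode: 'load', 'contact' or None."""
    results = {}
    for c in components:
        submerged = max(0.0, flood_depth_m - c.bottom_elevation_m)
        # Simplified hydrostatic pressure at the component base (kPa).
        pressure_kpa = water_density * g * submerged / 1000.0
        if submerged <= 0:
            results[c.name] = None
        elif pressure_kpa > c.load_capacity_kpa:
            results[c.name] = "load"
        elif c.water_sensitive:
            results[c.name] = "contact"
        else:
            results[c.name] = None
    return results

if __name__ == "__main__":
    building = [
        Component("plasterboard wall lining", 0.0, True, 6.0),
        Component("external brick wall", 0.0, False, 12.0),
        Component("electrical outlet", 0.3, True, 99.0),
    ]
    print(assess(building, flood_depth_m=0.8))
```

Separating the "load" and "contact" modes mirrors the abstract's point that damage mechanisms, not just a damage quantity, need to be communicated to designers and engineers.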
  • Item
    Analysis of the positional accuracy of linear features.
    Lawford, Geoffrey John ( 2006-09)
    Although the positional accuracy of spatial data has long been of fundamental importance in GIS, it is still largely unknown for linear features. This is compromising the ability of GIS practitioners to undertake accurate geographic analysis and hindering GIS in fulfilling its potential as a credible and reliable tool. As early as 1987 the US National Center for Geographic Information and Analysis identified accuracy as one of the key elements of successful GIS implementation. Yet two decades later, while there is a large body of geodetic literature addressing the positional accuracy of point features, there is little research addressing the positional accuracy of linear features, and still no accepted accuracy model for linear features. It has not helped that national map and data accuracy standards continue to define accuracy only in terms of “well-defined points”. This research aims to address these shortcomings by exploring the effect on linear feature positional accuracy of feature type, complexity, segment length, vertex proximity and e-scale, that is, the scale of the paper map from which the data were originally captured or to which they are customised for output. The research begins with a review of the development of map and data accuracy standards, and a review of existing research into the positional accuracy of linear features. A geographically sensible error model for linear features using point matching is then developed and a case study undertaken. Features of five types, at five e-scales, are selected from commonly used, well-regarded Australian topographic datasets, and tailored for use in the case study. Wavelet techniques are used to classify the case study features into sections based on their complexity. Then, using the error model, half a million offsets and summary statistics are generated that shed light on the relationships between positional accuracy and e-scale, feature type, complexity, segment length, and vertex proximity. Finally, auto-regressive time series modelling and moving block bootstrap analysis are used to correct the summary statistics for correlation. The main findings are as follows. First, metadata for the tested datasets significantly underestimates the positional accuracy of the data. Second, positional accuracy varies with e-scale but not, as might be expected, in a linear fashion. Third, positional accuracy varies with feature type, but not as the rules of generalisation suggest. Fourth, complex features lose accuracy faster than less complex features as e-scale is reduced. Fifth, the more complex a real-world feature, the worse its positional accuracy when mapped. Finally, accuracy mid-segment is greater than accuracy end-segment.
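
As a rough illustration of two steps mentioned in this abstract, generating offsets between matched points on a test line and a reference line and then correcting summary statistics for serial correlation with a moving block bootstrap, the following sketch uses only the Python standard library; the index-based matching, data, block length and bootstrap settings are assumptions, not the thesis's actual error model.

```python
# Sketch: point-matched offsets between two polylines plus a moving block
# bootstrap of the mean offset; inputs and parameters are illustrative only.
import math
import random
import statistics

def offsets(test_line, reference_line):
    """Euclidean offsets between vertices matched by index (a crude stand-in
    for a geographically sensible point-matching model)."""
    return [math.hypot(tx - rx, ty - ry)
            for (tx, ty), (rx, ry) in zip(test_line, reference_line)]

def moving_block_bootstrap(series, block_len=5, n_boot=1000, seed=0):
    """Bootstrap the mean of a serially correlated series using overlapping
    blocks, returning (mean, approximate 95% interval)."""
    rng = random.Random(seed)
    n = len(series)
    blocks = [series[i:i + block_len] for i in range(n - block_len + 1)]
    means = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:
            sample.extend(rng.choice(blocks))
        means.append(statistics.fmean(sample[:n]))
    means.sort()
    return statistics.fmean(series), (means[int(0.025 * n_boot)],
                                      means[int(0.975 * n_boot)])

if __name__ == "__main__":
    reference = [(x, 0.0) for x in range(50)]
    test = [(x, 0.5 + 0.3 * math.sin(x / 5)) for x in range(50)]
    d = offsets(test, reference)
    print(moving_block_bootstrap(d))
```

Resampling whole blocks rather than individual offsets preserves the short-range correlation along a line, which is why a block bootstrap (rather than an ordinary one) is appropriate for offsets generated at closely spaced vertices.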
  • Item
    Spatial cadastral information systems: the maintenance of digital cadastral maps
    Effenberg, Wolfgang ( 2001-05)
    The management of a cadastral system's digital spatial data has prompted considerable research, generally with a focus limited to the organisation maintaining the cadastral map. The approach of viewing the maintenance of cadastral maps as a system encompassing the entire cadastral industry has not been comprehensively studied and documented. This approach is seen as essential to transform cadastral mapping from its current organisation-specific isolation into a form that is truly interoperable with the processing of spatial cadastral information in a digital environment. This dissertation documents a research program that is essentially a definition, analysis and design of spatial cadastral systems, with particular emphasis on the Australian State of Victoria. The research substantiates the existence of a spatial cadastral system within the overall cadastral system. A review of the analysis of a number of international, western spatial cadastral systems is presented, and establishes the boundary of the spatial cadastral system. An investigation of system methodologies used in cadastral research and information systems concludes the applicability of the Zachman Framework to structure and document the more comprehensive analysis of spatial cadastral systems. This analysis is undertaken for the spatial cadastral systems of the Australian State of Victoria. The impacting developments, such as enabling technology, coupled with user requirements and issues relating to existing spatial cadastral systems, provide the basis for the presentation of a range of solution alternatives to manage the spatial data associated with the maintenance of the multipurpose cadastral map in a digital and Internet-enabled environment.
  • Item
    The effective implementation of GIS in local government using diffusion theory
    Dooley, P. ( 2001-06)
    Geographical Information Systems (GIS) are proving difficult to both define and effectively implement in Victorian Local Government. Current innovation diffusion theory, and emerging GIS and IS implementation theory, are used to develop a framework for implementing a new GIS or for improving a currently ineffective one. The thesis describes a method of practically redefining GIS in the Local Government environment and then applying diffusion principles during the implementation of GIS. The first area of new investigation in the thesis is the approach to defining the GIS requirements of Local Government. In this thesis, GIS in Local Government is defined by starting with the business requirements and then letting them define the high-level technical and functional requirements. This obtains a different answer from the traditional approach of assuming that current generic high-level technical and functional definitions of GIS are correct, and that implementation is a selection and fine-tuning process. The new approach is based mainly on the “productional perspective” developed in recent theoretical GIS diffusion studies. The major difference is that GIS implementation in Local Government does not necessarily include the requirement for the design and construction of a specific GIS database. The GIS simply consists of graphical maps that spatially index and read existing non-spatial databases within the Local Government IS environment. (For complete abstract open document)
  • Item
    GIS applied to administrative boundary design
    EAGLESON, SERRYN ( 2003)
    The fragmentation of administrative boundaries is a serious problem in the analysis of social, environmental and economic data. This research focuses on the development of a coordinated approach to the design of administrative boundaries that endeavours to support accurate decision making. Around the world, administrative boundaries have been structured in an uncoordinated manner, limiting data exchange and integration between organisations. The solution proposed in this research adopts the hierarchical reorganisation of administrative boundaries to enhance data integration and data exchange within the spatial data infrastructure (SDI) framework. The SDI is an initiative intended to facilitate access to complete and consistent data sets. One of the most fundamental problems restricting the objectives of the SDI is the fragmentation of data between non-coterminous boundary systems. The majority of administrative boundaries have been constructed by individual agencies to meet individual needs. Examples of the proliferation of different boundary systems include postcodes, census-collector districts, health districts and police districts. Due to the lack of coordination between boundary systems, current technologies for analysing spatial data, such as geographic information systems (GIS), are not reaching their full potential. A review of the current literature reveals that, until now, little has been done to solve this problem. The prototype developed within this research provides a new mechanism for the design of administrative boundaries. The prototype incorporates two algorithms. These are based on HSR theory and administrative-agency constraints and are implemented within the GIS environment. Such an approach is an example of the potential that is available when we link spatial information theory with the SDI framework and disciplinary knowledge.
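
The non-coterminosity problem described in this abstract can be illustrated with a toy check of whether one boundary system nests within another, reduced here to unit-to-region area shares rather than real geometry; the region identifiers are hypothetical and the thesis's two HSR-based algorithms are not reproduced.

```python
# Toy illustration of (non-)coterminous boundary systems: a small-unit system
# nests hierarchically within a larger one only if every small unit falls
# wholly inside exactly one larger region. Real boundary data would require a
# geometric overlay; here area shares are pre-computed for clarity.
def is_coterminous(unit_fractions):
    """unit_fractions maps each small unit to {larger_region: area_fraction}.
    Nesting holds when each unit lies (almost) entirely in one region."""
    return all(
        max(shares.values()) >= 0.999 for shares in unit_fractions.values()
    )

# Hypothetical example: census collection districts vs. postcodes.
ccd_vs_postcode = {
    "CCD_01": {"3052": 1.0},
    "CCD_02": {"3052": 0.6, "3053": 0.4},   # split across two postcodes
    "CCD_03": {"3053": 1.0},
}
print(is_coterminous(ccd_vs_postcode))      # False: the systems are fragmented
```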
  • Item
    Decision-making under spatial uncertainty
    Hope, Susannah Jayne ( 2005)
    Errors are inherent to all spatial datasets and give rise to a level of uncertainty in the final product of a geographic information system (GIS). There is growing recognition that the uncertainty associated with spatial information should be represented to users in a comprehensive and unambiguous way. However, the effects on decision-making of such representations have not been thoroughly investigated. Studies from the psychological literature indicate decision-making biases when information is uncertain. This study explores the effects of representing spatial uncertainty, through an examination of how decision-making may be affected by the introduction of thematic uncertainty and an investigation of the effects of different representations of positional uncertainty on decision-making. Two case studies are presented. The first of these considers the effects on decision-making of including thematic uncertainty information within the context of an airport siting decision task. An extremely significant tendency to select a zone for which the thematic classification is known to be of high certainty was observed. The reluctance to select a zone for which the thematic classification is of low certainty was strong enough to sometimes lead to decision-making that can only be described as irrational. The second case study investigates how decision-making may be affected by different representations of positional uncertainty within the context of maritime navigation. The same uncertainty information was presented to participants using four different display methods. Significant differences in their decisions were observed. Strong preferences for certain display methods were also exhibited, with some representations being ranked significantly higher than others. The findings from these preliminary studies demonstrate that the inclusion of uncertainty information does influence decision-making but does not necessarily lead to better decisions. A bias against information of low certainty was observed, sometimes leading to the making of irrational decisions. In addition, the form of uncertainty representation itself may affect decision-making. Further research into the effects on decision-making of representing spatial uncertainty is needed before it can be assumed that the inclusion of such information will lead to more informed decisions being made.
  • Item
    Automatic spatial metadata updating and enrichment
    OLFAT, HAMED ( 2013)
    Spatial information is necessary to make sound decisions at the local, regional and global levels. As a result, the amount of spatial data being created and exchanged between organisations or people over the networked environment is increasing dramatically. As more data and information are produced, it becomes more vital to manage and locate these resources. The role that spatial metadata, as a summary document providing content, quality, type, creation, distribution and spatial information about a dataset, plays in the management and location of these resources has been widely acknowledged. However, current approaches cannot effectively manage metadata creation, updating and improvement for the ever-growing amount of data created and shared in Spatial Data Infrastructures (SDIs) and data sharing platforms. Among the available approaches, the manual approach has been considered by organisations to be a monotonous, time-consuming and labour-intensive task. Existing semi-automatic metadata approaches mainly focus on specific dataset formats and extract only a limited number of metadata values (e.g. bounding box). Moreover, metadata is commonly collected and created in a process separate from the spatial data lifecycle, which requires the metadata author or responsible party to put extra effort into gathering the data necessary for metadata creation and updating. In addition, dataset creation and editing are detached from metadata creation and editing procedures, necessitating diligent updating practices involving, at a minimum, two separate applications. Metadata and the related spatial data are often stored and maintained separately using a detached data model, which prevents automatic and simultaneous metadata updating when a dataset is modified. The spatial data end users are also disconnected from the metadata creation and improvement process. Accordingly, this research investigated a framework, and associated approaches and tools, to facilitate and automate the spatial metadata creation, updating and enrichment processes. The framework consists of three complementary approaches, namely ‘lifecycle-centric spatial metadata creation’, ‘automatic spatial metadata updating (synchronisation)’ and ‘automatic spatial metadata enrichment’, together with a newly integrated data model for storing and exchanging spatial datasets and metadata jointly. The lifecycle-centric spatial metadata creation approach aimed to create metadata in conjunction with the steps of the spatial data lifecycle. The automatic spatial metadata updating (synchronisation) approach was founded on a GML-based integrated data model to update the metadata affected by a dataset modification concurrently with any change to the dataset, regardless of dataset format. The automatic spatial metadata enrichment approach was designed around Web 2.0 features (tagging and folksonomy) to improve the content of the spatial metadata keyword element by monitoring end users' interaction with the data discovery and retrieval process. The proposed integrated data model and the automatic spatial metadata updating and enrichment approaches were successfully implemented and tested via prototype systems. The prototype systems were then assessed against a number of requirements identified for spatial metadata management and automation, and were shown to respond effectively to those requirements.
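
A minimal sketch of the synchronisation idea described above: whenever the dataset's features change, the affected metadata elements (here just a bounding box and a revision date) are derived from the data itself rather than edited by hand. The record structure and class below are simplified stand-ins, not ISO 19115 or the thesis's GML-based integrated data model.

```python
# Sketch: metadata elements recomputed from the data at edit time, so the
# metadata can never drift out of step with the dataset it describes.
import datetime

class Dataset:
    def __init__(self, name):
        self.name = name
        self.features = []          # list of (x, y) points for simplicity
        self.metadata = {"title": name, "extent": None, "revised": None}

    def _sync_metadata(self):
        xs = [x for x, _ in self.features]
        ys = [y for _, y in self.features]
        self.metadata["extent"] = (min(xs), min(ys), max(xs), max(ys))
        self.metadata["revised"] = datetime.date.today().isoformat()

    def add_feature(self, x, y):
        self.features.append((x, y))
        self._sync_metadata()       # updated with the edit, not as a later task

ds = Dataset("hydrants")
ds.add_feature(144.96, -37.80)
ds.add_feature(145.02, -37.75)
print(ds.metadata)
```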
  • Item
    Development of a knowledge base for low-volume roads using a geographic information system
    Sun, Ran ( 2011)
    Currently each State jurisdiction holds significant expenditure and road-section activity data in varying formats and classifications. A knowledge management technique can extract differing data sets across multiple criteria in order to build up comprehensive data sets. Potentially, this sound knowledge base can support more precise analysis and strategic decisions for low-volume roads. A geographic information system (GIS) has been used in this research as the platform for the knowledge base because of its powerful data integration ability. One GIS software package (TransCAD) was chosen to combine all the existing data and also to estimate traffic data, as the available data are insufficient for building up such a knowledge base. Using traffic assignment and matrix estimation techniques, traffic volume data can be estimated from limited data sources to produce a more comprehensive database. Nevertheless, not all the traffic assignment techniques have been tested, and the matrix estimation results cannot be validated until real data are acquired. This research provides an approach to developing such a knowledge base; with more input data, the results can be improved and a sound knowledge base can be built.
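
A toy all-or-nothing traffic assignment (each origin-destination demand loaded onto its single shortest path) gives a flavour of the assignment step mentioned in this abstract; the network, link costs and demands below are invented, and TransCAD's own assignment and matrix estimation procedures are not reproduced.

```python
# Toy all-or-nothing assignment: route each OD demand along its least-cost
# path and accumulate link volumes. Network, costs and demands are invented.
import heapq
from collections import defaultdict

def shortest_path(graph, origin, destination):
    """Dijkstra returning the node sequence of the least-cost path."""
    dist = {origin: 0.0}
    prev = {}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == destination:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [destination], destination
    while node != origin:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def assign(graph, od_matrix):
    """Accumulate demand onto the links of each OD pair's shortest path."""
    volumes = defaultdict(float)
    for (o, d), demand in od_matrix.items():
        path = shortest_path(graph, o, d)
        for a, b in zip(path, path[1:]):
            volumes[(a, b)] += demand
    return dict(volumes)

network = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
demands = {("A", "D"): 120, ("C", "D"): 40}
print(assign(network, demands))   # link volumes on the estimated routes
```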
  • Item
    Detecting change in an environment with wireless sensor networks
    SHI, MINGZHENG ( 2010)
    This thesis is motivated by a new observation tool, the wireless sensor network (WSN). A WSN, like any other observation method, must cope with the spatial, temporal, and thematic granularities of its observations. This thesis investigates how WSNs can efficiently detect different types of changes to spatial phenomena, or objects. Two types of change, gradual and abrupt spatial change, are distinguished based on the quantity of change in relation to the WSN's inherent granularities. A new spatiotemporal data model is proposed for the representation of dynamic spatial objects in WSNs. An algorithm is then designed for WSNs to detect both gradual and abrupt spatial changes at different spatial, temporal, and thematic granularities. The efficiency of the algorithm is demonstrated by qualitative and quantitative evaluation in WSN simulations. This thesis also proposes a new network structure, called a multi-granularity sensor network, for different granularities of observations.
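
A minimal sketch of the gradual-versus-abrupt distinction for a single sensor's readings, using an assumed threshold on the change observed between consecutive sampling epochs (the temporal granularity); the thresholds and readings are illustrative assumptions and the thesis's in-network, multi-granularity detection algorithm is not reproduced here.

```python
# Sketch: classify the change between consecutive observation epochs as
# 'abrupt' when it exceeds a threshold, 'gradual' when it is non-zero but
# below it, and 'none' otherwise. Parameters are illustrative assumptions.
def classify_changes(readings, abrupt_threshold=2.0, noise_floor=0.1):
    labels = []
    for previous, current in zip(readings, readings[1:]):
        delta = abs(current - previous)
        if delta >= abrupt_threshold:
            labels.append("abrupt")
        elif delta > noise_floor:
            labels.append("gradual")
        else:
            labels.append("none")
    return labels

# Hypothetical temperature samples taken at a fixed temporal granularity.
samples = [20.0, 20.3, 20.7, 21.0, 25.5, 25.6]
print(classify_changes(samples))
# ['gradual', 'gradual', 'gradual', 'abrupt', 'none']
```

Coarsening the sampling interval or the reporting resolution changes which of these labels a network can observe at all, which is the granularity trade-off the abstract highlights.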