Infrastructure Engineering - Theses

Now showing 1 - 10 of 13
  • Item
    Information sensing, transmission and management between connected vehicles
    Hu, Wenyan ( 2023-07)
    The concept of connected vehicles is gaining momentum in the research community thanks to the development of wireless communication technologies. Due to their ubiquity, mobility and connectivity, connected vehicles have the potential not only to revolutionise the transportation industry, but also to be exploited for information sensing and transmission, as they are equipped with a wide range of sensors in addition to wireless communication devices. Although they have the power supply and computing capacity to continuously host such sensors, the unpredictability and disorganisation of connected vehicles present challenges for collecting and sharing information. Therefore, the aim of this research is to explore the role, usage and methodology of connected vehicles for information sensing, transmission and management. With regard to information sensing through connected vehicles in urban areas, this study investigates sensor network deployment, vehicle selection, and route assignment with the aim of collecting data that are appropriate for the task at hand. First, a framework is proposed to optimise the configuration of stationary sensors and opportunistic vehicular sensors in hybrid sensing for a given sensing task to improve sensing coverage. Once the number of vehicles needed for the sensing task is known, the next step is to select the proper vehicles. To optimise sensing coverage, a vehicle selection framework is proposed that combines a genetic algorithm with a model that forecasts fine-grained sensing coverage from coarse-grained information about the candidate vehicles. Sensing coverage mainly relies on the trajectories of the selected vehicles. Therefore, to further reduce the sensing overlap between selected vehicles, an activity-based route assignment strategy is proposed that integrates the sensing task requirements with the planned activities of the selected vehicles to assign routes that do not interfere with those activities.
    The implementation of these proposed approaches yields a win-win outcome for both the task initiators and the participants. Once a connected vehicle has sensed anomalous information, e.g. an incident, it can share the information with other connected vehicles via peer-to-peer networks without centralised infrastructure. In the context of transmitting information about ephemeral traffic incidents, two event-driven models that transmit and manage transient information in vehicular networks are developed in this study. First, the time-geography framework is used to build a decentralised transmission model that takes into account the transient nature of traffic incidents. Second, a model for managing traffic incident information is put forward to improve traffic efficiency when incidents occur by managing information about their range, promptly updating outdated information in vehicular networks, and guiding affected vehicles. Experimental results show that these two models reduce not only invalid broadcasts but also incident-induced traffic congestion. Overall, this thesis explores the potential of connected vehicles for information sensing, transmission and management. The findings demonstrate that the proposed approaches can increase sensing coverage so that the sensed information is appropriate for its intended uses, and improve the efficiency of information sharing while requiring fewer broadcasts and shortening the duration of invalid broadcasts.
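    The vehicle selection step described above, pairing a sensing-coverage model with a genetic algorithm, can be sketched in simplified form. This is not the thesis's implementation: here each candidate vehicle's coverage is reduced to a known set of grid cells (in the thesis, fine-grained coverage is forecast from coarse-grained information), and all parameters and names are illustrative.

```python
# Minimal sketch: genetic algorithm selecting k vehicles to maximise the
# number of distinct grid cells covered. All names/parameters are hypothetical.
import random

def coverage(subset, cells_by_vehicle):
    # Fitness: size of the union of cells covered by the chosen vehicles.
    covered = set()
    for v in subset:
        covered |= cells_by_vehicle[v]
    return len(covered)

def select_vehicles(cells_by_vehicle, k, generations=100, pop_size=30, seed=0):
    rng = random.Random(seed)
    vehicles = list(cells_by_vehicle)

    def random_subset():
        return frozenset(rng.sample(vehicles, k))

    def mutate(s):
        # Swap one selected vehicle for an unselected one.
        s = set(s)
        s.remove(rng.choice(sorted(s)))
        s.add(rng.choice([v for v in vehicles if v not in s]))
        return frozenset(s)

    def crossover(a, b):
        # Child draws k vehicles from the parents' combined pool.
        pool = list(a | b)
        return frozenset(rng.sample(pool, k)) if len(pool) >= k else a

    pop = [random_subset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: coverage(s, cells_by_vehicle), reverse=True)
        elite = pop[:pop_size // 2]            # keep the fittest half
        children = []
        while len(elite) + len(children) < pop_size:
            p, q = rng.sample(elite, 2)
            child = crossover(p, q)
            if rng.random() < 0.3:
                child = mutate(child)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda s: coverage(s, cells_by_vehicle))
```

    With four candidate vehicles covering cells {1,2,3}, {1,2}, {4,5} and {3}, selecting two should converge on the pair covering all five cells; the same skeleton scales to a forecast-based coverage function.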
  • Item
    Ternary spatial relations for error detection in map databases
    Majic, Ivan ( 2020)
    The quality of data in spatial databases greatly affects the performance of location-based applications that rely on maps, such as emergency dispatch, land and property ownership registration, and delivery services. The negative effects of such dirty map data may range from minor inconveniences to life-threatening events. Data cleaning usually consists of two steps: error detection and error rectification. It is a demanding and lengthy process that requires manual intervention by data experts, in particular for complex situations involving the consistency of relationships between multiple objects. This thesis presents computational methods developed to automate the detection of errors in map databases and ease the demand for human resources in error detection. These methods are intrinsic, i.e., they depend only on the data being analysed, without the need for a reference dataset. Two models for ternary spatial relations were developed to enable analyses not possible with existing binary spatial relations. First, the Refined Topological relations model for Line objects (RTL) examines whether the core line object is connected to its surrounding objects on both or only one of its ends. This distinction is particularly important in networks, where connectedness determines the function of the object. Second, the Ray Intersection Model (RIM) casts rays between two peripheral objects and uses the intersection sets between these rays and the core object to model its relation to the peripheral objects. This provides a basis for reasoning about whether the core object lies between the peripheral objects. Both models have been computationally implemented and demonstrated on error detection tasks in OpenStreetMap. Case studies on data for the State of Victoria, Australia demonstrate that the methods developed in this research effectively detect errors that could not previously be identified automatically.
This research contributes to automated spatial data cleaning and quality assurance, including reducing experts' workload by effectively identifying potential errors.
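    The RIM's core idea, casting rays between peripheral objects and inspecting how they intersect the core object, can be illustrated with a minimal 2D sketch. This is a simplification of the model, not the thesis's implementation: peripheral objects are reduced to single points, only one ray is cast, and only proper crossings are counted.

```python
# Minimal sketch of a RIM-style "betweenness" test. Assumptions: peripheral
# objects are points, the core object is a polyline, coordinates are 2D tuples.

def orient(p, q, r):
    # Sign of the cross product (q - p) x (r - p): >0 left turn, <0 right, 0 collinear.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_intersect(a, b, c, d):
    # Proper segment-segment intersection; collinear touching is ignored here.
    o1, o2 = orient(a, b, c), orient(a, b, d)
    o3, o4 = orient(c, d, a), orient(c, d, b)
    return (o1 * o2 < 0) and (o3 * o4 < 0)

def core_between(core_polyline, p_a, p_b):
    """Does the ray cast from peripheral point p_a to peripheral point p_b
    cross the core polyline? If so, the core lies between the peripherals."""
    return any(segments_intersect(p_a, p_b, core_polyline[i], core_polyline[i + 1])
               for i in range(len(core_polyline) - 1))
```

    For example, a vertical core segment through the origin is crossed by the ray from (-1, 0) to (1, 0) but not by a ray that stays entirely on one side, which is the kind of configuration an error-detection rule could flag.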
  • Item
    A framework for micro level assessment and 3D visualisation of flood damage to a building
    Amirebrahimi, Sam ( 2016)
    Flood Damage Assessment (FDA) is a key component of the flood risk management process. By highlighting the potential consequences of floods, FDA allows for evidence-based risk management through the selection of optimal risk reduction measures in the community. FDA is generally performed at three main scales, namely Macro, Meso and Micro. For assessing potential flood damage at these levels, various categories of vulnerable elements (e.g. roads, people, buildings) are accounted for. Among these elements, buildings are the most notable and are considered in nearly all current FDA methods due to their significance to the economy. In addition, with the risk of floods increasing due to climate change, attention to improving the flood resilience of buildings is growing. This creates the need for a more profound understanding of fluid-structure interactions and for assessing the potential damage and risks to a building from floods in the early design and planning stages. Among FDA methods, in contrast to Macro and Meso models, which take aggregated land use as input, only Micro level assessments can provide separate analyses for individual buildings. However, current micro-level FDA models cannot account for the distinct characteristics of each building and its unique behaviour under flooding, and are therefore associated with high uncertainties. Additionally, current models account only for either damage from flood loads or damage resulting from floodwater coming into contact with water-sensitive components. This leads to incomplete outputs and a further increase in the uncertainty of the results. Moreover, existing FDA models mostly focus on the quantitative assessment of damage and do not communicate the mode or type of damage to important decision makers (e.g. designers and engineers), which inhibits the optimal selection of measures for reducing risk to buildings.
    While the needs of larger-scale applications are well served by existing FDA methods, the highlighted limitations hinder their use for effectively assessing damage and risk where individual buildings are the focus of the analysis. To address these limitations, this multidisciplinary research adopted the Design Science Research Methodology to develop an FDA framework that allows for a detailed micro-level assessment and 3D visualisation of flood damage to a building according to its unique characteristics and behaviour under flooding. The processes proposed in the framework were designed in detail according to well-established theories in a number of related domains. Moreover, by developing a new BIM-GIS integration method, rich inputs about a building and flood parameters could be provided to the framework, effectively overcoming the data input limitations of current FDA models. The framework was realised through the development of a prototype system on the basis of the proposed guidelines. A dual evaluation of the framework, using internal validity checking as well as a case study, underlined the feasibility of implementation and the effective application of the framework to solving real-world problems. The benefits of the proposed framework for the assessment and communication of flood damage at the building level were also highlighted to a variety of users. The framework can be employed as a complementary approach to current FDA models for improving the resilience of the community to floods and their adverse impacts.
  • Item
    Object-oriented concepts for land and geographic information systems
    Hesse, Walter ( 1991)
    This research studies the impact of Object Oriented Programming Systems (OOPS) and their underlying concepts on Land and Geographic Information Systems (LIS/GIS) in Australasia. It considers GIS software development and conceptual data modelling aspects, and their strong relationship with proposed spatial data transfer standards. Conventional programming techniques appear to have reached their limit in coping with complex and diversified applications, and 'something better' is envisaged for future software developments and data models in LIS/GIS. The relatively new object-oriented design method is reviewed, and a much improved object-oriented software module for daily maintenance operations in a Digital Cadastral Data Base (DCDB) is presented as an example. This development allows a significant improvement in the spatial accuracy of DCDB systems, and its graphical user interface (GUI) provides a much better data quality visualisation tool. The choice of the right conceptual data model for GIS has a strong impact on proposed spatial data transfer standards and on the way in which future Australian GIS communities will 'view' or model their real world. It has therefore been important to critically review these proposals in the Australian context.
  • Item
    Analysis of the positional accuracy of linear features
    Lawford, Geoffrey John ( 2006-09)
    Although the positional accuracy of spatial data has long been of fundamental importance in GIS, it is still largely unknown for linear features. This is compromising the ability of GIS practitioners to undertake accurate geographic analysis and hindering GIS in fulfilling its potential as a credible and reliable tool. As early as 1987 the US National Center for Geographic Information and Analysis identified accuracy as one of the key elements of successful GIS implementation. Yet two decades later, while there is a large body of geodetic literature addressing the positional accuracy of point features, there is little research addressing the positional accuracy of linear features, and still no accepted accuracy model for linear features. It has not helped that national map and data accuracy standards continue to define accuracy only in terms of “well-defined points”. This research aims to address these shortcomings by exploring the effect on linear feature positional accuracy of feature type, complexity, segment length, vertex proximity and e-scale, that is, the scale of the paper map from which the data were originally captured or to which they are customised for output. The research begins with a review of the development of map and data accuracy standards, and a review of existing research into the positional accuracy of linear features. A geographically sensible error model for linear features using point matching is then developed and a case study undertaken. Features of five types, at five e-scales, are selected from commonly used, well-regarded Australian topographic datasets, and tailored for use in the case study. Wavelet techniques are used to classify the case study features into sections based on their complexity. Then, using the error model, half a million offsets and summary statistics are generated that shed light on the relationships between positional accuracy and e-scale, feature type, complexity, segment length, and vertex proximity. 
Finally, auto-regressive time series modelling and moving block bootstrap analysis are used to correct the summary statistics for correlation. The main findings are as follows. First, metadata for the tested datasets significantly underestimates the positional accuracy of the data. Second, positional accuracy varies with e-scale but not, as might be expected, in a linear fashion. Third, positional accuracy varies with feature type, but not as the rules of generalisation suggest. Fourth, complex features lose accuracy faster than less complex features as e-scale is reduced. Fifth, the more complex a real-world feature, the worse its positional accuracy when mapped. Finally, accuracy mid-segment is greater than accuracy end-segment.
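    The offset-based error model can be illustrated with a small sketch: each vertex of a tested line is matched to its nearest point on a reference line, and the resulting offsets are summarised. This is an illustrative simplification, using nearest-distance matching rather than the thesis's point-matching scheme, and omitting the correlation correction described above.

```python
# Hypothetical sketch of a linear-feature positional accuracy check:
# perpendicular offsets from test-line vertices to a reference line,
# plus simple summary statistics.
import math

def point_segment_dist(p, a, b):
    # Distance from point p to the closest point on segment a-b.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def offsets(test_line, reference_line):
    # Offset for each test vertex = distance to the nearest reference segment.
    return [min(point_segment_dist(p, reference_line[i], reference_line[i + 1])
                for i in range(len(reference_line) - 1))
            for p in test_line]

def summary(offs):
    mean = sum(offs) / len(offs)
    rmse = math.sqrt(sum(o * o for o in offs) / len(offs))
    return {"mean": mean, "rmse": rmse, "max": max(offs)}
```

    A test line one unit above its reference yields offsets of 1.0 everywhere; on real data the distribution of such offsets, broken down by e-scale, feature type and complexity, is what the summary statistics in the study describe.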
  • Item
    Spatial cadastral information systems: the maintenance of digital cadastral maps
    Effenberg, Wolfgang ( 2001-05)
    The management of a cadastral system's digital spatial data has prompted considerable research, generally with a focus limited to the organisation maintaining the cadastral map. The approach of viewing the maintenance of cadastral maps as a system encompassing the entire cadastral industry has not been comprehensively studied and documented. This approach is seen as essential to transform cadastral mapping from its current organisation-specific isolation into a form that is truly interoperable with the processing of spatial cadastral information in a digital environment. This dissertation documents a research program that is essentially a definition, analysis and design of spatial cadastral systems, with particular emphasis on the Australian State of Victoria. The research substantiates the existence of a spatial cadastral system within the overall cadastral system. A review of a number of international, western spatial cadastral systems is presented, and establishes the boundary of the spatial cadastral system. An investigation of system methodologies used in cadastral research and information systems concludes that the Zachman Framework is applicable for structuring and documenting the more comprehensive analysis of spatial cadastral systems. This analysis is undertaken for the spatial cadastral systems of the Australian State of Victoria. The impacting developments, such as enabling technology, coupled with user requirements and issues relating to existing spatial cadastral systems, provide the basis for the presentation of a range of solution alternatives to manage the spatial data associated with the maintenance of the multipurpose cadastral map in a digital and Internet-enabled environment.
  • Item
    The effective implementation of GIS in local government using diffusion theory
    Dooley, P. ( 2001-06)
    Geographical Information Systems (GIS) are proving difficult both to define and to implement effectively in Victorian Local Government. Current innovation diffusion theory, together with emerging GIS and IS implementation theory, is used to develop a framework for implementing a new GIS or for improving a currently ineffective one. The thesis describes a method of practically redefining GIS in the Local Government environment and then applying diffusion principles during the implementation of GIS. The first area of new investigation in the thesis is the approach to defining the GIS requirements of Local Government. Here, GIS in Local Government is defined by starting with the business requirements and letting them define the high-level technical and functional requirements. This yields a different answer from the traditional approach of assuming that current generic high-level technical and functional definitions of GIS are correct and that implementation is a selection and fine-tuning process. The new approach is based mainly on the “productional perspective” developed in recent theoretical GIS diffusion studies. The major difference is that GIS implementation in Local Government does not necessarily include the requirement for the design and construction of a specific GIS database; the GIS may simply consist of graphical maps that spatially index and read existing non-spatial databases within the Local Government IS environment. (For complete abstract open document)
  • Item
    GIS applied to administrative boundary design
    Eagleson, Serryn ( 2003)
    The fragmentation of administrative boundaries is a serious problem in the analysis of social, environmental and economic data. This research focuses on the development of a coordinated approach to the design of administrative boundaries that endeavours to support accurate decision making. Around the world, administrative boundaries have been structured in an uncoordinated manner, limiting data exchange and integration between organisations. The solution proposed in this research adopts the hierarchical reorganisation of administrative boundaries to enhance data integration and data exchange within the spatial data infrastructure (SDI) framework. The SDI is an initiative intended to facilitate access to complete and consistent data sets. One of the most fundamental problems restricting the objectives of the SDI is the fragmentation of data between non-coterminous boundary systems. The majority of administrative boundaries have been constructed by individual agencies to meet individual needs. Examples of the proliferation of different boundary systems include postcodes, census-collector districts, health districts and police districts. Due to the lack of coordination between boundary systems, current technologies for analysing spatial data, such as geographic information systems (GIS), are not reaching their full potential. A review of the current literature reveals that, until now, little has been done to solve this problem. The prototype developed within this research provides a new mechanism for the design of administrative boundaries. The prototype incorporates two algorithms. These are based on HSR theory and administrative-agency constraints and are implemented within the GIS environment. Such an approach is an example of the potential that is available when we link spatial information theory with the SDI framework and disciplinary knowledge.
  • Item
    Decision-making under spatial uncertainty
    Hope, Susannah Jayne ( 2005)
    Errors are inherent to all spatial datasets and give rise to a level of uncertainty in the final product of a geographic information system (GIS). There is growing recognition that the uncertainty associated with spatial information should be represented to users in a comprehensive and unambiguous way. However, the effects on decision-making of such representations have not been thoroughly investigated. Studies from the psychological literature indicate decision-making biases when information is uncertain. This study explores the effects of representing spatial uncertainty, through an examination of how decision-making may be affected by the introduction of thematic uncertainty and an investigation of the effects of different representations of positional uncertainty on decision-making. Two case studies are presented. The first of these considers the effects on decision-making of including thematic uncertainty information within the context of an airport siting decision task. An extremely significant tendency to select a zone for which the thematic classification is known to be of high certainty was observed. The reluctance to select a zone for which the thematic classification is of low certainty was strong enough to sometimes lead to decision-making that can only be described as irrational. The second case study investigates how decision-making may be affected by different representations of positional uncertainty within the context of maritime navigation. The same uncertainty information was presented to participants using four different display methods. Significant differences in their decisions were observed. Strong preferences for certain display methods were also exhibited, with some representations being ranked significantly higher than others. The findings from these preliminary studies demonstrate that the inclusion of uncertainty information does influence decision-making but does not necessarily lead to better decisions. 
A bias against information of low certainty was observed, sometimes leading to the making of irrational decisions. In addition, the form of uncertainty representation itself may affect decision-making. Further research into the effects on decision-making of representing spatial uncertainty is needed before it can be assumed that the inclusion of such information will lead to more informed decisions being made.
  • Item
    Automatic spatial metadata updating and enrichment
    Olfat, Hamed ( 2013)
    Spatial information is necessary for sound decision making at the local, regional and global levels. As a result, the number of spatial datasets being created and exchanged between organisations and people over networked environments is increasing dramatically. As more data and information are produced, it becomes more vital to manage and locate such resources. The role that spatial metadata, as a summary document providing content, quality, type, creation, distribution and spatial information about a dataset, plays in the management and location of these resources has been widely acknowledged. However, current approaches cannot effectively manage metadata creation, updating and improvement for the ever-growing amount of data created and shared in Spatial Data Infrastructures (SDIs) and data sharing platforms. Among the available approaches, the manual approach is considered by organisations to be a monotonous, time-consuming and labour-intensive task. Existing semi-automatic metadata approaches mainly focus on specific dataset formats and extract only a limited number of metadata values (e.g. the bounding box). Moreover, metadata is commonly collected and created in a process separate from the spatial data lifecycle, which requires the metadata author or responsible party to put extra effort into gathering the data necessary for metadata creation and updating. In addition, dataset creation and editing are detached from metadata creation and editing procedures, necessitating diligent updating practices involving at least two separate applications. Metadata and the related spatial data are often stored and maintained separately using a detached data model, which prevents automatic and simultaneous metadata updating when a dataset is modified. Spatial data end users are also disconnected from the metadata creation and improvement process.
    Accordingly, this research investigated a framework and associated approaches and tools to facilitate and automate the spatial metadata creation, updating and enrichment processes. The framework consists of three complementary approaches, namely ‘lifecycle-centric spatial metadata creation’, ‘automatic spatial metadata updating (synchronisation)’ and ‘automatic spatial metadata enrichment’, together with a newly integrated data model for storing and exchanging a spatial dataset and its metadata jointly. The lifecycle-centric spatial metadata creation approach aims to create metadata in conjunction with the steps of the spatial data lifecycle. The automatic spatial metadata updating (synchronisation) approach is founded on a GML-based integrated data model that updates metadata affected by a dataset modification concurrently with any change to the dataset, regardless of dataset format. The automatic spatial metadata enrichment approach is rooted in Web 2.0 features (tagging and folksonomy) to improve the content of the spatial metadata keyword element by monitoring end users’ interaction with the data discovery and retrieval process. The proposed integrated data model and the automatic spatial metadata updating and enrichment approaches were successfully implemented and tested via prototype systems, which were then assessed against a number of requirements identified for spatial metadata management and automation and effectively satisfied those requirements.
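    The synchronisation idea, recomputing metadata elements as a side effect of every dataset edit rather than in a detached workflow, can be sketched as follows. The class, element names and tagging hook are hypothetical illustrations only, not the thesis's GML-based integrated data model.

```python
# Hypothetical sketch: dataset and metadata kept in one object so that
# metadata elements (extent, date stamp) are recomputed on every edit,
# and user tagging enriches the keyword element (folksonomy-style).
import datetime

class SyncedDataset:
    def __init__(self):
        self.features = []          # list of (x, y) point features
        self.metadata = {"extent": None, "dateStamp": None, "keywords": set()}

    def _sync(self):
        # Recompute the bounding-box extent and refresh the date stamp.
        if self.features:
            xs = [x for x, _ in self.features]
            ys = [y for _, y in self.features]
            self.metadata["extent"] = (min(xs), min(ys), max(xs), max(ys))
        else:
            self.metadata["extent"] = None
        self.metadata["dateStamp"] = datetime.date.today().isoformat()

    def add_feature(self, x, y):
        self.features.append((x, y))
        self._sync()                # metadata updated with every edit

    def tag(self, keyword):
        # Enrichment hook: end-user search/tag activity feeds the keyword element.
        self.metadata["keywords"].add(keyword.lower())
```

    The point of the design is that editing and metadata maintenance share one data model, so the extent can never drift out of date; in the thesis this coupling is achieved at the GML level rather than in application code.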