Infrastructure Engineering - Theses

  • Item
    Ternary spatial relations for error detection in map databases
    Majic, Ivan (2020)
    The quality of data in spatial databases greatly affects the performance of location-based applications that rely on maps, such as emergency dispatch, land and property ownership registration, and delivery services. The negative effects of dirty map data may range from minor inconveniences to life-threatening events. Data cleaning usually consists of two steps: error detection and error rectification. It is a demanding and lengthy process that requires manual intervention by data experts, in particular for complex situations involving the consistency of relationships between multiple objects. This thesis presents computational methods developed to automate the detection of errors in map databases and ease the demand for human resources in error detection. These methods are intrinsic, i.e., they depend only on the data being analysed, without the need for a reference dataset. Two models for ternary spatial relations were developed to enable analyses not possible with existing binary spatial relations. First, the Refined Topological relations model for Line objects (RTL) examines whether the core line object is connected to its surrounding objects on both or only one of its ends. This distinction is particularly important in networks, where connectedness determines the function of an object. Second, the Ray Intersection Model (RIM) casts rays between two peripheral objects and uses the intersection sets between these rays and the core object to model its relation to the peripheral objects. This provides a basis for reasoning about whether the core object lies between the peripheral objects. Both models have been computationally implemented and demonstrated on error detection tasks in OpenStreetMap. Case studies on data for the State of Victoria, Australia, demonstrate that the methods developed in this research effectively detect errors that previously could not be identified automatically. This research contributes to automated spatial data cleaning and quality assurance, including reducing experts' workload by effectively identifying potential errors.
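
    A sketch of how these two models could be realised may help. The code below assumes the shapely library; the endpoint tolerance, the fixed number of rays, and the all-rays-intersect criterion are simplifying assumptions made here for illustration, not details taken from the thesis.

    ```python
    # Illustrative sketch of the RTL and RIM ideas, assuming shapely.
    # The tolerance, sampling density, and "all rays intersect" rule are
    # simplifying assumptions, not details taken from the thesis.
    from shapely.geometry import LineString, Point, Polygon

    def end_connectivity(core, others, tol=1e-9):
        """RTL-style check: count how many ends of the core line touch any
        surrounding object (2 = both ends, 1 = dangling end, 0 = isolated)."""
        ends = (Point(core.coords[0]), Point(core.coords[-1]))
        return sum(any(p.distance(o) <= tol for o in others) for p in ends)

    def rays_between(peripheral_a, peripheral_b, n=10):
        """Cast n straight rays between points sampled along the boundaries
        of the two peripheral objects."""
        ts = [i / (n - 1) for i in range(n)]
        return [LineString([peripheral_a.boundary.interpolate(t, normalized=True),
                            peripheral_b.boundary.interpolate(t, normalized=True)])
                for t in ts]

    def lies_between(core, peripheral_a, peripheral_b, n=10):
        """RIM-style betweenness: the core is taken to lie between the
        peripheral objects if every ray from one to the other crosses it."""
        return all(r.intersects(core)
                   for r in rays_between(peripheral_a, peripheral_b, n))

    # Example: a road (core object) running between two building footprints.
    a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
    b = Polygon([(4, 0), (5, 0), (5, 1), (4, 1)])
    road = LineString([(2.5, -1), (2.5, 2)])
    print(lies_between(road, a, b))        # True
    print(end_connectivity(road, [a, b]))  # 0: neither end touches a building
    ```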
  • Item
    Analysis of the positional accuracy of linear features
    Lawford, Geoffrey John (2006-09)
    Although the positional accuracy of spatial data has long been of fundamental importance in GIS, it is still largely unknown for linear features. This compromises the ability of GIS practitioners to undertake accurate geographic analysis and hinders GIS from fulfilling its potential as a credible and reliable tool. As early as 1987 the US National Center for Geographic Information and Analysis identified accuracy as one of the key elements of successful GIS implementation. Yet two decades later, while there is a large body of geodetic literature addressing the positional accuracy of point features, there is little research addressing the positional accuracy of linear features, and still no accepted accuracy model for linear features. It has not helped that national map and data accuracy standards continue to define accuracy only in terms of “well-defined points”. This research aims to address these shortcomings by exploring the effect on linear feature positional accuracy of feature type, complexity, segment length, vertex proximity, and e-scale, that is, the scale of the paper map from which the data were originally captured or to which they are customised for output. The research begins with a review of the development of map and data accuracy standards and of existing research into the positional accuracy of linear features. A geographically sensible error model for linear features using point matching is then developed and a case study undertaken. Features of five types, at five e-scales, are selected from commonly used, well-regarded Australian topographic datasets and tailored for use in the case study. Wavelet techniques are used to classify the case study features into sections based on their complexity. Then, using the error model, half a million offsets and summary statistics are generated that shed light on the relationships between positional accuracy and e-scale, feature type, complexity, segment length, and vertex proximity. Finally, auto-regressive time series modelling and moving block bootstrap analysis are used to correct the summary statistics for correlation. The main findings are as follows. First, metadata for the tested datasets significantly underestimates the positional accuracy of the data. Second, positional accuracy varies with e-scale but not, as might be expected, in a linear fashion. Third, positional accuracy varies with feature type, but not as the rules of generalisation suggest. Fourth, complex features lose accuracy faster than less complex features as e-scale is reduced. Fifth, the more complex a real-world feature, the worse its positional accuracy when mapped. Finally, accuracy mid-segment is greater than accuracy end-segment.
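
    The offset generation and the correlation correction can be illustrated with a short sketch. The code below, again assuming shapely, computes per-vertex offsets from a tested line to a reference line and applies a moving block bootstrap to the mean offset; the nearest-point matching and the block length are simplifying assumptions, not the thesis's exact point-matching error model.

    ```python
    # Illustrative sketch: per-vertex offsets between a tested and a
    # reference line, and a moving block bootstrap for the mean offset
    # under serial correlation. Assumes shapely; simplified point matching.
    import random
    from shapely.geometry import LineString, Point

    def vertex_offsets(tested, reference):
        """Distance from each vertex of the tested feature to the reference."""
        return [Point(c).distance(reference) for c in tested.coords]

    def moving_block_bootstrap_mean(x, block_len=5, n_boot=1000, seed=0):
        """Resample overlapping blocks of consecutive observations so the
        bootstrap respects short-range correlation in the offset series."""
        rng = random.Random(seed)
        n = len(x)
        blocks = [x[i:i + block_len] for i in range(n - block_len + 1)]
        means = []
        for _ in range(n_boot):
            sample = []
            while len(sample) < n:
                sample.extend(rng.choice(blocks))
            means.append(sum(sample[:n]) / n)
        # Crude summary: bootstrap mean and its observed range.
        return sum(means) / n_boot, (min(means), max(means))

    # Example: a digitised line against a higher-accuracy reference.
    reference = LineString([(0, 0), (10, 0)])
    tested = LineString([(i, 0.1 * (i % 3)) for i in range(11)])
    print(moving_block_bootstrap_mean(vertex_offsets(tested, reference)))
    ```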