Computing and Information Systems - Theses

Search Results

Now showing 1 - 2 of 2
  • Item
    Towards interpreting informal place descriptions
    Tytyk, Igor (The University of Melbourne, 2012)
    Informal place descriptions are human-generated descriptions of locations, expressed by means of natural language in an arbitrary fashion. The aim we pursued in this thesis is finding methods for better automatic interpretation of situated informal place descriptions. This work presents a framework within which we attempt to automatically classify informal place descriptions for the accuracy of the location information they contain. Working with an available corpus of informal place descriptions, we identified the placenames contained therein and manually annotated them for properties such as geospatial granularity and identifiability. First, we make use of these annotations and a machine learning method to conduct the classification task, reporting accuracy scores reaching 84%. Next, we classify the descriptions again, but instead of using the manual annotations we identify the properties of the placenames automatically.
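The classification step the abstract describes could, in outline, look like the following minimal sketch: a count-based Naive Bayes classifier over categorical placename properties. The feature names, values, and labels here are invented for illustration; the thesis's actual features, corpus, and learning method are not specified in the abstract.

```python
from collections import defaultdict

# Hypothetical toy data: each description is reduced to annotated
# placename properties (names and labels are illustrative only).
train = [
    ({"granularity": "street", "identifiable": True},  "accurate"),
    ({"granularity": "street", "identifiable": True},  "accurate"),
    ({"granularity": "city",   "identifiable": False}, "inaccurate"),
    ({"granularity": "city",   "identifiable": True},  "accurate"),
    ({"granularity": "region", "identifiable": False}, "inaccurate"),
]

def train_nb(examples):
    """Collect label and per-label feature-value counts."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(int)  # (label, feature, value) -> count
    for feats, label in examples:
        label_counts[label] += 1
        for f, v in feats.items():
            feat_counts[(label, f, v)] += 1
    return label_counts, feat_counts

def classify(model, feats):
    """Pick the label maximizing P(label) * prod P(value | label)."""
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for label, lc in label_counts.items():
        p = lc / total
        for f, v in feats.items():
            # Add-one smoothing so unseen values do not zero out a label.
            p *= (feat_counts[(label, f, v)] + 1) / (lc + 2)
        if p > best_p:
            best, best_p = label, p
    return best

model = train_nb(train)
print(classify(model, {"granularity": "street", "identifiable": True}))
# -> accurate
```

In the same spirit, the thesis's second experiment would replace the manual annotations in `train` with automatically derived placename properties before classifying.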
  • Item
    Extracting characteristics of human-produced video descriptions
    Korvas, Matěj (2012)
    This thesis contributes to the SMILE project, which aims at video understanding. We focus on the final stage of the project, where information extracted from a video should be transformed into a natural language description. Working with a corpus of human-made video descriptions, we examine it to find patterns in the descriptions. We develop a machine-learning procedure for finding statistical dependencies between linguistic features of the descriptions. Evaluating its results when run on a small sample of data, we conclude that it can be successfully extended to larger datasets. The method is generally applicable for finding dependencies in data, and extends association rule mining methods with the option to specify distributions of features. We show future directions which, if followed, will lead to extracting a specification of common sentence patterns in video descriptions. This would allow naturally sounding descriptions to be generated by the video understanding software.
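The abstract describes an extension of association rule mining; as a point of reference, the baseline technique being extended can be sketched as below. Single-antecedent rules A -> B are mined from feature-annotated sentences under support and confidence thresholds. The feature names are invented for the sketch and are not taken from the SMILE corpus.

```python
from itertools import combinations
from collections import Counter

# Illustrative linguistic-feature annotations for four video-description
# sentences (feature names are hypothetical).
sentences = [
    {"subject:person", "verb:motion", "tense:present"},
    {"subject:person", "verb:motion", "tense:present"},
    {"subject:object", "verb:state",  "tense:present"},
    {"subject:person", "verb:motion", "tense:past"},
]

def association_rules(data, min_support=0.5, min_confidence=0.8):
    """Mine rules A -> B where support(A,B) and confidence pass the thresholds."""
    n = len(data)
    item_counts = Counter()
    pair_counts = Counter()
    for feats in data:
        item_counts.update(feats)
        for a, b in combinations(sorted(feats), 2):
            pair_counts[(a, b)] += 1
    rules = []
    for (a, b), c in pair_counts.items():
        for ante, cons in ((a, b), (b, a)):
            support = c / n
            confidence = c / item_counts[ante]
            if support >= min_support and confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

for ante, cons, s, conf in association_rules(sentences):
    print(f"{ante} -> {cons} (support={s:.2f}, confidence={conf:.2f})")
```

On this toy data the miner finds the two-way dependency between "subject:person" and "verb:motion". The thesis's contribution, per the abstract, goes beyond this baseline by letting the miner specify distributions of features rather than plain itemsets.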