Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 27
  • Item
    Practical declarative debugging of Mercury programs
    MacLarty, Ian Douglas. (University of Melbourne, 2006)
  • Item
    A multistage computer model of picture scanning, image understanding, and environment analysis, guided by research into human and primate visual systems
    Rogers, T. J. (University of Melbourne, Faculty of Engineering, 1983)
    This paper describes the design and some testing of a computational model of picture scanning and image understanding (TRIPS), which outputs a description of the scene in a subset of English. This model can be extended to control the analysis of a three-dimensional environment and changes of the viewing system's position within that environment. The model design is guided by a summary of neurophysiological, psychological, and psychophysical observations and theories concerning visual perception in humans and other primates, with an emphasis on eye movements. These results indicate that lower-level visual information is processed in parallel in a spatial representation, while higher-level processing is mostly sequential, using a symbolic, post-iconic representation. The emphasis in this paper is on simulating the cognitive aspects of eye movement control and the higher-level post-iconic representation of images. The design incorporates several subsystems. The highest-level control module is described in detail, since computer models of eye movement which use cognitively guided saccade selection are not common. For other modules, the interfaces with the whole system and the internal computations required are outlined, as existing image processing techniques can be applied to perform these computations. Control is based on a production system, which uses a "hypothesising" system - a simplified probabilistic associative production system - to determine which production to apply. A framework for an image analysis language (TRIAL), based on "THINGS" and "RELATIONS", is presented, with algorithms described in detail for the matching procedure and the transformations of size, orientation, position, and so on. TRIAL expressions in the productions are used to generate "cognitive expectations" concerning future eye movements and their effects, which can influence the control of the system.
    Models of low-level feature extraction with parallel processing of iconic representations have been common in the computer vision literature, as have techniques for image manipulation and syntactic and statistical analysis. Parallel and serial systems have also been extensively investigated. This model proposes an integration of these approaches, using each technique in the domain to which it is suited. The model proposed for the inferotemporal cortex could also be suitable as a model of the posterior parietal cortex. A restricted version of the picture scanning model (TRIPS) has been implemented, which demonstrates the consistency of the model and also exhibits some behavioural characteristics qualitatively similar to primate visual systems. The TRIAL language is shown to be a useful representation for the analysis and description of scenes. Keywords: simulation, eye movements, computer vision systems, inferotemporal, parietal, image representation, TRIPS, TRIAL.
  • Item
    Rapid de novo methods for genome analysis
    HALL, ROSS STEPHEN ( 2013)
    Next generation sequencing methodologies have resulted in an exponential increase in the amount of genomic sequence data available to researchers. Valuable tools in the initial analysis of such data for novel features are de novo techniques - methods which employ a minimum of comparative sequence information from known genomes. In this thesis I describe two heuristic algorithms for the rapid de novo analysis of genomic sequence data. The first algorithm employs a multiple Fast Fourier Transform, mapped to two dimensional spaces. The resulting bitmap clearly illustrates periodic features of a genome, including coding density. The compact representation allows megabase scales of genomic data to be rendered in a single bitmap. The second algorithm, RTASSS (RNA Template Assisted Secondary Structure Search), predicts potential members of RNA gene families that are related by similar secondary structure, but not necessarily by conserved sequence. RTASSS has the ability to find candidate structures similar to a given template structure without the use of sequence homology. Both algorithms have linear complexity.
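The periodicity signal that Fourier methods exploit for coding density can be sketched in a few lines: protein-coding DNA tends to show a strong period-3 component. This is an illustrative sketch of that general idea, not the thesis's algorithm; the function name and indicator-sequence construction are assumptions.

```python
import cmath

def period3_power(seq, alphabet="ACGT"):
    """Spectral power at period 3, summed over per-base indicator sequences.

    Each base b yields an indicator sequence u_b[n] = 1 if seq[n] == b else 0;
    we evaluate the DFT of each at frequency k = N/3 and sum |X_k|^2.
    Coding regions typically show a pronounced peak here.
    """
    n = len(seq)
    k = n // 3  # frequency bin corresponding to period 3
    total = 0.0
    for base in alphabet:
        x = sum((1.0 if c == base else 0.0) *
                cmath.exp(-2j * cmath.pi * k * i / n)
                for i, c in enumerate(seq))
        total += abs(x) ** 2
    return total
```

A sequence with exact period 3 (e.g. a repeated codon) scores far higher at this bin than one with a different period, which is the contrast a coding-density bitmap makes visible.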
  • Item
    Automatic instant messaging dialogue using statistical models and dialogue acts
    Ivanovic, Edward ( 2008)
    Instant messaging dialogue is used for communication by hundreds of millions of people worldwide, but has received relatively little attention in computational linguistics. We describe methods aimed at providing a shallow interpretation of messages sent via instant messaging. This is done by assigning labels known as dialogue acts to utterances within messages. Since messages may contain more than one utterance, we explore automatic message segmentation using combinations of parse trees and various statistical models to achieve high accuracy for both classification and segmentation tasks. Finally, we gauge the immediate usefulness of dialogue acts in conversation management by presenting a dialogue simulation program that uses dialogue acts to predict utterances during a conversation. The predictions are evaluated qualitatively, with very encouraging results.
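Statistical dialogue act classification of the kind described above can be sketched with a multinomial naive Bayes model over bag-of-words features. This is a minimal illustration only: the thesis combines several statistical models and parse-tree features, and the labels and training utterances below are invented.

```python
import math
from collections import Counter, defaultdict

class DialogueActClassifier:
    """Naive Bayes classifier mapping an utterance to a dialogue act label."""

    def __init__(self):
        self.label_counts = Counter()               # label -> #training utterances
        self.word_counts = defaultdict(Counter)     # label -> word frequencies
        self.vocab = set()

    def train(self, examples):
        for text, label in examples:
            self.label_counts[label] += 1
            for w in text.lower().split():
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def classify(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best, best_lp = None, float("-inf")
        for label, n in self.label_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                # Add-one (Laplace) smoothing handles unseen words.
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

Trained on a handful of labelled utterances such as ("hi there", "GREET") and ("what time is it", "QUESTION"), the classifier assigns the most probable act to a new utterance.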
  • Item
    The logic of bunched implications: a memoir
    Horsfall, Benjamin Robert ( 2006)
    This is a study of the semantics and proof theory of the logic of bunched implications (BI), which is promoted as a logic of (computational) resources, and is a foundational component of separation logic, an approach to program analysis. BI combines an additive, or intuitionistic, fragment with a multiplicative fragment. The additive fragment has full use of the structural rules of weakening and contraction, and the multiplicative fragment has none. Thus it contains two conjunctive and two implicative connectives. At various points, we illustrate a resource view of BI based upon the Kripke resource semantics. Our first original contribution is the formulation of a proof system for BI in the newly developed proof-theoretical formalism of the calculus of structures. The calculus of structures is distinguished by its employment of deep inference, but we already see deep inference in a limited form in the established proof theory for BI. We show that our system is sound with respect to the elementary Kripke resource semantics for BI, and complete with respect to a formulation of the partially-defined Kripke resource semantics. Our second contribution is the development from a semantic standpoint of preliminary ideas for a hybrid logic of bunched implications (HBI). We give a Kripke semantics for HBI in which nominal propositional atoms can be seen as names for resources, rather than as names for locations, as is the case with related proposals for BI-Loc and for intuitionistic hybrid logic.
  • Item
    Local search methods for constraint problems
    Muhammad, Muhammad Rafiq Bin ( 2008-02)
    This thesis investigates the use of local search methods in solving constraint problems. Such problems are very hard in general, and local search offers a useful and successful alternative to existing techniques. The focus of the thesis is to analyze the techniques of invariants used in local search. The use of invariants has recently become the cornerstone of local search technology, as invariants provide a declarative way to specify incremental algorithms. We have produced a series of program libraries in C++ known as the One-Way-Solver. The One-Way-Solver includes the implementation of incremental data structures and is a useful tool for the implementation of local search. The One-Way-Solver is applied to two challenging constraint problems: Boolean satisfiability testing (SAT) and university course timetabling.
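The core idea of an invariant - a derived value kept consistent incrementally as decision variables change, rather than recomputed from scratch - can be illustrated with a minimal sketch. The One-Way-Solver itself is a C++ library with a richer interface; this class and its names are invented for illustration.

```python
class SumInvariant:
    """One-way invariant maintaining total = sum(values) incrementally.

    Local search moves change one variable at a time, so the invariant
    can be updated in O(1) instead of O(n) per move.
    """

    def __init__(self, values):
        self.values = list(values)
        self.total = sum(self.values)

    def assign(self, i, new_value):
        # Incremental update: adjust by the delta, no full recomputation.
        self.total += new_value - self.values[i]
        self.values[i] = new_value
```

A local search loop would call `assign` on each candidate move and read `total` (or a violation count built the same way) to evaluate the move's effect cheaply.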
  • Item
    Using minimal recursion semantics in Japanese question answering
    DRIDAN, REBECCA ( 2006-09)
    Question answering is a research field with the aim of providing answers to a user’s question, phrased in natural language. In this thesis I explore some techniques used in question answering, working towards the twin goals of using deep linguistic knowledge robustly as well as using language-independent methods wherever possible. While the ultimate aim is cross-language question answering, in this research experiments are conducted over Japanese data, concentrating on factoid questions. The two main focus areas, identified as the two tasks most likely to benefit from linguistic knowledge, are question classification and answer extraction. In question classification, I investigate the issues involved in the two common methods used for this task: pattern matching and machine learning. I find that even with a small amount of training data (2000 questions), machine learning achieves better classification accuracy than pattern matching with much less effort. The other issue I explore in question classification is the classification accuracy possible with named entity taxonomies of different sizes and shapes. Results demonstrate that, although the accuracy decreases as the taxonomy size increases, the ability to use soft decision making techniques as well as the high accuracies achieved in certain classes make larger, hierarchical taxonomies a viable option. For answer extraction, I use Robust Minimal Recursion Semantics (RMRS) as a sentence representation to determine similarity between questions and answers, and then use this similarity score, along with other information discovered during comparison, to score and rank answer candidates. Results were slightly disappointing, but close examination showed that 40% of errors were due to answer candidate extraction, while the scoring algorithm itself worked very well.
Interestingly, despite the lower accuracy achieved during question classification, the larger named entity taxonomies allowed much better accuracy in answer extraction than the smaller taxonomies.
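The general shape of scoring and ranking answer candidates by overlap with the question's semantic representation can be sketched as follows. This is a deliberately simplified illustration: RMRS comparison in the thesis matches structured semantic representations, not flat predicate sets, and the function names and predicate labels here are invented.

```python
def overlap_score(question_preds, candidate_preds):
    """Fraction of the question's predicates found in the candidate."""
    q, c = set(question_preds), set(candidate_preds)
    if not q:
        return 0.0
    return len(q & c) / len(q)

def rank_candidates(question_preds, candidates):
    """Rank (answer, predicate-list) pairs, best match first."""
    return sorted(candidates,
                  key=lambda ac: overlap_score(question_preds, ac[1]),
                  reverse=True)
```

Ranking failures upstream of this step - i.e. never extracting the right candidate at all - would dominate the error count regardless of how well the scoring works, which matches the error analysis reported above.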
  • Item
    Improving the efficiency and capabilities of document structuring
    MARSHALL, ROBERT ( 2007)
    Natural language generation (NLG), the problem of creating human-readable documents by computer, is one of the major fields of research in computational linguistics. The task of creating a document is extremely common in many fields of activity. Accordingly, there are many potential applications for NLG - almost any document creation task could potentially be automated by an NLG system. Advanced forms of NLG could also be used to generate a document in multiple languages, or as an output interface for other programs, which might ordinarily produce a less-manageable collection of data. They may also be able to create documents tailored to the needs of individual users. This thesis deals with document structure, a recent theory which describes those aspects of a document’s layout which affect its meaning. As well as its theoretical interest, it is a useful intermediate representation in the process of NLG. There is a well-defined process for generating a document structure using constraint programming. We show how this process can be made considerably more efficient. This in turn allows us to extend the document structuring task to allow for summarisation and finer control of the document layout. This thesis is organised as follows. Firstly, we review the necessary background material in both natural language processing and constraint programming.
  • Item
    On designing a mobile robot for RoboCup
    Peel, Andrew Gregory ( 2006-03)
    The Roobots are a robot soccer team that participated in the RoboCup small-sized robot league competition in 2000, 2001 and 2002, finishing in fourth place in 2002. This thesis describes the design of the robots in the 2002 team. Design issues for mobile robots in the RoboCup small-sized robot league are reviewed, and the design decisions are presented. Finally, some lessons for system design and project management learnt over the three years of competition are discussed.