Computing and Information Systems - Theses

  • Item
    Unputdownable: how the agencies of compelling story assembly can be modelled using formalisable methods from Knowledge Representation, and in a fictional tale about seduction
    Cardier, Beth (2012)
    As a story unfolds, its structure can drive a reader to want to know more, in a manner that writers sometimes refer to as unputdownable. I model this behaviour so that it can be applied in two fields: creative writing and Knowledge Representation. Two forms of answer to my thesis question therefore emerged – a formalisable diagrammatic method, and an excerpt from a fictional novel, The Snakepig Dialect.

The opening section of my thesis accounts for the theoretical exploration. This framework is designed to support a specific target product: a knowledge base capable of assembling fragments of information into causally coherent ‘stories.’ This assembly would be achieved through the identification of causal agents that connect the fragments, and a causal impetus that would enable the projection of possible outcomes, even when there is no precedent for them. This integration is managed by two novel features of my Dynamic Story Model. First, in order to facilitate accurate interpretation, I consider that multiple contextual situations must be arranged into relationships, just as concepts are positioned in semantic networks. Second, the associative priorities of these multiple inferences are managed by a principle that I term governance, in which the structures of some networks are able to modify and connect others. In order to extend current devices in Knowledge Representation so that these features can be represented, I draw on my own creative writing practice, as well as existing theories in Narratology, Discourse Processes, Causal Philosophy and Conceptual Change.

This model of unputdownability is expressed differently in my fictional submission. The tale is set in a future Australia, in which China is the dominant culture, the weather seems to be developing intentional behaviours, and Asia's largest defence laboratory sometimes selects unusual talents to work in its invention shop.
One apprentice in this institute, Lilah, falls in love with someone who seems unattainable. Instead of solving the assigned problem, she develops a formula for seduction, testing it on her beloved before she is capable of controlling her strange gift. Lilah’s seduction technique is based on a principle of governance similar to that described by my theoretical model. She learns how to seduce by offering only fragments of information about herself, drawing her beloved into her story by provoking wonder, which eventually bends her lover’s desires. Lilah’s tale also explores the challenge of modelling a new scientific theory, and she struggles with the same difficulty of articulating an elusive phenomenon that I have in this research (but with more dramatic consequences for her failures). As well as featuring the core concern of my research question in the plot, I have also used my model in the writing itself. By establishing terms of agency and allowing them to evolve, each section of text came to build on the last, so the reader could wonder how those terms might resolve. In this way, I anchored my theoretical propositions about stories in fictional practice, and gained insight into the writing process that enabled me to revive my ailing novel.
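The governance principle described above — the structure of one network modifying and connecting the nodes of another — can be illustrated with a toy sketch. This is a hypothetical simplification for illustration, not the thesis's diagrammatic method: a "governing" contextual network adds connections between story fragments wherever its own structure relates them.

```python
# Toy illustration of "governance": the structure of one network
# (a governing contextual situation) modifies and connects nodes in
# another. All names and rules here are hypothetical.

def govern(story_net, context_net):
    """Return a copy of story_net with edges added wherever the
    governing context links two concepts that both appear in it."""
    governed = {node: set(edges) for node, edges in story_net.items()}
    for node, linked in context_net.items():
        if node in governed:
            for other in linked:
                if other in governed:
                    # the context's structure connects story fragments
                    governed[node].add(other)
    return governed

# Fragments of a story, not yet causally connected.
story = {"stranger": set(), "gift": set(), "storm": set()}

# A contextual situation that relates some of those concepts.
seduction_context = {"stranger": {"gift"}, "gift": {"stranger"}}

result = govern(story, seduction_context)
print(result["stranger"])  # → {'gift'}
```

The point of the sketch is only that inference is driven by relationships *between* networks, not within a single one: the story fragments carry no structure of their own until the governing context imposes it.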
  • Item
    Toward semantic interoperability for software systems
    Lister, Kendall (2008)
    “In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57]

In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences, whether arising from pre-existing assumptions or as side-effects of the process of specification, are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability.

This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application. The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used.
The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data.

The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated, first on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions of the nature of the data encountered, along with an analysis of the strengths and weaknesses of the algorithm and of the ways in which it complements other approaches that have been proposed.

Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site that facilitates the resolution of semantic incompatibilities between software systems prior to deployment. It builds on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as a project progresses from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project’s life. The barriers to collaboration in software development are identified and steps are taken to overcome them.
The system presented draws on the recent successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia, and on a variety of techniques for ontology reconciliation, to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems.

In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility, with a particular focus on interaction between software systems, and between software systems and their users, as well as a detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
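The field-correspondence problem addressed by the second work can be sketched in miniature. The heuristic below is a hypothetical illustration, not the algorithm evaluated in the thesis: it infers that two differently named columns from heterogeneous tables correspond by comparing the overlap (Jaccard similarity) of their value sets.

```python
# Minimal sketch of inferring semantic correspondences between table
# fields by comparing their values. A hypothetical Jaccard-overlap
# heuristic, for illustration only.

def jaccard(a, b):
    """Jaccard similarity of two value collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_fields(table_a, table_b, threshold=0.5):
    """Pair each field of table_a with its best-scoring field in
    table_b, keeping only pairs at or above the threshold."""
    matches = {}
    for name_a, values_a in table_a.items():
        best = max(table_b, key=lambda name_b: jaccard(values_a, table_b[name_b]))
        score = jaccard(values_a, table_b[best])
        if score >= threshold:
            matches[name_a] = (best, round(score, 2))
    return matches

# Two tables from heterogeneous sources with differently named fields.
flights = {"origin": ["MEL", "SYD", "PER"], "code": ["QF1", "QF2", "VA3"]}
bookings = {"flight_no": ["QF1", "VA3", "QF9"], "city": ["MEL", "SYD", "BNE"]}

print(match_fields(flights, bookings))
# → {'origin': ('city', 0.5), 'code': ('flight_no', 0.5)}
```

A value-overlap heuristic of this kind needs no machine-readable meta-data, which is why instance-based matching is attractive for the messy web data described above; its weakness, equally visible here, is that it depends entirely on shared vocabulary between the sources.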