Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 56
  • Item
    Strategic information security policy quality assessment: a multiple constituency perspective
    MAYNARD, SEAN ( 2010)
    An integral part of any information security management program is the information security policy. The purpose of an information security policy is to define the means by which organisations protect the confidentiality, integrity and availability of information and its supporting infrastructure from a range of security threats. The tenet of this thesis is that the quality of information security policy is inadequately addressed by organisations. Further, although information security policies may undergo multiple revisions as part of a process development lifecycle and, as a result, may generally improve in quality, a more explicit, systematic and comprehensive process of quality improvement is required. A key assertion of this research is that a comprehensive assessment of information security policy requires the involvement of the multiple stakeholders in organisations that derive benefit from the directives of the information security policy. Therefore, this dissertation used a multiple-constituency approach to investigate how security policy quality can be addressed in organisations, given the existence of multiple stakeholders. The formal research question under investigation was: How can multiple constituency quality assessment be used to improve strategic information security policy? The primary contribution of this thesis to the Information Systems field of knowledge is the development of a model: the Strategic Information Security Policy Quality Model. This model comprises three components: a comprehensive model of quality components, a model of stakeholder involvement and a model for security policy development. The strategic information security policy quality model gives organisations a holistic perspective from which to manage the security policy quality assessment process. This research makes six main contributions: (1) it demonstrates that a multiple constituency approach is effective for information security policy assessment; (2) it develops a set of quality components for information security policy quality assessment; (3) it identifies that the efficiency of the security policy quality assessment process is critical for organisations; (4) it formalises security policy quality assessment within policy development; (5) it develops a strategic information security policy quality model; and (6) it identifies improvements that can be made to the security policy development lifecycle. The outcomes of this research contend that the security policy lifecycle can be improved by: enabling the identification of when different stakeholders should be involved, identifying those quality components that each of the different stakeholders should assess as part of the quality assessment, and showing organisations which quality components to include or to ignore based on their individual circumstances. This leads to a higher quality information security policy, and should impact positively on an organisation’s information security.
  • Item
    Towards interpreting informal place descriptions
    Tytyk, Igor (The University of Melbourne, 2012)
    Informal place descriptions are human-generated descriptions of locations, expressed by means of natural language in an arbitrary fashion. The aim we pursued in this thesis is finding methods for better automatic interpretation of situated informal place descriptions. This work presents a framework within which we attempt to automatically classify informal place descriptions for the accuracy of the location information they contain. Having an available corpus of informal place descriptions, we identified the placenames contained therein and manually annotated them for properties such as geospatial granularity and identifiability. First, we make use of the annotations and a machine learning method to conduct the classification task, and report accuracy scores reaching 84%. Next, we classify the descriptions again, but instead of using the manual annotations we identify the properties of placenames automatically.
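    As a rough illustration of the classification setup this abstract describes, the sketch below trains a classifier over hypothetical placename-derived features (the feature names and counts are invented for illustration) using scikit-learn; it is not the thesis's actual feature set or model.

    # Minimal sketch: classify place descriptions by locational accuracy using
    # hypothetical placename-level features (granularity, identifiability counts).
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each description is summarised by features derived from its placenames.
    train_features = [
        {"n_placenames": 3, "n_building_level": 2, "n_identifiable": 3},
        {"n_placenames": 1, "n_building_level": 0, "n_identifiable": 0},
    ]
    train_labels = ["accurate", "inaccurate"]  # gold labels from manual annotation

    model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
    model.fit(train_features, train_labels)

    test = {"n_placenames": 2, "n_building_level": 1, "n_identifiable": 2}
    print(model.predict([test])[0])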
  • Item
    Extracting characteristics of human-produced video descriptions
    Korvas, Matěj ( 2012)
    This thesis contributes to the SMILE project, which aims at video understanding. We focus on the final stage of the project, where information extracted from a video should be transformed into a natural language description. Working with a corpus of human-made video descriptions, we examine it to find patterns in the descriptions. We develop a machine-learning procedure for finding statistical dependencies between linguistic features of the descriptions. Evaluating its results when run on a small sample of data, we conclude that it can be successfully extended to larger datasets. The method is generally applicable for finding dependencies in data, and extends association rule mining methods with the option to specify distributions of features. We show future directions which, if followed, will lead to extracting a specification of common sentence patterns of video descriptions. This would allow for generating naturally sounding descriptions from the video understanding software.
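    The sketch below illustrates the flavour of association-rule-style dependency mining between binary linguistic features that this abstract mentions; the feature names and thresholds are invented, and the thesis's extension for specifying feature distributions is not reproduced.

    # Sketch: association-rule-style search for dependencies between binary
    # linguistic features of video descriptions. Feature names are hypothetical.
    from itertools import permutations

    descriptions = [
        {"has_agent", "verb_of_motion", "mentions_object"},
        {"has_agent", "verb_of_motion"},
        {"has_agent", "mentions_object"},
        {"verb_of_motion"},
    ]

    def rules(data, min_support=0.5, min_confidence=0.8):
        n = len(data)
        features = set().union(*data)
        for a, b in permutations(features, 2):
            support_a = sum(a in d for d in data) / n
            support_ab = sum(a in d and b in d for d in data) / n
            if support_ab >= min_support and support_ab / support_a >= min_confidence:
                yield (a, b, support_ab, support_ab / support_a)

    for a, b, sup, conf in rules(descriptions):
        print(f"{a} -> {b}  support={sup:.2f} confidence={conf:.2f}")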
  • Item
    Unputdownable: how the agencies of compelling story assembly can be modelled using formalisable methods from Knowledge Representation, and in a fictional tale about seduction
    Cardier, Beth ( 2012)
    As a story unfolds, its structure can drive a reader to want to know more, in a manner that writers sometimes refer to as unputdownable. I model this behaviour so that it can be applied in two fields: creative writing and Knowledge Representation. Two forms of answer to my thesis question therefore emerged – a formalisable diagrammatic method, and an excerpt from a fictional novel, The Snakepig Dialect. The opening section of my thesis accounts for the theoretical exploration. This framework is designed to support a specific target product: a knowledge base capable of assembling fragments of information into causally coherent ‘stories.’ This assembly would be achieved through the identification of causal agents that connect the fragments, and a causal impetus that would enable the projection of possible outcomes, even when there is no precedent for them. This integration is managed by two novel features of my Dynamic Story Model. First, in order to facilitate accurate interpretation, I consider that multiple contextual situations must be arranged into relationships, just as concepts are positioned in semantic networks. Second, the associative priorities of these multiple inferences are managed by a principle that I term governance, in which the structures of some networks are able to modify and connect others. In order to extend current devices in Knowledge Representation so that these features can be represented, I draw on my own creative writing practice, as well as existing theories in Narratology, Discourse Processes, Causal Philosophy and Conceptual Change. This model of unputdownability is expressed differently in my fictional submission. The tale is set in a future Australia, in which China is the dominant culture, the weather seems to be developing intentional behaviours, and Asia's largest defence laboratory sometimes selects unusual talents to work in its invention shop. One apprentice in this institute, Lilah, falls in love with someone who seems unattainable. Instead of solving the assigned problem, she develops a formula for seduction, testing it on her beloved before she is capable of controlling her strange gift. Lilah’s seduction technique is based on a principle of governance similar to that described by my theoretical model. She learns how to seduce by offering only fragments of information about herself, drawing her beloved into her story by provoking wonder, which eventually bends her lover’s desires. Lilah’s tale also explores the challenge of modelling a new scientific theory, and she struggles with the same difficulty of articulating an elusive phenomenon that I have in this research (but with more dramatic consequences for her failures). At the same time as featuring the core concern of my research question in the plot, I have also used my model to revive this novel. By establishing terms of agency and allowing them to evolve, each section of text came to build on the next, so the reader could wonder how they might resolve. In this way, I anchored my theoretical propositions about stories in fictional practice, and gained insight into the writing process, in order to revive my ailing novel.
  • Item
    Automatic parallelisation for Mercury
    Bone, Paul ( 2012)
    Multicore computing is ubiquitous, so programmers need to write parallel programs to take advantage of the full power of modern computer systems. However, the most popular parallel programming methods are difficult and extremely error-prone. Most such errors are intermittent, which means they may go unnoticed until after a product has been shipped; they are also often very difficult to fix. This problem has been addressed by pure declarative languages that support explicit parallelism. However, this does nothing about another problem: it is often difficult for developers to find tasks that are worth parallelising. When they can be found, it is often too easy to create too much parallelism, such that the overheads of parallel execution overwhelm the benefits gained from the parallelism. Also, when parallel tasks depend on other parallel tasks, the dependencies may restrict the amount of parallelism available. This makes it even harder for programmers to estimate the benefit of parallel execution. In this dissertation we describe our profile-feedback-directed automatic parallelisation system, which aims at solving this problem. We implemented this system for Mercury, a pure declarative logic programming language. We use information gathered from a profile collected from a sequential execution of a program to inform the compiler about how that program can be parallelised. Ours is, as far as we know, the first automatic parallelisation system that can estimate the parallelism available among any number of parallel tasks with any number of (non-cyclic) dependencies. This novel estimation algorithm is supplemented by an efficient exploration of the program's call graph, an analysis that calculates the cost of recursive calls (as this is not provided by the profiler), and an efficient search for the best parallelisation of N computations from among the 2^(N-1) candidates. We found that in some cases where our system parallelised a loop, spawning off virtually all of its iterations, the resulting programs exhibited excessive memory usage and poor performance. We therefore designed and implemented a novel program transformation that fixes this problem. Our transformation allows programs to gain large improvements in performance and, in several cases, almost perfect linear speedups. The transformation also allows recursive calls within the parallelised code to take advantage of tail recursion. Also presented in this dissertation are many changes that improve the performance of Mercury's parallel runtime system, as well as a proposal and partial implementation of a visualisation tool that assists developers with parallelising their programs, and helps researchers develop automatic parallelisation tools and improve the performance of the runtime system. Overall, we have attacked and solved a number of issues that are critical to making automatic parallelism a realistic option for developers.
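    To illustrate the size of the search space mentioned above: for a conjunction of N goals, each of the N-1 conjunction points can independently be made parallel (Mercury's `&`) or left sequential (`,`), giving 2^(N-1) candidate parallelisations. The sketch below (in Python, for illustration only) merely enumerates them for a small example; the thesis's system additionally estimates costs and dependencies to choose among the candidates.

    # Enumerate the ways a conjunction of N goals could be split into parallel
    # ("&") and sequential (",") conjunctions: 2**(N-1) candidates in total.
    from itertools import product

    def candidates(goals):
        for ops in product([",", "&"], repeat=len(goals) - 1):
            parts = [goals[0]]
            for op, goal in zip(ops, goals[1:]):
                parts.append(f" {op} {goal}")
            yield "".join(parts)

    goals = ["map(F, Xs, Ys)", "foldl(G, Ys, 0, Acc)", "write(Acc)"]
    for c in candidates(goals):
        print(c)   # 2**(3-1) = 4 candidates, including the fully sequential one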
  • Item
    The effects of sampling and semantic categories on large-scale supervised relation extraction
    Willy ( 2012)
    The purpose of relation extraction is to identify novel pairs of entities which are related by a pre-specified relation such as hypernym or synonym. The traditional approach to relation extraction is to build a dedicated system for a particular relation, meaning that significant effort is required to repurpose the approach for new relations. We propose a generic approach based on supervised learning, which provides a standardised process for performing relation extraction on different relations and domains. We explore the feasibility of the approach over a range of relations and corpora, focusing particularly on the development of a realistic evaluation methodology for relation extraction. In addition to this, we investigate the impact of semantic categories on extraction effectiveness.
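    A minimal sketch of the generic supervised setup this abstract describes, assuming invented context patterns and labels: candidate entity pairs are represented by the textual contexts in which they co-occur and classified for a pre-specified relation. This only illustrates the general approach, not the thesis's features or corpora.

    # Sketch of generic supervised relation extraction: decide whether a candidate
    # entity pair (placeholders X, Y) stands in a pre-specified relation, here
    # hypernymy. Patterns and labels are invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    # Contexts in which each candidate pair co-occurs, concatenated into one string.
    train_contexts = [
        "X such as Y . X including Y",   # typical hypernym patterns
        "X and Y . Y and X",             # coordination suggests siblings, not hypernymy
    ]
    train_labels = [1, 0]  # 1 = pair is in the relation, 0 = it is not

    model = make_pipeline(CountVectorizer(), LinearSVC())
    model.fit(train_contexts, train_labels)
    print(model.predict(["X such as Y"]))  # expected: [1]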
  • Item
    A constraint solver and its application to machine code test generation
    Hansen, Trevor Alexander ( 2012)
    Software defects are a curse: they are so difficult to find that most software is declared finished, only for defects to be discovered later. Ideally, software tools would find most, or all, of those defects for us. Bit-vector and array reasoning is important to the software testing and verification tools that aim to find those defects. The bulk of this dissertation investigates how to build a faster bit-vector and array solver. The usefulness of a bit-vector and array solver depends chiefly on its being correct and efficient. Our work is practical: mostly, we evaluate different simplifications that make problems easier to solve. In particular, we perform a bit-vector simplification phase that we call “theory-level bit-propagation”, which propagates information throughout the problem. We describe how we tested parts of this simplification to show it is correct. We compare three approaches to solving array problems. Surprisingly, on the problems we chose, we show that the simplest approach, a reduction from arrays and bit-vectors to bit-vectors, has the best performance. In the second part of this dissertation we study the symbolic execution of compiled software (binaries). We use the solver that we have built to perform symbolic execution of binaries. Symbolic execution is a program analysis technique that builds up expressions describing all the possible states of a program in terms of its inputs. Symbolic execution suffers from the “path explosion” problem, where the number of paths through a program grows tremendously large, making analysis impractical. We show an effective approach for addressing this problem.
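    The sketch below illustrates the general idea of propagating known bits through a bit-vector operation (here a bitwise AND), in the spirit of the “theory-level bit-propagation” phase named above; it is a simplified illustration, not the solver's actual algorithm.

    # Sketch of "known bits" propagation through a bitwise AND. Each variable is a
    # pair (mask, val): bit i is known iff mask has bit i set, and its known value
    # is then bit i of val.

    def and_forward(x, y, width=8):
        (xm, xv), (ym, yv) = x, y
        # A result bit is known 0 if either input bit is a known 0,
        # and known 1 if both input bits are known 1s.
        known_zero = (xm & ~xv) | (ym & ~yv)
        known_one = (xm & xv) & (ym & yv)
        mask = (known_zero | known_one) & ((1 << width) - 1)
        return mask, known_one

    x = (0b11110000, 0b10100000)   # high nibble known to be 1010, low nibble unknown
    y = (0b00001111, 0b00000101)   # low nibble known to be 0101, high nibble unknown
    print([bin(v) for v in and_forward(x, y)])   # six result bits become known zeros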
  • Item
    Combinatorial reasoning for sets, graphs and document composition
    Gange, Graeme Keith ( 2012)
    Combinatorial optimization problems require selecting the best solution from a discrete (albeit often extremely large) set of possible candidates. These problems arise in a diverse range of fields, and tend to be quite challenging. Rather than developing a specialised algorithm for each problem, however, modern approaches to solving combinatorial problems often involve transforming the problem to allow the use of existing general optimization techniques. Recent developments in constraint programming combine the expressiveness of general constraint solvers with the search reduction of conflict-directed SAT solvers, allowing real-world problems to be solved in reasonable time-frames. Unfortunately, integrating new constraints into a lazy clause generation solver is a non-trivial exercise. Rather than building a propagator for every special-purpose global constraint, it is common to express the global constraint in terms of smaller primitives. Multi-valued decision diagrams (MDDs) can compactly represent a variety of common global constraints, such as REGULAR and SEQUENCE. We present improved methods for propagating MDD-based constraints, together with explanation algorithms to allow integration into lazy clause generation solvers. While MDDs can be used to express arbitrary constraints, some constraints will produce an exponential representation. s-DNNF is an alternative representation which permits polynomial-size representation of a larger class of functions, while still allowing linear-time satisfiability checking. We present algorithms for integrating constraints represented as s-DNNF circuits into a lazy clause generation solver, and evaluate the algorithms on several global constraints. Automated document composition gives rise to many combinatorial problems. Historically these problems have been addressed using heuristics to give good enough solutions. However, given the modest size of many document composition tasks and recent improvements in combinatorial optimization techniques, it is possible to solve many practical instances in reasonable time. We explore the application of combinatorial optimization techniques to a variety of problems which arise in document composition and layout. First, we consider the problem of constructing optimal layouts for k-layered directed graphs. We present several models for constructing layouts with minimal crossings, and with maximum planar subgraphs; motivated by aesthetic considerations, we then consider weighted combinations of these objectives – specifically, lexicographically ordered objectives (first minimizing one, then the other). Next, we consider the problem of minimum-height table layout. We consider existing integer-programming based approaches, and present A* and lazy clause generation methods for constructing minimal height layouts. We empirically demonstrate that these methods are capable of quickly computing minimal layouts for real-world tables. We also consider the guillotine layout problem, commonly used for newspaper layout, where each region either contains a single article or is subdivided into two smaller regions by a vertical or horizontal cut. We describe algorithms for finding optimal layouts both for fixed trees of cuts and for the free guillotine layout problem, and demonstrate that these can quickly compute optimal layouts for instances with a moderate number of articles. The problems considered thus far have all been concerned with finding optimal solutions to discrete configuration problems.
When constructing diagrams, it is often desirable to enforce specified constraints while permitting the user to directly manipulate the diagram. We present a modelling technique that may be used to enforce such constraints, including non-overlap of complex shapes, text containment and arbitrary separation. We demonstrate that these constraints can be solved quickly enough to allow direct manipulation.
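    As a simplified illustration of MDD-based propagation, the sketch below prunes variable values that lie on no root-to-true path of a small hand-built MDD; the thesis's contributions (incremental propagation and explanation generation for lazy clause generation solvers) are not shown.

    # Sketch of the core of MDD propagation: a value survives only if some edge
    # labelled with it lies on a path from the root to "true" that is still
    # supported by the current domains. Every node in this tiny example reaches
    # "T", so a forward reachability pass suffices.

    # MDD over variables x0, x1: each node maps a value to a child node; "T" = true.
    mdd = {
        "r": {0: "a", 1: "b"},   # layer of x0
        "a": {0: "T", 1: "T"},   # layer of x1
        "b": {1: "T"},
    }
    layer_of = {"r": 0, "a": 1, "b": 1}

    def propagate(domains, root="r"):
        # Forward pass: which nodes are reachable from the root under the domains?
        reach = {root}
        for node in ("r", "a", "b"):          # nodes listed in layer (topological) order
            if node in reach:
                for val, child in mdd[node].items():
                    if val in domains[layer_of[node]] and child != "T":
                        reach.add(child)
        # Collect supported values: edges leaving reachable nodes.
        supported = [set() for _ in domains]
        for node in reach:
            for val, child in mdd[node].items():
                if val in domains[layer_of[node]]:
                    supported[layer_of[node]].add(val)
        return supported

    print(propagate([{0, 1}, {0, 1}]))   # both values of both variables supported
    print(propagate([{1}, {0, 1}]))      # x0=1 forces node "b", so x1 is pruned to {1}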
  • Item
    Automated analysis of time lapse microscopy images
    KAN, ANDREY ( 2012)
    Cells are the building blocks of life, and time lapse microscopy is a powerful way to study cells. Automated video acquisition and analysis of cells opens unprecedented opportunities, ranging from building novel mathematical models supported by rich data to automated drug screening. Unfortunately, accurate and completely automated analysis of cell images is a difficult task. Therefore human intervention is often required, for example, for tuning of segmentation and tracking algorithms or correcting the results of automated analysis. In this thesis, we aim to reduce the amount of manual work required, while preserving the accuracy of analysis. Two key tasks in automated analysis are cell segmentation and tracking. Segmentation is the process of locating cell outlines in cell images, while tracking refers to establishing cell identities across subsequent video frames. One of the main challenges of automated analysis is the substantial variability in cell appearance and dynamics across different videos and even within a single video. For example, there can be a few rapidly moving cells in the beginning of a video and a large number of cells stuck in a clump by the end of the video. Such variation has resulted in a large variety of cell segmentation and tracking algorithms. There has been a large body of work on automated cell segmentation and tracking. However, many methods make specific assumptions about cell morphology or dynamics, or involve a number of parameters that a user needs to set manually. This hampers the applicability of such methods across different videos. We first develop portable cell semi-segmentation and segmentation algorithms, where portability is achieved by using a flexible cell descriptor function. We then develop a novel cell tracking algorithm that has only one parameter, and hence can be easily adapted to different videos. Furthermore, we present a parameter-free variation of the algorithm. Our evaluation on real cell videos demonstrates that our algorithms are capable of achieving accurate results and outperforming other existing methods. Even the most sophisticated cell tracking algorithms make errors. A user may be required to manually review the tracking results and correct errors. To this end, we propose a semi-automated tracking framework that is capable of identifying video frames that are likely to contain errors. The user can then look only into these frames and not into all video frames. We find that our framework can significantly reduce the amount of manual work required to review and correct tracking results. Furthermore, in different videos, the most accurate results can be obtained by different methods and different parameter settings. It is often not clear which method should be chosen for a particular video. We address this problem with a novel method for ranking cell tracking systems without manual validation. Our method is capable of ranking cell trackers according to their fitness to a particular video, without the need for manual collection of the ground truth tracks. We simulate practical tracking scenarios and confirm the feasibility of our method. Finally, as an example of a biological assay, we consider evaluating the locomotion of Plasmodium parasites (which cause malaria) with application to automated anti-malaria drug screening. We track live parasites in a Matrigel medium and develop a numerical description of parasite tracks.
Our experiments show that this description captures changes in the locomotion in response to treatment with the toxin Cytochalasin D. Therefore, our description can form a basis for automated drug screening, where various treatments are applied to different cell populations by a robot, and the resulting tracks are evaluated quantitatively. In summary, this thesis makes six major contributions, highlighted above. These contributions can reduce the amount of manual work in cell image analysis, while achieving highly accurate results.
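    The sketch below shows the frame-to-frame matching step common to tracking-by-assignment approaches: detections in consecutive frames are linked by minimising total centroid distance with the Hungarian algorithm (via SciPy). It illustrates the general idea only, not the thesis's one-parameter tracker, and the centroid coordinates are invented.

    # Link cells detected in consecutive frames by minimising total centroid distance.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    frame_t = np.array([[10.0, 12.0], [40.0, 41.0], [75.0, 20.0]])   # centroids at time t
    frame_t1 = np.array([[12.0, 14.0], [73.0, 22.0], [42.0, 40.0]])  # centroids at time t+1

    cost = cdist(frame_t, frame_t1)              # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment
    for i, j in zip(rows, cols):
        print(f"cell {i} at t -> detection {j} at t+1 (distance {cost[i, j]:.1f})")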
  • Item
    Exploring information system (IS) driven organisational change: the interplay between IS and organisational work routines
    PENG, FEI ( 2012)
    Understanding the way an Information system (IS) changes an organisation is challenging. A large number of studies have been conducted to understand the secret behind obtaining desirable results from IS projects, however little consensus has been established in the field to date. A review of the literature shows that once introduced, information systems (IS) create a reciprocal, emergent and dynamic interplay with the organisation. This complex interplay in turn drives organisational change. In the currently literature, this interplay between the information system and the organisation has not been well addressed especially in the field of healthcare. There is a lack of detailed understanding about the process of organisational transformation after the introduction of an IS and there is a lack of understanding about the changes in the IS itself. This lack of understanding contributes to the difficulty in successfully managing organisational change projects that are implemented by introducing IS. A longitudinal in-depth single case study was conducted over a period of nine months in a major emergency and trauma centre to explore these issues. This study was designed to investigate the development of organisational change after the introduction of an information system using work routine as the study focus. Over 400 hours of observations and 20 interviews were carried out during the investigation period. The study findings were investigated and triangulated using both qualitative and quantitative methods. This research developed a model of IS and work routine change to explain the interplay that drives organisational change. This study has effectively broken down the work routine construct and closely examined the detailed changes that occurred over time by adopting Giddens’ structuration theory and Feldman’s conceptualisation of work routine. This model demonstrates that the interaction between IS and organisations is a cross-level phenomenon spanning across the individual, group and organisational level and involving three distinctive phases of development, the resource evaluation phase, the experimental adaptation phase and the structural realignment phase. This research offers a richer and deeper understanding of the intricate interplay between IS and the organisational work routine. The detailed investigation provides a systematic and practical way of managing IS projects for better achievement of intended business goals.