Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 251
  • Item
    Practical declarative debugging of Mercury programs
    MacLarty, Ian Douglas. (University of Melbourne, 2006)
  • Item
    A multistage computer model of picture scanning, image understanding, and environment analysis, guided by research into human and primate visual systems
    Rogers, T. J. (University of Melbourne, Faculty of Engineering, 1983)
    This paper describes the design and some testing of a computational model of picture scanning and image understanding (TRIPS), which outputs a description of the scene in a subset of English. This model can be extended to control the analysis of a three-dimensional environment and changes of the viewing system's position within that environment. The model design is guided by a summary of neurophysiological, psychological, and psychophysical observations and theories concerning visual perception in humans and other primates, with an emphasis on eye movements. These results indicate that lower-level visual information is processed in parallel in a spatial representation, while higher-level processing is mostly sequential, using a symbolic, post-iconic representation. The emphasis in this paper is on simulating the cognitive aspects of eye movement control and the higher-level post-iconic representation of images. The design incorporates several subsystems. The highest-level control module is described in detail, since computer models of eye movement which use cognitively guided saccade selection are not common. For other modules, the interfaces with the whole system and the internal computations required are outlined, as existing image processing techniques can be applied to perform these computations. Control is based on a production system, which uses a "hypothesising" system - a simplified probabilistic associative production system - to determine which production to apply. A framework for an image analysis language (TRIAL), based on "THINGS" and "RELATIONS", is presented, with algorithms described in detail for the matching procedure and the transformations of size, orientation, position, and so on. TRIAL expressions in the productions are used to generate "cognitive expectations" concerning future eye movements and their effects, which can influence the control of the system. Models of low-level feature extraction with parallel processing of iconic representations have been common in the computer vision literature, as are techniques for image manipulation and syntactic and statistical analysis. Parallel and serial systems have also been extensively investigated. This model proposes an integration of these approaches, using each technique in the domain to which it is suited. The model proposed for the inferotemporal cortex could also be suitable as a model of the posterior parietal cortex. A restricted version of the picture scanning model (TRIPS) has been implemented, which demonstrates the consistency of the model and also exhibits some behavioural characteristics qualitatively similar to primate visual systems. The TRIAL language is shown to be a useful representation for the analysis and description of scenes. Keywords: simulation, eye movements, computer vision systems, inferotemporal, parietal, image representation, TRIPS, TRIAL.
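    The control loop described above selects the next production by probabilistic "hypothesising" over cognitive expectations. The toy sketch below illustrates only that selection step; the rule names, weights, and expectation values are invented for illustration and are not the TRIPS/TRIAL rule base.
```python
# Toy sketch of a probabilistic "hypothesising" step: candidate productions are
# scored against current cognitive expectations, and one is drawn in proportion
# to its score. Rules and weights here are invented for illustration only.
import random

productions = {
    "saccade_to_high_contrast": 0.2,     # prior weight of each production
    "saccade_to_expected_object": 0.5,
    "refine_current_region": 0.3,
}

def hypothesise(expectations, rng=random.Random(1)):
    """Combine prior rule weights with evidence from current expectations."""
    scores = {name: w * expectations.get(name, 1.0) for name, w in productions.items()}
    total = sum(scores.values())
    r, acc = rng.random() * total, 0.0
    for name, s in scores.items():
        acc += s
        if r <= acc:
            return name

# Expectations generated by earlier TRIAL-style matches boost one production,
# which then becomes the most likely one to fire.
print(hypothesise({"saccade_to_expected_object": 2.0}))
```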
  • Item
    Safe acceptance of zero-confirmation transactions in Bitcoin
    Yang, Renlord ( 2016)
    Acceptance of zero-confirmation transactions in Bitcoin is inherently unsafe due to the lack of consistency in state between nodes in the network. As a consequence, Bitcoin users must endure a mean wait time of 10 minutes to accept confirmed transactions. Even so, due to the possibility of forks in the blockchain, users who want to avoid invalidation risks completely may have to wait for up to 6 confirmations, which results in a 60-minute mean wait time. This is untenable and remains a deterrent to the utility of Bitcoin as a payment method for merchants. Our work seeks to address this problem by introducing a novel insurance scheme that guarantees a deterministic outcome for transaction recipients. The proposed scheme uses standard Bitcoin scripts and transactions to produce inter-dependent transactions which are triggered or invalidated by the occurrence of potential double-spend attacks. A library to set up the insurance scheme, together with a test suite, was implemented for anyone interested in using it to establish a fully anonymous and trustless insurance arrangement. In our tests on Testnet, the insurance scheme successfully defended against 10 out of 10 double-spend attacks.
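    The payout logic of such a scheme hinges on detecting a confirmed conflicting spend of the insured payment's inputs. The sketch below illustrates only that detection step; the data structures, function names, and transaction ids are assumptions for illustration, and the thesis builds the actual guarantee from standard Bitcoin scripts and inter-dependent transactions rather than an off-chain check like this.
```python
# Conceptual sketch of double-spend detection: two distinct transactions conflict
# if they spend a common outpoint, and observing a confirmed conflict of the
# insured payment is what would trigger the insurance payout transaction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Outpoint:
    txid: str
    vout: int

@dataclass
class Tx:
    txid: str
    inputs: tuple  # outpoints spent by this transaction

def conflicts(a: Tx, b: Tx) -> bool:
    """Two distinct transactions conflict if they spend any common outpoint."""
    return a.txid != b.txid and bool(set(a.inputs) & set(b.inputs))

def payout_triggered(insured_payment: Tx, confirmed_txs: list) -> bool:
    """Payout fires if a conflicting spend (a double-spend) gets confirmed."""
    return any(conflicts(insured_payment, tx) for tx in confirmed_txs)

coin = Outpoint("coin-tx", 0)             # the customer's coin (hypothetical id)
payment = Tx("pay-tx", (coin,))           # zero-confirmation payment to the merchant
attack = Tx("attack-tx", (coin,))         # conflicting spend back to the attacker

print(payout_triggered(payment, [attack]))  # True: the payout would be triggered
```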
  • Item
    Automatic caloric expenditure estimation with smartphone's built-in sensors
    Cabello Wilson, Nestor Stiven ( 2016)
    Fitness-tracking systems are technologies commonly used to enhance people's lifestyles. Feedback, usability, and ease of acquisition are fundamental to achieving good physical condition. Users need constant motivation to keep their interest in the fitness system and, consequently, to stay on a healthy lifestyle track. However, although feedback is increasingly being incorporated in many fitness-tracking systems, usability and ease of acquisition remain shortcomings that need to be addressed. Features such as automatic activity identification, low energy consumption, simplicity, and goal-achievement notifications provide a good user experience. Nevertheless, most of these functions require the acquisition of a relatively expensive fitness-tracking device. Smartphones provide a partial solution by giving users easy access to multiple fitness applications, which reduces the need to purchase another gadget; nonetheless, improvements in the user experience are still necessary. On the other hand, wearable devices satisfy the usability requirement, but their cost is an impediment for some users. The system proposed in this research aims to address these issues by combining the benefits of mobile applications, such as feedback and ease of acquisition, with the usability that wearable devices provide, in a smartphone Android application. Data collected from a single user while performing a series of common daily activities, namely walking, jogging, cycling, climbing stairs, and walking downstairs, was used to classify and automatically identify these activities with an overall accuracy of 91%, and to identify the stairs activities with an accuracy of 81%. Finally, the caloric expenditure, which we consider the most important metric for motivating a user to perform physical activity, was estimated by following the oxygen consumption equations of the American College of Sports Medicine (ACSM).
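    The ACSM walking and running equations referred to above estimate oxygen uptake from speed and grade, and oxygen uptake converts to energy at roughly 5 kcal per litre of O2. The sketch below illustrates that calculation; the function names, activity mapping, and example numbers are illustrative assumptions, not the thesis's implementation.
```python
# Illustrative sketch of ACSM-style caloric expenditure estimation.
# The thesis follows the ACSM oxygen-consumption equations; the implementation
# details below (function names, activity mapping, example values) are assumptions.

def acsm_vo2_ml_kg_min(activity: str, speed_m_per_min: float, grade: float = 0.0) -> float:
    """Estimate oxygen uptake (ml O2 per kg per minute) from the ACSM equations."""
    if activity == "walking":
        return 3.5 + 0.1 * speed_m_per_min + 1.8 * speed_m_per_min * grade
    if activity == "jogging":  # ACSM running equation
        return 3.5 + 0.2 * speed_m_per_min + 0.9 * speed_m_per_min * grade
    raise ValueError(f"no ACSM equation wired up for activity: {activity}")

def kcal_burned(activity: str, speed_m_per_min: float, grade: float,
                weight_kg: float, minutes: float) -> float:
    """Convert VO2 to kilocalories, using roughly 5 kcal per litre of O2 consumed."""
    vo2 = acsm_vo2_ml_kg_min(activity, speed_m_per_min, grade)
    litres_o2 = vo2 * weight_kg * minutes / 1000.0
    return litres_o2 * 5.0

# Example: a 70 kg user walking at 80 m/min (about 4.8 km/h) on flat ground for 30 min.
print(round(kcal_burned("walking", 80.0, 0.0, 70.0, 30.0), 1))  # about 120.8 kcal
```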
  • Item
    Protecting organizational knowledge: a strategic perspective framework
    Dedeche, Ahmed ( 2014)
    Organizational knowledge is considered a valuable resource for providing competitive advantage. Extensive research has been done on strategies to encourage knowledge creation and sharing; however, limited research has been done on strategies for protecting this valuable resource from the risk of leakage. This research aims to help bridge this gap through two contributions: a model that describes knowledge leakage, and a framework of strategies for protecting competitive organisational knowledge. The research is grounded in two bodies of literature, knowledge management and information security, and aims to identify security strategies in the literature and adapt them to address knowledge protection needs.
  • Item
    A secure innovation process for start-ups: Minimising knowledge leakage and protecting IP
    Pitruzzello, Sam ( 2016)
    Failing to profit from innovations as a result of knowledge leakage is a key business risk for high-tech start-ups. Innovation is central to a start-up's success and competitive advantage in the marketplace; methods to protect intellectual property (IP) and minimise knowledge leakage are therefore crucial. However, high-tech start-ups have limited resources, making them more vulnerable to knowledge leakage risks than mature enterprises. Unfortunately, research on knowledge leakage and innovation processes falls short of addressing the needs of high-tech start-ups. Since knowledge leakage can occur in a number of ways involving many scenarios, organisations typically employ a variety of IP protection and knowledge leakage mitigation methods to minimise the risks. This minor thesis fills the research gaps on innovation processes and knowledge leakage for start-ups. A literature review was conducted into the bodies of research on knowledge leakage and innovation. Following the literature review, a secure innovation process (SIP) model was developed from the research. SIP includes the concept of a risk window, which allows a start-up to identify, assess and manage knowledge leakage risks at various stages of the innovation process.
  • Item
    An exploratory study of information security auditing
    Kudallur Ramanathan, Ritu Lakshmi ( 2016)
    Management of information security in organizations is a form of risk management in which threats to information assets are managed by implementing various controls. An important task in this cycle of information security risk management is audit, whose function is to provide assurance to organizations that their security controls are indeed working as intended. Numerous frameworks and guidelines are available for auditing information security; however, there is scant empirical evidence about the process followed in practice. This research explores how security audits are conducted in practice. To do so, a qualitative study was conducted in which 11 auditors were interviewed. The findings indicate a gap between what is expected of audit and what actually happens in practice. On exploring the accounting roots of audit, we postulate that this gap is due to differences in the conceptualization of risk between the accounting and information security disciplines.
  • Item
    On the predictability and efficiency of cultural markets with social influence and position biases
    Abeliuk Kimelman, Andrés ( 2016)
    Every day people make a staggering number of decisions about what to buy, what to read and where to eat. The interplay between individual choices and collective opinion is responsible for much of the observed complexity of social behaviors. The impact of social influence on the behavior of individuals may distort the quality perceived by customers, putting quality and popularity out of sync. Understanding how people respond to this information will enable us to predict social behavior and even steer it towards desired goals. In this thesis, we take a step forward by studying how and to what extent one can optimize cultural markets to reduce their unpredictability and improve their efficiency. Our results contrast with earlier work, which focused on showing the unpredictability and inequalities created by social influence. We show, experimentally and theoretically, that social influence can help correctly identify high-quality products and that much of its induced unpredictability can be controlled. We study a dynamic process in which choices are affected by social influence and by the position in which products are displayed. This model is used to explore the evolution of cultural markets under different policies for displaying items. We show that in the presence of social signals, by leveraging position effects, one can increase the expected profit and reduce the unpredictability of cultural markets. In particular, we propose two policies for displaying products and prove that the limiting distribution of market shares converges to a monopoly for the product of highest quality, making the market both optimal and predictable asymptotically. Finally, we put our theoretical results to experimental test and show a policy that mitigates the disparities between popularity and quality that emerge from social and position biases. We report results on a randomized social experiment that we conducted online. The experiment consisted of a web interface displaying science news articles that participants could read and later recommend. We evaluated different policies for presenting items to people and measured their impact on the unpredictability of the market. Our results provide unique insight into the impact of display policy decisions on the dynamics of cultural markets.
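    A minimal simulation helps make the interplay of quality, social signal, and position bias concrete. The sketch below is an illustrative toy model, not the thesis's exact market model or display policies: item appeal is assumed to be position weight times quality times a social signal, and two display policies (rank by popularity versus rank by quality) are compared.
```python
# Toy simulation of a cultural market with social influence and position bias.
# The appeal model, position weights, and policies are assumptions for illustration.
import random

def simulate(qualities, position_weights, rank_policy, steps=20000, seed=0):
    """rank_policy(qualities, downloads) -> item indices, best display position first."""
    rng = random.Random(seed)
    downloads = [0] * len(qualities)
    for _ in range(steps):
        order = rank_policy(qualities, downloads)
        # Appeal of the item shown in position p: position weight x quality x social signal.
        appeal = [position_weights[p] * qualities[i] * (1 + downloads[i])
                  for p, i in enumerate(order)]
        total = sum(appeal)
        r, acc = rng.random() * total, 0.0
        for p, a in enumerate(appeal):
            acc += a
            if r <= acc:
                downloads[order[p]] += 1
                break
    return downloads

qualities = [0.9, 0.7, 0.5, 0.3]          # latent product quality
position_weights = [1.0, 0.6, 0.4, 0.2]   # higher positions attract more attention

by_popularity = lambda q, d: sorted(range(len(q)), key=lambda i: -d[i])
by_quality    = lambda q, d: sorted(range(len(q)), key=lambda i: -q[i])

# Ranking by quality tends to concentrate downloads on the highest-quality item.
print("rank by popularity:", simulate(qualities, position_weights, by_popularity))
print("rank by quality:   ", simulate(qualities, position_weights, by_quality))
```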
  • Item
    Unsupervised all-words sense distribution learning
    Bennett, Andrew ( 2016)
    There has recently been significant interest in unsupervised methods for learning word sense distributions, or most frequent sense information, in particular for applications where sense distinctions are needed. In addition to their direct application to word sense disambiguation (WSD), particularly where domain adaptation is required, these methods have been successfully applied to diverse problems such as novel sense detection and lexical simplification. Furthermore, they could be used to supplement or replace existing sources of sense frequencies, such as SemCor, which have many significant flaws. However, a major gap in past work on sense distribution learning is that it has never been optimised for large-scale application to the entire vocabulary of a language, as would be required to replace sense frequency resources such as SemCor. In this thesis, we develop an unsupervised method for all-words sense distribution learning which is suitable for language-wide application. We first optimise and extend HDP-WSI, an existing state-of-the-art sense distribution learning method based on HDP topic modelling. This is mostly achieved by replacing HDP with the more efficient HCA topic modelling algorithm to create HCA-WSI, which is over an order of magnitude faster than HDP-WSI and more robust. We then apply HCA-WSI across the vocabularies of several languages to create LexSemTm, a multilingual sense frequency resource of unprecedented size. Of note, LexSemTm contains sense frequencies for approximately 88% of polysemous lemmas in Princeton WordNet, compared to only 39% for SemCor, and the quality of data in each is shown to be roughly equivalent. Finally, we extend our sense distribution learning methodology to multiword expressions (MWEs), which to the best of our knowledge is a novel task (as is applying any kind of general-purpose WSD method to MWEs). We demonstrate that sense distribution learning for MWEs is comparable to that for simplex lemmas in all important respects, and we expand LexSemTm with MWE sense frequency data.
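    Sense distribution learning of this kind induces topics over usages of a lemma and maps them onto dictionary senses. The sketch below illustrates one plausible final aggregation step, assuming topics have already been aligned to senses by some similarity measure; the similarity scores, sense labels, and function name are placeholders, not output of HDP-WSI/HCA-WSI or LexSemTm.
```python
# Illustrative aggregation of topic-model output into a word sense distribution:
# sense probability mass is the alignment-weighted sum of topic prevalence.
# All numbers and labels below are placeholders for illustration.
from collections import defaultdict

def sense_distribution(topic_prevalence, topic_sense_similarity):
    """topic_prevalence: {topic: weight}; topic_sense_similarity: {topic: {sense: sim}}."""
    scores = defaultdict(float)
    for topic, prevalence in topic_prevalence.items():
        sims = topic_sense_similarity[topic]
        norm = sum(sims.values()) or 1.0
        for sense, sim in sims.items():
            scores[sense] += prevalence * sim / norm
    total = sum(scores.values()) or 1.0
    return {sense: score / total for sense, score in scores.items()}

# Toy example for the lemma "bank": two induced topics, two hypothetical senses.
prevalence = {"t0": 0.7, "t1": 0.3}
similarity = {"t0": {"sense_river": 0.9, "sense_finance": 0.1},
              "t1": {"sense_river": 0.2, "sense_finance": 0.8}}
print(sense_distribution(prevalence, similarity))  # roughly 0.69 river, 0.31 finance
```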
  • Item
    Simulation of whole mammalian kidneys using complex networks
    Gale, Thomas ( 2016)
    Modelling of kidney physiology can contribute to understanding of kidney function by formalising existing knowledge into mathematical equations and computational procedures. Modelling in this way can suggest further research or stimulate theoretical development. The quantitative description provided by the model can then be used to make predictions and identify areas for further experimental or theoretical work, focusing on areas where the model and reality differ and creating an iterative process of improved understanding. Better understanding of organ function can contribute to the prevention and treatment of disease, as well as to efforts to engineer artificial organs. Existing research in the area of kidney modelling generally falls into one of three categories:
    • Morphological and anatomical models that describe the form and structure of the kidney
    • Tubule and nephron physiological models that describe the function of small internal parts of the kidney
    • Whole kidney physiological models that describe aggregate function but without any internal detail
    There is little overlap or connection between these categories of kidney models as they currently exist. This thesis brings together these three types of kidney models by computer-generating an anatomical model using data from rat kidneys, simulating dynamics and interactions using the resulting whole rat kidney model with explicit representation of each nephron, and comparing the simulation results against physiological data from rats. This thesis also describes methods for simulation and analysis of the physiological model using high-performance computer hardware. In unifying the three types of models above, this thesis makes the following contributions:
    • Development of methods for automated construction of anatomical models of arteries, nephrons and capillaries based on rat kidneys. These methods produce a combined network and three-dimensional Euclidean space model of kidney anatomy.
    • Extension of complex network kidney models to include modelling of blood flow in an arterial network and modelling of vascular coupling communication between nephrons using the same arterial network.
    • Development of methods for simulation of kidney models on high-performance computer hardware, and for storage and analysis of the resulting data. The methods used include multithreaded parallel computation and GPU hardware acceleration.
    • Analysis of results from whole kidney simulations explicitly modelling all nephrons in a rat kidney, including comparison with animal data at both the whole-organ level and the nephron level. Analysis methods that bring together the three-dimensional Euclidean space representation of anatomy with the complex network used for simulation are developed and applied.
    • Demonstration that the computational methods presented are able to scale up to the quantities of nephrons found in human kidneys.
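    To make the combined network-and-simulation idea concrete, the sketch below shows a toy version of the pattern described in the contributions above: nephrons attached to terminal arteries of a shared tree, each updated from its local arterial pressure plus a vascular-coupling term from neighbours on the same artery. The equations, parameters, and network are placeholders, not the thesis's physiological model.
```python
# Toy sketch of coupling nephron models through a shared arterial network.
# Dynamics and parameter values are placeholders for illustration only.
import numpy as np

n_nephrons = 8
artery_of = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # which terminal artery feeds each nephron
artery_pressure = np.array([13.3, 12.9, 12.5])   # kPa, one value per terminal artery

state = np.full(n_nephrons, 0.5)                 # abstract tubulo-glomerular feedback state
dt, coupling = 0.01, 0.2

def step(state):
    new = state.copy()
    for a in range(len(artery_pressure)):
        members = np.where(artery_of == a)[0]
        mean_neighbour = state[members].mean()
        for i in members:
            drive = 0.1 * (artery_pressure[a] - 12.0) - state[i]   # local relaxation
            drive += coupling * (mean_neighbour - state[i])        # vascular coupling
            new[i] = state[i] + dt * drive
    return new

for _ in range(1000):
    state = step(state)
print(np.round(state, 3))   # nephrons on the same artery settle to a shared level
```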