Computing and Information Systems Theses
33 results

Item: Practical declarative debugging of Mercury programs. MacLarty, Ian Douglas. (University of Melbourne, 2006)


Item: A multistage computer model of picture scanning, image understanding, and environment analysis, guided by research into human and primate visual systems. Rogers, T. J. (University of Melbourne, Faculty of Engineering, 1983). This paper describes the design and some testing of a computational model of picture scanning and image understanding (TRIPS), which outputs a description of the scene in a subset of English. This model can be extended to control the analysis of a three-dimensional environment and changes of the viewing system's position within that environment. The model design is guided by a summary of neurophysiological, psychological, and psychophysical observations and theories concerning visual perception in humans and other primates, with an emphasis on eye movements. These results indicate that lower-level visual information is processed in parallel in a spatial representation, while higher-level processing is mostly sequential, using a symbolic, post-iconic representation. The emphasis in this paper is on simulating the cognitive aspects of eye movement control and the higher-level post-iconic representation of images. The design incorporates several subsystems. The highest-level control module is described in detail, since computer models of eye movement which use cognitively guided saccade selection are not common. For the other modules, the interfaces with the whole system and the internal computations required are outlined, as existing image processing techniques can be applied to perform these computations. Control is based on a production system, which uses a "hypothesising" system (a simplified probabilistic associative production system) to determine which production to apply. A framework for an image analysis language (TRIAL), based on "THINGS" and "RELATIONS", is presented, with algorithms described in detail for the matching procedure and the transformations of size, orientation, position, and so on.
TRIAL expressions in the productions are used to generate "cognitive expectations" concerning future eye movements and their effects, which can influence the control of the system. Models of low-level feature extraction with parallel processing of iconic representations have been common in the computer vision literature, as are techniques for image manipulation and syntactic and statistical analysis. Parallel and serial systems have also been extensively investigated. This model proposes an integration of these approaches, using each technique in the domain to which it is suited. The model proposed for the inferotemporal cortex could also be suitable as a model of the posterior parietal cortex. A restricted version of the picture scanning model (TRIPS) has been implemented, which demonstrates the consistency of the model and also exhibits some behavioural characteristics qualitatively similar to primate visual systems. The TRIAL language is shown to be a useful representation for the analysis and description of scenes. Keywords: simulation, eye movements, computer vision systems, inferotemporal, parietal, image representation, TRIPS, TRIAL.

Item: Linking news and tweets. Lin, Xiaojie. (2016). In recent years, the rise of social media such as Twitter has been changing the way people acquire information. Meanwhile, traditional information sources such as news articles remain irreplaceable. This has led to a new branch of study on understanding the relationship between news articles and social media posts and on fusing information from these heterogeneous sources. In this study, we focus on techniques for linking news and relevant tweets. Specifically, given a set of news articles and a set of tweets, we aim to find a subset of relevant tweets for each article in the news set. We propose a supervised linking algorithm that employs multiple features such as time, named entities and event phrases, and which outperforms all the baseline methods in our experiments. Since humans can accurately determine whether a tweet is relevant to a news article, we study two types of crowdsourcing algorithms, namely validation algorithms and amendment algorithms. Validation algorithms put a news-tweet pair in the result set only if the pair is validated by the crowd in some way. We propose three validation algorithms, namely Highest, EP and EP+, all of which effectively collect relevant tweets while maintaining high precision in the experiments. Amendment algorithms first use a machine-based approach to generate an initial set of results, and then improve the result set by asking the crowd informative questions. The amendment algorithms Half, HHalf and LHalf also demonstrate their effectiveness in improving the quality of the result sets in the experiments. We extend our study to social network subgraph matching algorithms, which can be used to i) improve the performance of our linking algorithms; and ii) analyze and extract information from the news-tweet network or other social network graphs.
We propose an efficient GPU algorithm, which significantly outperforms the state-of-the-art CPU algorithm in terms of running time in our experiments when the workload is heavy.
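The feature families the abstract names for supervised linking (time, named entities, event phrases) can be illustrated with a toy scoring function. This is a hypothetical sketch, not the thesis's model: the dictionary keys, the Jaccard form, and the fixed weights are all stand-ins for features and coefficients a trained classifier would learn.

```python
def link_score(article, tweet):
    """Hypothetical relevance score for a (news article, tweet) pair, built
    from the three feature families named in the abstract: time proximity,
    named-entity overlap, and event-phrase overlap."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    # Time proximity: decays as the tweet drifts from publication time (hours)
    time_feat = 1.0 / (1.0 + abs(article["time"] - tweet["time"]) / 3600.0)
    entity_feat = jaccard(article["entities"], tweet["entities"])
    phrase_feat = jaccard(article["event_phrases"], tweet["event_phrases"])
    # Fixed illustrative weights standing in for learned coefficients
    return 0.2 * time_feat + 0.5 * entity_feat + 0.3 * phrase_feat
```

A linking algorithm would then keep, for each article, the tweets whose score clears a threshold tuned on labelled pairs.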

Item: A bit-vector solver based on word-level propagation. Wang, Wenxi. (2016). Reasoning with bit-vectors arises in a variety of applications in verification and cryptography. Michel and Van Hentenryck have proposed an interesting approach to bit-vector constraint propagation at the word level. Most of the operations are propagated in constant time, assuming the bit-vector fits in a machine word. In contrast, bit-vector SMT solvers usually solve bit-vector problems by bit-blasting, that is, mapping the resulting operations to conjunctive normal form clauses and using SAT technology to solve them. This also means generating intermediate variables, which can be an advantage, as these can be searched on and learnt about. Since each approach has advantages, it is important to see whether we can benefit from both by using a word-level propagation approach with learning. In this dissertation, we describe an approach to bit-vector solving using word-level propagation with learning. We provide alternative word-level propagators to those of Michel and Van Hentenryck, and we evaluate the variants of the approach empirically. We also experiment with different approaches to learning and backjumping in the solver. Based on the insights gained, we propose a portfolio solver using machine learning which can enhance state-of-the-art solvers. We show that, with careful engineering, a word-level propagation based approach can compete with (or complement) bit-blasting.
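The word-level propagation idea can be sketched for a single constraint. This is a minimal illustration, not the thesis's or Michel and Van Hentenryck's exact propagators: each bit-vector variable carries a (lo, hi) pair of machine words, where lo holds the bits known to be 1 and hi the bits that may still be 1, and one round of propagation for z = x & y touches all three domains with a constant number of bitwise operations.

```python
MASK = 0xF  # hypothetical 4-bit words, for illustration only

def prop_and(x, y, z):
    """One round of word-level propagation for the constraint z = x & y.
    Each domain is a (lo, hi) pair: lo = bits known 1, hi = bits possibly 1."""
    xl, xh = x
    yl, yh = y
    zl, zh = z
    # Forward: bits forced to 1 in both x and y force 1 in z;
    # a bit that cannot be 1 in x or y cannot be 1 in z.
    zl |= xl & yl
    zh &= xh & yh
    # Backward: a 1-bit in z forces a 1 in both x and y.
    xl |= zl
    yl |= zl
    # A 0-bit in z together with a forced 1 in the partner forces a 0.
    xh &= ~(yl & ~zh) & MASK
    yh &= ~(xl & ~zh) & MASK
    return (xl, xh), (yl, yh), (zl, zh)

def feasible(d):
    """A domain is consistent when no bit is both known-1 and known-0."""
    lo, hi = d
    return lo & ~hi & MASK == 0
```

For example, with y fixed to 0b1010 and z fixed to 0b1000, one round narrows an unconstrained x to "bit 3 must be 1, bit 1 must be 0", which is exactly what z = x & y demands.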

Item: A suite of greedy methods for set cover computation. Lim, Ching Lih. (2015). The Set Cover problem is simple to describe: given a collection of sets, each containing a subset of a universe of elements, the challenge is to find a smallest (cardinality) subcollection such that the union of the sets in the subcollection equals the universe. Since this problem is NP-hard, there is no known efficient exact algorithm for nontrivial problem instances. An approximation algorithm is thus employed to construct a nearly optimal solution within reasonable resource bounds. One such algorithm is the greedy approach, which repeatedly adds the set with the greatest number of uncovered elements to the solution until every element in the universe is covered. In this thesis we make several contributions to Set Cover and the greedy approach. First, the approach can be implemented with a priority queue that tracks set sizes, so that a set with the most uncovered elements can always be found quickly. However, this implementation is costly in one respect, as some or all sets in the queue must be updated to their correct sizes after each set is added. A variant of the approach, LazyGreedy, is therefore proposed to evaluate sets lazily, reducing that cost. We show that while the worst-case time complexity is worsened, on typical input instances the revised implementation is faster than the standard eager greedy implementation, while retaining the same approximation ratio as the greedy method. Second, to process a large instance on secondary storage efficiently, LazyGreedy is re-engineered to break an input into a logarithmic number of buckets of related set sizes and to process each bucket lazily. It has two innovations: automated management of a large number of buckets, and a fast search for a particular bucket.
Although the algorithm handles a number of large instances efficiently, it runs more slowly than one benchmark disk-friendly algorithm that has a weaker approximation ratio. Finally, we propose two input-processing algorithms for the following task: in a typical problem instance, some sets contain hapax legomena (singleton elements). These sets must appear in every valid solution, including the optimal solution, because the hapax legomena would not be covered otherwise. A partial solution of such sets can be pre-built, with the remaining sets and elements passed to a Set Cover algorithm to complete the solution. The in-memory algorithm, which counts element occurrences, produces complete solutions faster than the direct approach of solving without the preprocessing. The disk-oriented algorithm, which transforms an instance into a list of set-element items, solves more slowly than a semi-external covering algorithm due to the relatively expensive cost of sorting the list.
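The lazy evaluation idea behind LazyGreedy can be sketched with a standard binary heap. This is an illustrative reconstruction, not the thesis's implementation: stale uncovered-counts are left in the heap, and a popped candidate is re-evaluated and taken only if its refreshed gain still matches or beats the next entry; otherwise it is pushed back with its corrected count.

```python
import heapq

def lazy_greedy_set_cover(universe, sets):
    """Greedy set cover with lazily updated set sizes (illustrative sketch).
    `sets` is a list of Python sets; returns indices of chosen sets, or
    None if the universe cannot be covered."""
    uncovered = set(universe)
    # Max-heap via negated counts; counts may go stale as sets are chosen.
    heap = [(-len(s), i) for i, s in enumerate(sets)]
    heapq.heapify(heap)
    solution = []
    while uncovered and heap:
        _, i = heapq.heappop(heap)
        gain = len(sets[i] & uncovered)  # refresh the stale count lazily
        if gain == 0:
            continue
        if not heap or -heap[0][0] <= gain:
            # Still at least as good as the best remaining candidate: take it.
            solution.append(i)
            uncovered -= sets[i]
        else:
            # Went stale: re-insert with the corrected count and try again.
            heapq.heappush(heap, (-gain, i))
    return solution if not uncovered else None
```

Because gains only shrink as elements get covered, a popped set whose refreshed gain still beats the heap's top is genuinely the current maximum, so the lazy variant returns the same kind of solution as the eager greedy while skipping most size updates.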

Item: Reliable power transmission networks. Gumucio Escobar, Rodrigo. (2015). The thesis presents a study of models and algorithms for the expansion of power system networks. The study indicates that the traditional DC model is not appropriate for network expansion, while AC-based heuristics may be far from optimal. The study also shows that a network expansion algorithm based on the LPAC model provides high-quality primal solutions (or configurations with minor violations) on standard benchmarks in reasonable time.

Item: Location proof architectures. Seneviratne, Janaka. (2014). Upcoming location based services such as pay-as-you-drive insurance mandate verified locations. To enable such services, Location Proof Architectures (LPAs) have been proposed in the literature to verify or prove a user's location. Specifically, an LPA allows a user (or a device on behalf of its user) to obtain a proof of its presence at a location from a trusted third party. In addition to guarding against cheating users who may claim false locations, another major concern in an LPA is preserving user location privacy. To achieve this, a user's identity and location data should be maintained separately, in tandem with additional measures that avoid leaking sensitive identity and location data. We identify two types of location proof architectures: 1. sporadic location proofs for specific user locations, and 2. continuous location proofs for user routes. In this thesis, we present two sporadic LPAs. Firstly, we propose an LPA in which a user cannot falsely claim a location; this LPA also preserves user privacy by verifying a user's identity and location independently. Secondly, we propose an LPA that uses pseudonyms. We present a group pseudonym registering system for the LPA that requires no trusted third party, and show that our approach can achieve a guaranteed degree of privacy in the LPA. This thesis also introduces a framework for continuous LPAs. In a continuous LPA, a verifier receives a sequence of location samples along a user route and assigns a degree of confidence to each possible user route. Specifically, we explain a stochastic model which associates a degree of confidence with a user route based on the distribution pattern of location samples.

Item: User-centric cellular trajectory inference with partial knowledge. Perera, Batugahage Kushani Anuradha. (2014). The uncertainty associated with cellular network trajectories is a major problem for their use in location based applications such as person tracking. Inferring the real trajectory of a person from highly imprecise cellular trajectory data is a challenging task. GPS trajectories are subject to less uncertainty than cellular network trajectories and are preferred over them for many location based applications. However, GPS based location acquisition has limited applicability in certain contexts due to high power consumption and poor coverage. Cellular network based location acquisition is a good alternative to GPS in such scenarios. Consequently, a cellular trajectory inference method which can handle the uncertainty of cellular trajectories is promising to investigate, so that cellular trajectories can be utilised in location based applications. In this thesis, our main focus is on user-centric trajectory inference approaches, where the trajectory inference is performed by mobile phone users rather than by the mobile network operator. Many existing cellular trajectory inference methods use knowledge about the cellular network, such as the spatial distribution of neighbouring cell towers and signal strength information. However, this full knowledge about the cellular network is confidential to the mobile network operator, and mobile phone users are not guaranteed access to it. These techniques are therefore not applicable to user-centric cellular trajectory inference with only partial knowledge of the cellular network, which makes user-centric approaches even more challenging.
We propose a cellular trajectory inference method which utilises only a user's connected cell tower location sequence and the corresponding timing information, as this is the only type of knowledge guaranteed to be available to a mobile phone user. We suggest using a user's speed information as background knowledge, as it is easily accessible by the user compared to knowledge about the cellular network. Furthermore, we suggest exploiting the preciseness of the time dimension of cellular trajectories to obtain precise handover times. These precise handover times can be combined with speed information to accurately compute the distance a user has travelled within a cell. We propose a method to infer the straight line segments of a trajectory using this distance information. The inferred straight line trajectory segments are later used to estimate the other segments of the trajectory. We show theoretically and experimentally that our proposed method achieves higher accuracy than existing cellular trajectory inference methods in cases where the user's trajectory tends to be a combination of straight lines. The intuition behind straight line inference is that people follow the shortest path to reach a destination, avoiding unnecessary turns, and therefore often prefer a straight line path. Additional advantages of our proposed inference method include the ability to run locally on mobile devices and to perform trajectory inference within an unfamiliar environment, since no historical trajectory information or pre-training is required.
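The per-cell distance computation the abstract relies on can be sketched directly: given precise handover timestamps and an assumed constant speed, the distance travelled inside each cell is simply the speed multiplied by the time spent between consecutive handovers. Function and parameter names are illustrative, not from the thesis, and the constant-speed assumption is a simplification of its use of speed as background knowledge.

```python
def cell_distances(handover_times, speed_mps):
    """Distance (metres) travelled inside each cell, from precise handover
    timestamps (seconds, in increasing order) and an assumed constant user
    speed (metres per second). Illustrative sketch only."""
    return [speed_mps * (t2 - t1)
            for t1, t2 in zip(handover_times, handover_times[1:])]
```

With these per-cell distances in hand, a segment of cells whose distances are geometrically consistent with a single straight line through the cell sequence can be flagged as a straight line segment.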

Item: Rapid de novo methods for genome analysis. Hall, Ross Stephen. (2013). Next generation sequencing methodologies have resulted in an exponential increase in the amount of genomic sequence data available to researchers. Valuable tools in the initial analysis of such data for novel features are de novo techniques: methods which employ a minimum of comparative sequence information from known genomes. In this thesis I describe two heuristic algorithms for the rapid de novo analysis of genomic sequence data. The first algorithm employs a multiple Fast Fourier Transform, mapped to two dimensional spaces. The resulting bitmap clearly illustrates periodic features of a genome, including coding density. The compact representation allows megabase scales of genomic data to be rendered in a single bitmap. The second algorithm, RTASSS (RNA Template Assisted Secondary Structure Search), predicts potential members of RNA gene families that are related by similar secondary structure but not necessarily by conserved sequence. RTASSS can find candidate structures similar to a given template structure without the use of sequence homology. Both algorithms have linear complexity.
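The periodicity signal that such Fourier approaches exploit can be illustrated in miniature: protein-coding DNA tends to show elevated spectral power at period 3, one base per codon position. The sketch below computes that single DFT bin directly in pure Python; it illustrates the underlying signal, not the thesis's multiple-FFT bitmap algorithm, and the function name is illustrative.

```python
import cmath

def period3_power(seq):
    """Spectral power of a DNA sequence at period 3 (a single DFT bin per
    base, summed over the four bases and normalised by length). Coding
    regions typically score higher than non-coding ones."""
    n = len(seq)
    total = 0.0
    for base in "ACGT":
        # Binary indicator sequence for this base
        x = [1.0 if c == base else 0.0 for c in seq]
        # DFT coefficient at frequency 1/3 (period 3)
        s = sum(x[k] * cmath.exp(-2j * cmath.pi * k / 3) for k in range(n))
        total += abs(s) ** 2
    return total / n
```

A perfectly codon-periodic sequence such as "ATG" repeated scores strongly, while a sequence with no period-3 structure scores near zero; mapping such per-window scores onto a bitmap is what makes coding density visible at megabase scale.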