Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 450
  • Item
    Practical declarative debugging of Mercury programs
    MacLarty, Ian Douglas. (University of Melbourne, 2006)
  • Item
    A multistage computer model of picture scanning, image understanding, and environment analysis, guided by research into human and primate visual systems
    Rogers, T. J. (University of Melbourne, Faculty of Engineering, 1983)
    This paper describes the design and some testing of a computational model of picture scanning and image understanding (TRIPS), which outputs a description of the scene in a subset of English. This model can be extended to control the analysis of a three-dimensional environment and changes of the viewing system's position within that environment. The model design is guided by a summary of neurophysiological, psychological, and psychophysical observations and theories concerning visual perception in humans and other primates, with an emphasis on eye movements. These results indicate that lower-level visual information is processed in parallel in a spatial representation, while higher-level processing is mostly sequential, using a symbolic, post-iconic representation. The emphasis in this paper is on simulating the cognitive aspects of eye movement control and the higher-level post-iconic representation of images. The design incorporates several subsystems. The highest-level control module is described in detail, since computer models of eye movement which use cognitively guided saccade selection are not common. For other modules, the interfaces with the whole system and the internal computations required are outlined, as existing image processing techniques can be applied to perform these computations. Control is based on a production system, which uses a "hypothesising" system - a simplified probabilistic associative production system - to determine which production to apply. A framework for an image analysis language (TRIAL), based on "THINGS" and "RELATIONS", is presented, with algorithms described in detail for the matching procedure and the transformations of size, orientation, position, and so on. TRIAL expressions in the productions are used to generate "cognitive expectations" concerning future eye movements and their effects, which can influence the control of the system.
    Models of low-level feature extraction, with parallel processing of iconic representations, have been common in the computer vision literature, as have techniques for image manipulation and syntactic and statistical analysis. Parallel and serial systems have also been extensively investigated. This model proposes an integration of these approaches, using each technique in the domain to which it is suited. The model proposed for the inferotemporal cortex could also be suitable as a model of the posterior parietal cortex. A restricted version of the picture scanning model (TRIPS) has been implemented, which demonstrates the consistency of the model and also exhibits some behavioural characteristics qualitatively similar to primate visual systems. The TRIAL language is shown to be a useful representation for the analysis and description of scenes. Key words: simulation, eye movements, computer vision systems, inferotemporal, parietal, image representation, TRIPS, TRIAL.
  • Item
    Concept-based Decision Tree Explanations
    Mutahar, Gayda Mohameed Q. ( 2021)
    This thesis evaluates whether training a decision tree based on concepts extracted from a concept-based explainer can increase interpretability for Convolutional Neural Network (CNN) models and boost the fidelity and performance of the explainer used. CNNs for computer vision have shown exceptional performance in critical industries. However, their complexity and lack of interpretability are a significant barrier to deploying CNNs. Recent studies to explain computer vision models have shifted from extracting low-level features (pixel-based explanations) to mid- or high-level features (concept-based explanations). The current research direction tends to use extracted features to develop approximation algorithms, such as linear or decision tree models, to interpret an original model. In this work, we modify one of the state-of-the-art concept-based explainers and propose an alternative framework named TreeICE. We design a systematic evaluation based on the requirements of fidelity (how well approximate models match the original model's labels), performance (how well approximate models match ground-truth labels), and interpretability (how meaningful approximate models are to humans). We conduct a computational evaluation (for fidelity and performance) and human subject experiments (for interpretability). We find that TreeICE outperforms the baseline in interpretability and generates more human-readable explanations in the form of a semantic tree structure. This work highlights how important it is to have more understandable explanations when interpretability is crucial.
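The core idea above, fitting a shallow decision tree to imitate a CNN's predictions from concept activations and then measuring fidelity, can be sketched as follows. This is a minimal illustration using synthetic data and scikit-learn, not the TreeICE implementation; the concept scores and the stand-in "CNN" labelling rule are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical setup: 200 images, 5 concept activation scores per image
# (how strongly each visual concept fires for the image).
concepts = rng.random((200, 5))

# Stand-in for the CNN's predicted labels: a simple synthetic rule here.
cnn_labels = (concepts[:, 0] + concepts[:, 1] > 1.0).astype(int)

# Train a shallow decision tree to imitate the CNN on concept features.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(concepts, cnn_labels)

# Fidelity: agreement between the surrogate tree and the CNN's labels.
fidelity = (tree.predict(concepts) == cnn_labels).mean()
print(f"fidelity to CNN labels: {fidelity:.2f}")
```

The shallow tree itself is the explanation: each root-to-leaf path reads as a human-checkable rule over concepts.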
  • Item
    Straggler Mitigation in Distributed Behavioural Simulations
    Bin Khunayn, Eman Abdulaziz A ( 2021)
    Behavioural simulation (B-SIM) has been widely used to understand real-world phenomena. Running such a large-scale simulation requires high computational power, which can be provided through parallel distributed computing. To preserve the correct behaviour in the system, the implementations follow the bulk synchronous parallel (BSP) computational model. The processing in BSP is divided into iterations running on multiple machines/workers, followed by a global barrier synchronisation. Unfortunately, the BSP model is plagued by the straggler problem, where a delay in any process at any iteration leads to a slowdown in the entire simulation. Stragglers may occur for many reasons, including imbalanced workload distribution or communication and synchronisation delays. Moreover, this problem is further exacerbated as the system scales up. Industry and academia have been working on addressing the straggler problem in distributed systems for a long time. Though mitigating the straggler effect in distributed B-SIM is very challenging, it is fundamental to improving system performance, cost, and utilisation. This thesis presents straggler mitigation techniques for distributed B-SIM. It focuses on reducing the straggler effect caused by computation, communication, and synchronisation delays in BSP systems that run on shared-nothing architectures. The thesis proposes novel techniques to mitigate the effect of computation and communication stragglers at the application level, and makes the following key contributions to maximise B-SIM performance:
    - A GridGraph load balancing and partitioning method that reduces computation stragglers by balancing the workload distribution in B-SIM among multiple workers.
    - The Priority Synchronous Parallel (PSP) model, a novel parallel computational model that exploits data dependencies to reduce the communication stragglers' effect.
    - The Priority Asynchronous Parallel (PAP) computational model, which utilises data dependencies further to reduce computation stragglers in addition to the communication stragglers' effect.
    - Neighbourhood Diffusion (NDiff), a dynamic load balancing (DLB) algorithm that dynamically mitigates computation stragglers caused by imbalanced workload distribution at runtime.
    - The On-demand Dynamic Resource Allocation (On-demand DRA) method, which dynamically allocates and releases computing resources (workers) to mitigate computation stragglers caused by highly imbalanced workload distribution.
    All proposed algorithms are implemented and evaluated using a microscopic traffic simulator, SMARTS [1], as an example of a B-SIM running traffic simulations of three big cities - Melbourne, Beijing, and New York - with different traffic and straggler scenarios.
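The straggler effect in BSP can be illustrated with a toy calculation: because every superstep ends at a global barrier, its duration is set by the slowest worker. This sketch is illustrative only; the per-worker times are made up and are not measurements from SMARTS.

```python
# Toy illustration of the BSP barrier: a superstep lasts as long as
# its *slowest* worker, so one straggler stalls everyone.
def bsp_total_time(worker_times_per_step):
    """worker_times_per_step: list of supersteps, each a list of
    per-worker compute+communication times (arbitrary units)."""
    return sum(max(step) for step in worker_times_per_step)

balanced = [[1.0, 1.0, 1.0, 1.0]] * 5   # 5 supersteps, 4 even workers
straggler = [[1.0, 1.0, 1.0, 4.0]] * 5  # one slow worker per superstep

print(bsp_total_time(balanced))   # 5.0
print(bsp_total_time(straggler))  # 20.0: a single slow worker per step
                                  # quadruples the whole run
```

Three of the four workers sit idle at the barrier in the second scenario, which is exactly the wasted capacity the thesis's load balancing and relaxed-synchronisation models target.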
  • Item
    Traffic Optimization in The Era of Connected Autonomous Vehicles
    Motallebi, Sadegh ( 2021)
    Traffic optimization, especially at large scale, is an important and challenging problem due to the highly dynamic and unpredictable nature of traffic. The incoming era of connected autonomous vehicles (CAVs) offers a great opportunity for traffic optimization. The routes followed by CAVs can be assigned by a centralized system with better predictability. By collecting highly detailed real-time traffic data from sensors and CAVs, a traffic management system will have a full view of the entire road network, allowing it to plan traffic in a virtual world that replicates the real road network very closely. This thesis presents the problem of route assignment for CAVs, with the aim of optimizing traffic at the network level. We formulate the research problem as a streaming route assignment problem and address intersecting routes, a key cause of delays and long queues at junctions. We focus on finding road network routes with few intersections for vehicles. We propose two route assignment algorithms to solve the problem: the Local Detour Algorithm (LDA) and the Multiple Intersection Reduction Algorithm (MIRA). LDA uses traffic information at road links or road junctions during route allocation, while MIRA enlarges the scope of traffic information to obtain a big picture of traffic conditions. LDA is efficient because it searches for detours in a small area around congested junctions. MIRA, on the other hand, is more complicated, as it searches for longer detours not only from congested junctions but also from their surrounding junctions that will become congested in the near future. The results show that the route allocator can decrease traffic congestion significantly by utilizing MIRA. By assigning new routes on roads, traffic conditions can change significantly after a certain time, which means that the real-time traffic conditions will not be an accurate estimate of the near future.
    What if the route allocator could predict the effect of route assignments on roads? Traffic congestion prediction enables the route allocator to assess traffic conditions accurately. For a specific type of road network in which intersections are signalized, we propose a predictive congestion model by modifying a real-time queue-based congestion model. The model predicts traffic congestion for each road link based on the corresponding number of route allocations. We also propose a route assignment algorithm, the Traffic Congestion Aware Route Assignment Algorithm (TCARA), which uses the predictive congestion model. TCARA is designed for a specific type of road network, while MIRA is a general route assignment algorithm with no prior assumption on road intersections (i.e., whether they are signalized or not), making it a good candidate for a wider range of road networks. Another factor that can change traffic conditions is varying traffic demand over time. Thus, if a route is optimized based on the current traffic conditions while the traffic demand is not stable, the route may be ineffective. What happens if prior temporal traffic data for particular traffic conditions of a road network is available? Temporal traffic data captures historical traffic conditions collected at regular time intervals. The route allocator can then be aware of changes in traffic demand that may occur in the near future, so it can optimize traffic further by having access to such data. We propose a new route assignment algorithm, the Temporal Multiple Intersection Reduction Algorithm (T-MIRA), by extending MIRA to leverage prior temporal traffic data for reducing traffic congestion. In this thesis, MIRA is the main contribution. We extend it for situations where traffic conditions evolve due to the impact of route assignments or changes in traffic demand. We investigate two different approaches: using a predictive congestion model in TCARA and using temporal traffic data in T-MIRA.
An advanced traffic management system may utilize different approaches and solutions in an integrated way to achieve the best outcome in the real world.
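The guiding intuition behind intersection reduction, preferring candidate routes that share few junctions with already-assigned routes, can be sketched roughly as follows. This is a hypothetical simplification, not the published LDA or MIRA algorithms; the junction labels and the load-based cost function are invented for illustration.

```python
# Illustrative sketch only: among candidate routes for a new vehicle,
# pick the one passing through the least-used junctions so far.
from collections import Counter

def pick_route(candidates, assigned_routes):
    """candidates / assigned_routes: lists of routes, each a list of
    junction labels. Cost of a candidate = total number of times its
    junctions already appear in assigned routes."""
    junction_load = Counter(j for route in assigned_routes for j in route)
    return min(candidates, key=lambda r: sum(junction_load[j] for j in r))

assigned = [["A", "B", "C"], ["B", "C", "D"]]
candidates = [["A", "B", "D"],   # reuses busy junctions A, B, D
              ["A", "E", "D"]]   # detours through quiet junction E
print(pick_route(candidates, assigned))  # ['A', 'E', 'D']
```

A real system would weight junctions by predicted congestion and time of arrival rather than a raw count, which is the direction TCARA and T-MIRA take.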
  • Item
    Avoiding Bad Traffic: Analyzing the Past, Adapting the Present, and Estimating the Future
    Aldwyish, Abdullah Saleh ( 2021)
    Mobility is essential for modern societies. However, due to the increasing demand for mobility, traffic congestion poses a significant challenge to economic growth and advancement for many cities worldwide. At the same time, the widespread availability of location-aware devices has led to a sharp increase in the amount of traffic data generated, thereby providing an opportunity for intelligent transportation systems to emerge as one of the main cost-effective methods for traffic congestion mitigation. This boost in traffic data has led to a new generation of live navigation services that depend on traffic estimation to provide up-to-date navigation advice. Intelligent transportation systems increase the utilization of existing infrastructure and help drivers make better navigation decisions by providing actionable traffic information. However, a fundamental shortcoming of existing intelligent navigation systems is that they do not consider the evolution of traffic and instead route drivers based on snapshot traffic conditions. This is especially critical in the presence of traffic incidents, where the impact of the incident introduces significant variation in the traffic conditions around it as events unfold. This thesis proposes three contributions focusing on traffic estimation and forecasting to help drivers avoid bad traffic, especially around traffic incidents. The first contribution is an automated traffic management service that helps drivers avoid traffic events based on analyses of historical trajectory data from other drivers. Users subscribe to the service and, when a traffic event occurs, the service provides advice based on all drivers' actions during a similar traffic event in the past. We present a solution that combines a graph search with a trajectory search to find the fastest path that was taken to avoid a similar event in the past.
    The intuition behind our solution is that we can avoid a traffic event by following the traces of the best driver from a similar situation in the past. The second contribution is a system that uses real-time traffic information and fast traffic simulations to adapt to traffic incident impact and generate navigation advice. In this work, we use faster-than-real-time simulations to model the evolution of traffic events and help drivers proactively avoid congestion caused by events. The system can subscribe to real-time traffic information and estimate the traffic conditions using fast simulations without the need for historical data. We evaluate our approach through extensive experiments that test the performance, accuracy, and quality of our system's navigation advice with real data obtained from the TomTom Traffic API. For the third contribution, we propose effective deep learning models for large-scale citywide traffic forecasting. In addressing this problem, our goal is to predict traffic conditions for thousands of sites across the city. Such large-scale predictions can be used by navigation systems to help drivers avoid congestion. We propose a traffic forecasting model based on deep convolutional networks to improve the accuracy of citywide traffic forecasting. Our proposed model uses a hierarchical architecture that captures traffic dynamics at multiple spatial resolutions. We apply a multi-task learning scheme based on this architecture, which trains the model to predict traffic at different resolutions. Our model helps provide a coherent understanding of traffic dynamics by capturing short and long spatial dependencies between different regions in a city. Experimental results on real datasets show that our model can achieve competitive results while being more computationally efficient.
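The first contribution's idea of replaying the best past driver's trace can be sketched with a toy data model (assumed here, not the thesis's actual representation): each historical trajectory is a travel time plus the road links it used, and we return the fastest one that avoided the links affected by a similar past incident.

```python
# Toy sketch: pick the fastest historical trajectory that avoided the
# links blocked by a similar past incident.
def best_past_trajectory(trajectories, blocked_links):
    """trajectories: list of (travel_time, [link_ids]) pairs recorded
    during a similar past event. Returns the fastest safe one, or None."""
    safe = [t for t in trajectories if not (set(t[1]) & blocked_links)]
    return min(safe, key=lambda t: t[0]) if safe else None

past = [(12.0, ["r1", "r2", "r3"]),   # drove through incident link r2
        (15.0, ["r1", "r4", "r3"]),   # detoured, but slow
        (13.5, ["r1", "r5", "r3"])]   # detoured, fastest safe option
print(best_past_trajectory(past, blocked_links={"r2"}))
# (13.5, ['r1', 'r5', 'r3'])
```

The thesis combines this trajectory lookup with a graph search so that advice is still produced when no single past trace covers the whole origin-destination pair.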
  • Item
    Computational models of visual search and attention in natural images: from neural circuits to search strategies
    Rashidi, Shima ( 2021)
    Humans use visual search constantly in their daily activities, from searching for their keys to looking out for pedestrians while driving. Cortical visual attention and human eye movements are among the mechanisms that enable effective visual search. In this thesis, we propose a theoretical model of eye movements in a visual search task as well as a computational model of the neural mechanisms underlying visual attention. Towards computationally modeling human eye movements, we propose a model of target detectability in natural scenes which can be used by a Bayesian ideal searcher. The model uses a convolutional neural network as a feature extractor for the target and background, and uses signal detection theory to estimate detectability as the discriminability of the two feature distributions. We collect the detectability of a known target on 18 different backgrounds from human observers in a two-alternative forced-choice task. We use the collected detectabilities to verify and scale the ones estimated by our model. We further feed the collected detectabilities to a Bayesian ideal observer to predict human eye movements in a visual search task in natural scenes. We collect human eye movements on the same 18 natural backgrounds in a search for the known target and use them as the ground truth. Our model closely follows some statistical parameters of the collected eye movements, supporting the idea that humans search near-optimally in natural images. We further generalize the detectability model to any target and background, and apply the Bayesian ideal observer model to a real-world dataset of pedestrians in various contexts. To the best of our knowledge, we are the first to provide an unsupervised gaze prediction algorithm on natural scenes which uses the Bayesian ideal observer and does not need human eye movements for training. The presented model has potential applications in autonomous driving.
    Towards computationally modeling the neural mechanisms of visual attention, we propose a large-scale computational model of attentional enhancement of visual processing, based on the idea of neural oscillations being fed back to attended features or locations in a scene. Our proposed model supports the idea that neural oscillation mediates spatial attention, and applies the same circuit to feature-based attention as well. The presented models can be used in various human-gaze-assisted artificial intelligence systems, as well as enhancing our knowledge of the human visual system.
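The signal-detection step above, estimating detectability as the discriminability d' of the target and background feature distributions, can be sketched numerically. The 1-D Gaussian features below are synthetic stand-ins for the CNN feature responses used in the thesis; the means and spreads are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-ins for CNN feature responses: one sample set
# for target-present patches, one for background-only patches.
target_feats = rng.normal(loc=2.0, scale=1.0, size=1000)
background_feats = rng.normal(loc=0.0, scale=1.0, size=1000)

def d_prime(target, background):
    """Discriminability index d': mean separation over pooled spread."""
    pooled_sd = np.sqrt((target.var(ddof=1) + background.var(ddof=1)) / 2)
    return (target.mean() - background.mean()) / pooled_sd

print(f"d' = {d_prime(target_feats, background_feats):.2f}")  # ~2.0
```

A Bayesian ideal searcher then uses a per-location d' like this to weigh where the next fixation yields the most information about the target's position.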
  • Item
    Scalable contrast pattern mining for network traffic analysis
    Alipourchavary, Elaheh ( 2021)
    Contrast pattern mining is a data mining technique that characterises significant changes between datasets. Contrast patterns identify nontrivial differences between the classes of a dataset, interesting changes between multiple datasets, or emerging trends in a data stream. In this thesis, we focus on characterizing changes in Internet traffic using contrast pattern mining. For example, network managers require a compact yet informative report of significant changes in network traffic for security and performance management. In this context, contrast pattern mining is a promising approach to provide a concise and meaningful summary of significant changes in the network. However, the volume and high dimensionality of network traffic records introduce a range of challenges for contrast pattern mining. In particular, these challenges include the combinatorial search space for contrast patterns, the need to mine contrast patterns over data streams, and identifying new changes as opposed to rare recurring changes. In this thesis, we introduce a range of novel contrast mining algorithms to address these challenges. We first introduce the use of contrast pattern mining in network traffic analysis. We show that contrast patterns have a strong discriminative power that makes them suitable for data summarization and for finding meaningful and important changes between different traffic datasets. We also propose several evaluation metrics that reflect the interpretability of patterns for security managers. In addition, we demonstrate on real-life datasets that the vast majority of extracted patterns are pure, i.e., most change patterns correspond to either attack traffic or normal traffic, but not a mixture of both.
    We propose a method to efficiently extract contrast patterns between two static datasets. We extract a high-quality set of contrast patterns by using only the most specific patterns to generate a compact and informative report of significant changes for network managers. By eliminating minimal patterns in our approach, we considerably reduce the overlap between generated patterns, and by reducing redundant patterns, we substantially improve the scalability of contrast pattern mining and achieve a significant speed-up. We also propose a novel approach to discriminate between new events and rare recurring events. Some changes in network traffic occur on a regular basis and show periodic behaviour, and these are already known to network analysts. Thus, network managers may want to filter out these known recurring changes and prioritize their focus on new changes, given their limited time and resources. Our approach to this problem is based on second-order contrast pattern mining. Our work demonstrates the importance of higher-order contrast pattern mining in practice, and provides an effective method for finding such higher-order patterns in large datasets. Finally, based on the approaches that we introduced for contrast pattern mining in static datasets, we propose two novel methods to extract contrast patterns over high-dimensional data streams. We consider two incremental scenarios: (i) when one dataset changes over time and the other is a static reference dataset, and (ii) when both datasets change over a data stream. In our approach, instead of regenerating all patterns from scratch, we reuse the previously generated contrast patterns wherever possible to mine the new set of patterns. Using this technique, we avoid expensive recomputation and increase the efficiency and scalability of mining on dense and high-dimensional data streams.
    As a result of this scalability, our method can also find solutions for datasets where other algorithms cannot. In addition to the substantial improvements in the performance and scalability of our algorithm, we demonstrate that the quality of its generated patterns is comparable with that of the other algorithms. In summary, we propose several efficient contrast pattern mining approaches to extract significant changes between two static datasets or over a data stream. We also introduce a novel approach to distinguish new changes from recurring changes. Our experimental results on different real-life datasets demonstrate the improvements in performance of our proposed algorithms compared to the existing approaches.
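At its simplest, a contrast pattern is an itemset that is frequent in one dataset but rare in another. The brute-force sketch below (assumed support thresholds and toy flow records, not the thesis's scalable algorithms) illustrates that idea on two tiny "traffic" datasets.

```python
# Naive sketch: find itemsets frequent in dataset d1 but rare in d2.
from itertools import combinations

def support(itemset, dataset):
    """Fraction of records in dataset containing every item in itemset."""
    return sum(itemset <= rec for rec in dataset) / len(dataset)

def contrast_patterns(d1, d2, min_sup=0.5, max_sup=0.1, max_size=2):
    items = sorted({i for rec in d1 for i in rec})
    found = []
    for k in range(1, max_size + 1):
        for combo in combinations(items, k):
            p = frozenset(combo)
            if support(p, d1) >= min_sup and support(p, d2) <= max_sup:
                found.append(p)
    return found

# Toy flow records as attribute sets (invented for illustration).
normal = [{"port80", "tcp"}, {"port80", "udp"}, {"port443", "tcp"}]
attack = [{"port23", "tcp"}, {"port23", "tcp"}, {"port23", "udp"}]
print(contrast_patterns(attack, normal))
```

Real network records are high-dimensional, so this exponential enumeration is exactly what the thesis's pruning of minimal/redundant patterns and incremental stream mining are designed to avoid.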
  • Item
    The Role of Explanations in Enhancing Algorithmic Fairness Perceptions
    Afrashteh, Sadaf ( 2021)
    Decision-makers employ machine learning algorithms to gain insights from data and make better decisions. More specifically, advanced algorithms can help organizations classify their customers and predict their behavior at the highest accuracy levels. However, the opaque nature of those algorithms has raised concerns about potential unintended consequences, with unfair decisions at the center. Unfair decisions negatively impact both organizations and customers. Customers who are treated unfairly may lose their trust in organizations, and consequently organizations' reputations might be put at risk. Transparency provision has been introduced to organizations as a way of addressing the issue of algorithmic opacity. One approach to transparency provision is explaining to decision-makers how algorithms perform and how they reach a decision. Explanations can therefore influence decision-makers' perceptions of the fairness of algorithms' decisions. Understanding how explanations, and the way they are communicated, impact the fairness perceptions of organizational decision-makers is important. However, little research has focused on the role of explanations in enhancing fairness perceptions. I seek to address this research gap by answering the question: "How does explanation influence decision-makers' perceptions of fairness connected with an algorithm's decisions?" I conduct three studies to answer this question. In study 1, I conduct a conceptual study to explore the dimensions of explanations that need to be studied to understand the impact of explanations on fairness perceptions. In study 2, I develop a research model hypothesizing the role of perspective-taking, when communicating two different explanations to decision-makers, in their fairness perceptions. I conducted a 2x2 experiment to test the hypotheses.
    In study 3, I develop a research model hypothesizing the influence of explanation restrictiveness on decision-makers' perceptions of a decision's fairness. I conducted a 2x2 experiment to test the hypotheses. The findings of this research yield three important insights about explanations and their role in enhancing algorithmic fairness perceptions. First, I propose four dimensions of explanations that need to be considered in understanding fairness perceptions: the content types of explanations, the explanations' reasoning logic, the scope of explanations, and the explanation discourse. Second, taking the different perspectives of the organization or the customer when communicating different types of explanations leads to different impacts on perceptions of the fairness of the algorithm's performance and its decisions. Third, framing explanations in a less restrictive way creates space for decision-makers to engage more cognitively with the algorithmic decision-making and exercise their own judgment about it, which in turn influences their fairness perceptions.