Computing and Information Systems - Theses

Search Results

Now showing 1 - 5 of 5
  • Item
    Enhancing Deep Multimodal Representation: Online, Noise-robust and Unsupervised Learning
    Silva, Dadallage Amila Ruwansiri ( 2022)
    Information that is generated and shared today involves data of different modalities. These multimodalities are not limited to the well-known sensory media (e.g., text, image, video, and audio), but could be any abstract or inferred form of encoded information (e.g., the propagation network of a news article, or the sentiment of a text) that represents a different viewpoint of the same object. For machine learning models to be competitive with humans, they should be able to extract and combine information from these modalities. Thus, multimodal representation learning has emerged as a broad research domain that aims to understand complex multimodal environments while narrowing the heterogeneity gap among different modalities. Due to their potential for representing latent information in complex data structures, deep learning-based techniques have recently attracted much attention for multimodal representation learning. Nevertheless, most existing deep multimodal representation learning techniques lack the following: (1) the ability to continuously learn and update representations in a memory-efficient manner while being recency-aware and avoiding catastrophic forgetting of historical knowledge; (2) the ability to learn unsupervised representations for under-exploited multimodalities with complex data structures (e.g., temporally evolving networks) and high diversity (e.g., cross-domain multimodal data); and (3) the ability to directly serve as features for various real-world applications without fine-tuning on an application-specific labelled dataset. This thesis aims to bridge these research gaps in deep multimodal representation learning. In addition, this thesis addresses real-world applications involving multimodal data, such as misinformation detection, spatiotemporal activity modeling, and online market basket analysis.
The main contributions of this thesis include: (1) proposing two novel online learning strategies for learning deep multimodal representations, and two frameworks that use these strategies to address two real-world applications, namely user-guided spatiotemporal activity modeling (USTAR) and online market basket analysis (OMBA); (2) proposing METEOR, a memory- and time-efficient online representation learning algorithm that makes deep multimodal representations compact and scalable, to cope with the different data rates of real-world multimodal data streams; (3) developing an unsupervised framework to capture and preserve domain-specific and domain-shared knowledge in cross-domain data streams, and applying it to cross-domain fake news detection; (4) proposing an unsupervised model that learns representations for temporally evolving graphs by mimicking the future knowledge of an evolving graph at an early timestep, and developing a new framework, Propagation2Vec, based on the proposed objective functions for early fake news detection; and (5) developing a theoretically motivated, noise-robust unsupervised learning framework that can filter out the noise in (i.e., fine-tune) multimodal representations learned from general pretraining objectives without requiring a labelled dataset, and applying these findings to unsupervised fake news detection.
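As a minimal sketch of the online-learning idea described above (not the METEOR algorithm itself, and with hypothetical toy embeddings), a recency-aware multimodal representation can be maintained as an exponential moving average over incoming fused modality vectors, so recent observations dominate while historical knowledge decays gradually rather than being forgotten outright:

```python
def fuse(modality_vecs):
    """Naive multimodal fusion: concatenate per-modality embedding vectors."""
    return [x for vec in modality_vecs for x in vec]

def online_update(state, observation, decay=0.9):
    """Recency-aware update: exponential moving average of representations.
    A decay close to 1 retains more historical knowledge; lower values
    weight recent observations more heavily."""
    if state is None:
        return list(observation)
    return [decay * s + (1 - decay) * o for s, o in zip(state, observation)]

# A toy stream of (text_vec, image_vec) pairs describing the same object
state = None
for text_vec, image_vec in [([1.0, 0.0], [0.0, 1.0]),
                            ([0.0, 1.0], [1.0, 0.0])]:
    state = online_update(state, fuse([text_vec, image_vec]))
```

The update is constant in memory regardless of stream length, which is the property that makes such schemes attractive for data streams with differing rates.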
  • Item
    Improving Agile Sprint Planning Through Empirical Studies of Documented Information and Story Points Estimation
    Pasuksmit, Jirat ( 2022)
    In Agile iterative development (e.g., Scrum), effort estimation is an integral part of development iteration planning (i.e., sprint planning). Unlike traditional software development teams, an Agile team relies on a lightweight estimation method based on team consensus (e.g., Planning Poker), and the estimated effort is continuously refined (or changed) to improve estimation accuracy. However, such lightweight estimation methods are prone to inaccuracy, and late changes to the estimated effort may cause the sprint plan to become unreliable. Despite a large body of research, only a few studies have reviewed the reasons for inaccurate estimation and the approaches to improving effort estimation. We conducted a systematic literature review and found that the quality of the available information is one of the most common reasons for inaccurate estimation. We found several manual approaches that aim to help the team improve information quality and manage uncertainty in effort estimation. However, prior work reported that practitioners were reluctant to use them, as they added overhead to the development process. The goal of this thesis is to better understand and propose approaches that help the team achieve accurate estimation without introducing additional overhead. To achieve this goal, we conducted studies in two broad areas. We first conducted two empirical studies to investigate the importance of documented information for effort estimation and the impact of estimation changes in a project. In the first empirical study, we aimed to investigate the importance and quality of documented information for effort estimation. We conducted a survey of 121 Agile practitioners from 25 countries and found that documented information is considered important for effort estimation.
We also found that the documented information useful for effort estimation is often changed, and practitioners would re-estimate effort when documented information changed, even after the work had started. In the second empirical study, we aimed to better understand changes of estimated effort (in Story Points; SP). We examined the prevalence of SP changes, the accuracy of changed SP, and the impact of information changes on SP changes. We found that SP were not often changed after sprint planning. However, when SP were changed, the size of the change was relatively large and the changed SP could be inaccurate. We also found that SP changes often occurred alongside information changes related to scope modification. These findings suggest that a change of documented information can lead to a change of estimated effort, and the changed effort can have a large impact on the sprint plan. To mitigate the risk of an unreliable sprint plan, the documented information and the estimated effort should be verified and stabilized before finalizing the sprint plan. Otherwise, the team may have to re-estimate the effort and adjust the sprint plan. However, revisiting all documented information and estimated SP could be labor-intensive and may not comply with Agile principles. To help the team manage these uncertainties without introducing additional overhead, we proposed two automated approaches, DocWarn and SPWarn, to predict the documentation changes and SP changes that may occur after sprint planning. We built DocWarn and SPWarn using machine learning and deep learning techniques based on metrics that measure the characteristics of work items. We evaluated DocWarn and SPWarn on work items extracted from open-source projects. Our empirical evaluations show that DocWarn achieved an average AUC of 0.75 and SPWarn an average AUC of 0.73, both significantly higher than baseline models.
These results suggest that our approaches can predict future changes of documented information and SP based on currently available information. With our approaches, the team can be made aware of, and pay attention to, potential documentation and SP changes during sprint planning. Thus, the team can manage uncertainty and reduce the risk of unreliable effort estimation and sprint planning without additional overhead.
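The reported AUCs can be read as the probability that a work item that will later change is ranked above one that will not. As a hedged illustration (this is the standard rank-based Wilcoxon-Mann-Whitney computation, not the thesis's actual evaluation pipeline, and the scores below are hypothetical):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Wilcoxon-Mann-Whitney statistic:
    the probability that a randomly chosen positive (changed) item is
    scored above a randomly chosen negative (unchanged) item."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical change-risk scores for four work items (label 1 = later changed)
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
print(auc(labels, scores))  # 0.75: positives outrank negatives in 3 of 4 pairs
```

An AUC of 0.5 corresponds to random ranking, so values of 0.73-0.75 indicate the predictors carry genuine signal about upcoming changes.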
  • Item
    A Novel Perspective on Robustness in Deep Learning
    Mohaghegh Dolatabadi, Hadi ( 2022)
    Nowadays, machine learning plays a crucial role in our path toward automated decision-making. Traditional machine learning algorithms require careful, often manual, feature engineering to deliver satisfactory results. Deep Neural Networks (DNNs) have shown great promise in automating this process. Today, DNNs are the primary candidate for various applications, from object detection to high-dimensional density estimation and beyond. Despite their impressive performance, DNNs are vulnerable to various security threats. For instance, in adversarial attacks, an adversary can alter the output of a DNN for their benefit by adding carefully crafted yet imperceptible distortions to clean samples. As another example, in backdoor (Trojan) attacks, an adversary intentionally plants a loophole in the DNN during the learning process. This is often done by attaching specific triggers to benign samples during training, such that the model creates an association between the trigger and a particular intended output. Once such a loophole is planted, the attacker can activate the backdoor with the learned triggers and bypass the model. All these examples demonstrate the fragility of DNNs in their decision-making, which calls into question their widespread use in safety-critical applications such as autonomous driving. This thesis studies these vulnerabilities in DNNs from novel perspectives. To this end, we identify two key challenges in previous studies on the robustness of neural networks. First, while a plethora of existing algorithms can robustify DNNs against attackers to some extent, these methods often lack the efficiency required for use in real-world applications. Second, the true nature of these adversaries has been less studied, leading to unrealistic assumptions about their behavior.
This is particularly crucial as building defense mechanisms using such assumptions would fail to address the underlying threats and create a false belief in the security of DNNs. This thesis studies the first challenge in the context of robust DNN training. In particular, we leverage the theory of coreset selection to form informative weighted subsets of data. We use this framework in two different settings. First, we develop an online algorithm for filtering poisonous data to prevent backdoor attacks. Specifically, we identify two critical properties of poisonous samples based on their gradient space and geometrical representation and define an appropriate selection objective based on these criteria to select clean samples. Second, we extend the idea of coreset selection to adversarial training of DNNs. Although adversarial training is one of the most effective methods in defending DNNs against adversarial attacks, it requires generating costly adversarial examples for each training sample iteratively. To ease the computational burden of various adversarial training methods in a unified manner, we build a weighted subset of the training data that can faithfully approximate the DNN gradient. We show how our proposed solution can lead to robust neural network training more efficiently in both of these scenarios. Then, we touch upon the second challenge and question the validity of one of the widely used assumptions around adversarial attacks. More precisely, it is often assumed that adversarial examples stem from an entirely different distribution than clean data. To challenge this assumption, we resort to generative modeling, particularly Normalizing Flows (NF). Using an NF model pre-trained on clean data, we demonstrate how one can create adversarial examples closely following the clean data distribution. 
We then use our approach against state-of-the-art adversarial example detection methods to show that methods which explicitly assume a difference between the distributions of adversarial and clean data can suffer greatly. Our study reveals the importance of correct assumptions in treating adversarial threats. Finally, we extend the distribution modeling component of our adversarial attacker to increase its density estimation capabilities. In summary, this thesis advances the current state of robustness in deep learning by i) proposing more effective training algorithms against backdoor and adversarial attacks, and ii) challenging a prevalent misconception about the distributional properties of adversarial threats. Through these contributions, we aim to help create more robust neural networks, which is crucial before their deployment in real-world applications. Our work is supported by theoretical analysis and by experimental investigations reported in our publications.
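As a hedged, much-simplified sketch of the coreset idea underlying the efficiency contributions above (the thesis's actual selection objectives for backdoor filtering and adversarial training are more involved; the toy gradients here are hypothetical), one can greedily pick a small subset of samples whose running mean gradient tracks the mean gradient of the full dataset:

```python
def greedy_coreset(grads, k):
    """Greedily select k gradient vectors whose running mean best
    approximates the mean gradient of the full dataset."""
    n, d = len(grads), len(grads[0])
    target = [sum(g[j] for g in grads) / n for j in range(d)]
    chosen, mean = [], [0.0] * d
    for step in range(1, k + 1):
        best_i, best_err = None, float("inf")
        for i in range(n):
            if i in chosen:
                continue
            cand = [(mean[j] * (step - 1) + grads[i][j]) / step
                    for j in range(d)]
            err = sum((c - t) ** 2 for c, t in zip(cand, target))
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
        mean = [(mean[j] * (step - 1) + grads[best_i][j]) / step
                for j in range(d)]
    return chosen

# The sample whose gradient is closest to the dataset mean is picked first
grads = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(greedy_coreset(grads, 1))  # [2]
```

Training on such a weighted subset means costly per-sample work (e.g., generating adversarial examples) is done for only k samples instead of n, which is the source of the efficiency gain.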
  • Item
    Energy and Time Aware Scheduling of Applications in Edge and Fog Computing Environments
    Goudarzi, Mohammad ( 2022)
    The Internet of Things (IoT) paradigm plays a principal role in the advancement of many application scenarios, such as healthcare, smart cities, transportation, entertainment, and agriculture, which significantly affect the daily lives of humans. The smooth execution of these applications requires sufficient computing and storage resources to support the massive amount of data generated by IoT devices. However, IoT devices are intrinsically resource-limited and are not capable of efficiently processing and storing large volumes of data. Hence, IoT devices require surrogate resources for the smooth execution of their heterogeneous applications, which can be either computation-intensive or latency-sensitive. Cloud datacenters are among the potential resource providers for IoT devices. However, as they reside multiple hops away from IoT devices, they cannot efficiently execute IoT applications, especially latency-sensitive ones. The Fog computing paradigm, which extends Cloud services to the edge of the network within the proximity of IoT devices, offers low-latency execution of IoT applications. Hence, it can improve the response time and service startup time of IoT applications and reduce network congestion. It can also reduce the energy consumption of IoT devices by minimizing their active time. However, Fog servers are resource-limited compared to Cloud servers, preventing them from executing all types of IoT applications, especially extremely computation-intensive ones. Hence, Cloud servers are used to support Fog servers, creating a robust computing environment with heterogeneous types of resources. Consequently, the Fog computing paradigm is highly dynamic, distributed, and heterogeneous. Thus, without efficient scheduling techniques for the management of IoT applications, it is difficult to harness the full potential of this computing paradigm for different IoT-driven application scenarios.
This thesis focuses on different scheduling techniques for the management of IoT applications in Fog computing environments while considering: a) IoT devices' characteristics; b) the structure of IoT applications; c) the context of resource providers; d) the networking characteristics of the Fog servers; e) the execution cost of running IoT applications; and f) the dynamics of the computing environment. This thesis advances the state of the art by making the following contributions:
    1. A comprehensive taxonomy and literature review on the scheduling of IoT applications in Fog computing environments from different perspectives, namely application structure, environmental architecture, optimization properties, decision engine characteristics, and performance evaluation.
    2. A distributed Fog-driven scheduling technique for network resource allocation in dense and ultra-dense Fog computing environments to optimize throughput and satisfy users' heterogeneous demands.
    3. A distributed scheduling technique for the batch placement of concurrent IoT applications to optimize the execution time of IoT applications and the energy consumption of IoT devices.
    4. A distributed application placement and migration management technique to optimize the execution time of IoT applications, the energy consumption of IoT devices, and migration downtime in hierarchical Fog computing environments.
    5. A Distributed Deep Reinforcement Learning (DDRL) technique for scheduling IoT applications in highly dynamic Fog computing environments to optimize the execution time of IoT applications and the energy consumption of IoT devices.
    6. A system software for scheduling IoT applications in multi-Cloud Fog computing environments.
    7. A detailed study outlining challenges and new research directions for the scheduling of IoT applications in Fog computing environments.
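A minimal sketch of the time-and-energy trade-off at the heart of such scheduling decisions (the server parameters and the weighted-sum objective here are illustrative assumptions, not the thesis's actual system models):

```python
def best_placement(task_cycles, data_bits, servers, w=0.5):
    """Pick the server minimizing a weighted sum of completion time and
    the IoT device's transmission energy. `servers` maps a name to
    (cpu_hz, uplink_bps, device_tx_power_w)."""
    best, best_cost = None, float("inf")
    for name, (cpu_hz, uplink_bps, tx_power_w) in servers.items():
        t_tx = data_bits / uplink_bps     # time to offload the input data
        t_exec = task_cycles / cpu_hz     # time to execute remotely
        energy = tx_power_w * t_tx        # device energy spent transmitting
        cost = w * (t_tx + t_exec) + (1 - w) * energy
        if cost < best_cost:
            best, best_cost = name, cost
    return best

# A latency-sensitive task: the nearby Fog server beats the distant Cloud
servers = {
    "fog":   (2e9, 50e6, 0.5),   # slower CPU, fast one-hop link
    "cloud": (8e9, 5e6, 0.5),    # faster CPU, slow multi-hop link
}
print(best_placement(task_cycles=1e9, data_bits=8e6, servers=servers))
```

Reversing the workload profile (a compute-heavy task with little input data) flips the decision toward the Cloud, illustrating why heterogeneous Fog-Cloud resources need workload-aware scheduling.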
  • Item
    Explainable Reinforcement Learning Through a Causal Lens
    Mathugama Babun Appuhamilage, Prashan Madumal ( 2021)
    This thesis investigates methods for explaining and understanding, from a causal perspective, how and why reinforcement learning agents select actions. Understanding the behaviours, decisions and actions exhibited by artificially intelligent agents has been a central theme of interest since the inception of agent research. As systems grow in complexity, agents' underlying reasoning mechanisms can become opaque and their intelligibility to humans diminished, which can have negative consequences in high-stakes and highly collaborative domains. The explainable agency of an autonomous agent can aid in transferring knowledge of this reasoning process to the user to improve intelligibility. If we are to build effective explainable agency, a careful inspection of how humans generate, select and communicate explanations is needed. Explaining the behaviour and actions of sequential decision-making reinforcement learning (RL) agents introduces challenges such as handling long-term goals and rewards, in contrast to the one-shot explanations on which the explainability literature has largely focused. Taking inspiration from the cognitive science and philosophy literature on the nature of explanation, this thesis presents a novel explainable model, action influence models, that can generate causal explanations for reinforcement learning agents. A human-centred approach is followed to extend action influence models to handle distal explanations of actions, i.e. explanations that present future causal dependencies. To facilitate an end-to-end explainable agency, an action influence discovery algorithm is proposed to learn the structure of the causal relationships from the RL agent's interactions. Further, a dialogue model is introduced that can instantiate the interactions of an explanation dialogue.
The original work presented in this thesis reveals how a causal and human-centred approach can bring forth a strong explainable agency in RL agents.
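A hedged toy illustration of the flavour of a causal "why" explanation (the variables and graph below are hypothetical; the thesis's action influence models are learned from the RL agent's interactions and are structurally richer than this linear chain):

```python
# Toy action influence graph: edges point from a cause to the state
# variable(s) it influences, terminating at the agent's goal variable.
influence = {
    "build_barracks": ["barracks_count"],
    "barracks_count": ["troop_production"],
    "troop_production": ["win_probability"],
}

def explain_why(action, goal, graph):
    """Answer 'Why action X?' by tracing the causal chain from the
    action to the goal variable (following the first-listed effect
    at each step)."""
    chain, node = [action], action
    while node in graph:
        node = graph[node][0]
        chain.append(node)
        if node == goal:
            return " -> ".join(chain)
    return None

print(explain_why("build_barracks", "win_probability", influence))
# build_barracks -> barracks_count -> troop_production -> win_probability
```

The chain makes the agent's long-term rationale legible: the immediate action is justified by the downstream variables it influences, up to the goal, which is the kind of distal explanation the thesis targets.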