Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 239
  • Item
    Factors Motivating Users of Social Media to Engage with Images about Living with Chronic Illness
    Alamri, Hajar Mohammed A ( 2023-04)
    With the increased uptake of social media platforms, new opportunities arise for utilising these platforms to benefit individuals with chronic health issues. In this research, I investigated how and why individuals share and engage with images online that present everyday life with chronic illness. The research consisted of three studies, each investigating a different aspect of engagement with images. Study one investigated user behaviour on social media by analysing images posted in the online community #typeonediabetes. The findings showed that users mostly shared and interacted with images containing faces and medical aids. Study two focused on understanding what motivates participants to engage with images about living with chronic illness. The study collected data through photo-elicitation interviews with 20 active social media users who have type one diabetes or multiple sclerosis. The findings identified six motivations that drive people to share and explore images about life with chronic illness: informational need, social need, emotional need, the desire for transparency, the desire for advocacy, and the impact of hashtags. The study also found that participants were particularly interested in interacting with images that communicated shared identity and emotional content. This finding supported the findings of study one by providing more insight into why participants engaged with this type of content. Based on the findings from study two, I developed a framework of user engagement with images about chronic illness (USENIC) to describe the different roles users may play during engagement with images in online health communities (OHCs). Each role is identified by the motives and benefits of engagement. Study three was an online survey with 297 respondents, which aimed to validate the findings from study two.
In this study, five-point Likert scales were used to examine a hypothesised model describing the relationships between user intention to engage with images and the motivations of seeking support, providing support, transparency, advocacy, and hashtags. The relationships between intention to engage with images and the identified motivations were supported, except for transparency. The USENIC framework was modified based on the findings of study three. This research contributes to knowledge by proposing the USENIC framework, which describes why and how users explore and share images specifically presenting life with chronic illness. The framework describes three roles users may play in image-based OHCs: explorer, sharer, and leader. The research addresses different factors that make images interesting for a chronically ill audience. The research has limitations related to sample size and diversity. As the sample was drawn from two chronic illnesses, type one diabetes and multiple sclerosis, future work should include larger, more diverse samples and new techniques to improve the measurement of user engagement.
  • Item
    Decentralised Intrusion Detection in Multi-Access Edge Computing
    Sharma, Rahul ( 2023-03)
    With the advent of Fifth Generation (5G) mobile networks, a diverse range of new computer networking technologies is being devised to meet the stringent demands of applications that require ultra-low latency, high bandwidth and geolocation-based services. Multi-Access Edge Computing (MEC) is a prominent example of such an emerging technology, providing cloud computing services at the edge of the network using mobile base stations. This architectural shift of services from centralised cloud data centres to the network edge helps reduce bandwidth usage and improve response time, meeting the ultra-low latency requirements laid out for 5G. However, MEC also inherits some of the security vulnerabilities affecting traditional networks and cloud computing, such as coordinated cyber attacks. This highlights a clear need for security mechanisms like Intrusion Detection Systems (IDS), specifically Collaborative Intrusion Detection Systems (CIDS), which have proven effective in identifying attacks spread across multiple locations. However, identifying the right CIDS model for MEC is not straightforward due to the trade-offs between factors such as detection accuracy, network overhead, and computation and memory overhead. Most CIDS solutions for MEC use a cloud-based backend for offloading their heavy data processing tasks, introducing latency. Intrusion detection becomes even more complicated when modern security layers like zero trust are added to MEC environments. These challenges highlight the need for purely edge-based CIDS architectures and mechanisms for MEC that can integrate well with modern security layers, an area that is yet to be explored. This thesis addresses these challenges by first introducing a practical use-case scenario: a car parking application running in MEC clusters. We then outline different attack patterns, both volumetric and stealthy, used to compromise our application.
This lays the foundation for our subsequent chapters, where we propose architectures and techniques to detect these attacks in a MEC environment. We then discuss the characteristics relevant to different CIDS deployment models, namely a Centralised CIDS and a Distributed CIDS using Distributed Hash Tables (DHT), in a purely edge-based setting, evaluating them with a real-world worm dataset. Through experimentation, our results outline the trade-offs of these edge-based CIDS architectures and highlight potential issues, such as the DHT being prone to a memory bottleneck when attacks are focused on one or a few nodes, which also limits its detection accuracy. In contrast, the Centralised CIDS provides high detection accuracy but has a central point of failure that limits scalability. To address the bottlenecks identified in the Centralised CIDS and the Distributed CIDS using DHT, we propose a Hybrid CIDS. It combines the Centralised CIDS, which performs attack detection in localised MEC clusters, with the Distributed CIDS using DHT, which shares relevant focused context globally across multiple clusters. We evaluated its performance using a real-world worm dataset. Our results demonstrate that the Hybrid CIDS can detect both distributed and focused attacks with high detection accuracy, while removing the central point of failure in the system. It also addresses the memory bottleneck of the DHT by controlling the volume of data ingested into the DHT network through an exponentially increasing threshold mechanism. This limits data storage needs without compromising the detection accuracy of the global system. Finally, we discuss the addition of modern security layers like zero trust in MEC environments and potential gaps in their security posture. We show that malicious traffic passing the authentication and authorisation controls of zero trust setups could compromise services in a stealthy manner.
Since MEC is a platform for third-party application developers to deploy their applications at scale, a vulnerability in one application or system setup could easily be replicated across multiple clusters. To detect such malicious behaviour in zero-trust-enabled MEC at scale, we propose a tree-based probabilistic CIDS architecture called Prob CIDS. This architecture uses probabilistic dissemination of alerts based on the severity of the event, so that low-severity events can still be correlated without overwhelming the CIDS. We evaluated its performance using telemetry data generated from a real-world application deployed across multiple zero-trust-based clusters. Our results demonstrate that Prob CIDS achieves high detection accuracy for both volumetric and stealthy attacks that pass the security controls of zero trust, compared to other CIDS architectures. We also demonstrate that features like the damping factor used in Prob CIDS can effectively address the memory bottleneck of DHTs, while reducing false positives in a drifting traffic scenario.
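The severity-based alert dissemination at the heart of Prob CIDS can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis's implementation: the `forward_probability` function, its use of the damping factor as a baseline forwarding probability, and the alert representation are all hypothetical.

```python
import random

def forward_probability(severity, damping=0.5):
    """Probability of forwarding an alert up the detection tree.

    `severity` lies in [0, 1]. High-severity alerts are always forwarded;
    low-severity alerts are sampled, so they can still be correlated
    globally without overwhelming the CIDS. `damping` is a hypothetical
    baseline knob, loosely inspired by the damping factor in the thesis.
    """
    return min(1.0, damping + (1.0 - damping) * severity)

def disseminate(alert, parents, rng=random.random):
    """Forward `alert` to each parent node with severity-dependent probability."""
    return [p for p in parents if rng() < forward_probability(alert["severity"])]

# A maximum-severity alert is always forwarded to every parent.
critical = {"id": 1, "severity": 1.0}
assert disseminate(critical, ["root"]) == ["root"]
```

Because low-severity alerts are only sampled, each node ingests a bounded fraction of the global alert stream, which is the intuition behind correlating stealthy events without flooding the network.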
  • Item
    Table Semantic Learning for Chemical Patents
    Zhai, Zenan ( 2023-03)
    New chemical compounds discovered in commercial research are usually first disclosed in patents. Only a small fraction of these new compounds appear in the scientific literature, and only after a lengthy delay, on average 1-3 years after disclosure in patents. This makes chemical patents crucial and timely resources for novelty checking, validation, and understanding compound prior art. Hence, patents are an important knowledge resource for researchers in industry and academia. Natural Language Processing (NLP) is developing rapidly and has achieved strong performance on a wide range of information extraction tasks. However, the NLP community mainly focuses on unstructured text in the general domain. There is still a lack of datasets and information extraction methods for processing semi-structured text and chemical patents. In this thesis, we focus on improving automatic table semantic learning for chemical patents. Most modern NLP methods use pre-trained word embeddings as part of their inputs, and it has been shown that word embeddings pre-trained on in-domain data can improve the performance of models that take them as inputs. Hence, we start by laying the foundation for the evaluation of table semantic learning models on chemical patents by pre-training word embeddings on in-domain data. Our experiments on a collection of chemical patent datasets show that the created embeddings help improve performance on named-entity recognition, co-reference resolution, and table semantic classification tasks. Next, to address the lack of training data, we present a new dataset for the semantic classification task in chemical patents. Baseline results generated by existing table semantic learning methods show that neural machine learning models outperform non-neural baselines. However, these approaches sacrifice either the 2D structure of tables or the sequential information between cells.
Finally, we propose a novel approach that addresses this limitation. The proposed method adopts a novel quad-directional recurrent layer to capture sequential information between neighboring cells in both the vertical and horizontal directions. We then combine it with an image processing model based on a convolutional neural network that captures regional features in the 2D structure. We show that the proposed methods perform better than existing methods on the semantic classification of chemical patent tables. To further show the efficacy of the model, we adapt it to the table cell-level syntactic classification task and show that it achieves strong performance on a novel web table dataset we created for this task.
  • Item
    Adapting Clinical Natural Language Processing to Contexts: Task, Framework, and Data Bias
    Liu, Jinghui ( 2023-04)
    Clinical texts contain rich amounts of valuable information about real-world patients and clinical practices that can be utilized to improve clinical care. Mining information from clinical text through Natural Language Processing (NLP) is a promising research field that has attracted much attention. Recent NLP approaches usually treat clinical texts as mere corpora from just “another” textual domain. However, clinical text is generated to serve multiple purposes in the healthcare setting and encodes variations and biases from clinical practice that are often not obvious to NLP researchers. This leads to three types of unsatisfactory applications of clinical NLP. First, some clinical NLP tasks provide solutions with limited applicability to existing clinical decision-making and clinical workflows, and they often target individual patients instead of a patient cohort. Second, the output of many clinical NLP models is often a single number or label, a framing that tends to replace rather than augment clinical reasoning in the care process. Third, most recent clinical NLP systems are trained end-to-end to manage the complexity of human language, which neglects the various biases that exist in clinical text. This thesis aims to address these three aspects of clinical NLP through three case studies: 1) proposing a prediction task to support clinical resource management at the cohort level, 2) examining the feasibility of patient retrieval as supplementary output for predictive analysis, and 3) evaluating the impact of clinical documentation practices on NLP modeling. The results of these studies demonstrate the importance of taking the clinical context into consideration when designing tasks, developing models, and preparing data for effective and reliable clinical NLP systems.
  • Item
    Microservices-based Internet of Things Applications Placement in Fog Computing Environments
    Pallewatta, Pallewatta Kankanamge Samodha Kanchani ( 2023-02)
    The Internet of Things (IoT) paradigm is rapidly improving various application domains such as healthcare, smart cities, Industrial IoT (IIoT), and intelligent transportation by interweaving sensors, actuators and data analytics platforms to create smart environments. Initially, the cloud-centric IoT was introduced as a viable solution for processing and storing the massive amounts of data generated by IoT devices. However, with rapidly increasing data volumes, data transmission from geo-distributed IoT devices to the centralised Cloud incurs high network congestion and high latency. Thus, cloud-centric IoT often fails to satisfy the Quality of Service (QoS) requirements of latency-sensitive and bandwidth-hungry IoT application services. The Fog computing paradigm extends cloud-like services towards the edge of the network, thus offering low-latency service delivery. However, Fog nodes are distributed, heterogeneous and resource-constrained, creating the need to utilise both Fog and Cloud resources to execute IoT applications in a QoS-aware manner. Meanwhile, MicroService Architecture (MSA) has emerged as a powerful application architecture capable of satisfying the development and deployment needs of rapidly evolving IoT applications. The fine-grained modularity of microservices and their independently deployable and scalable nature, along with the lack of centralised management, demonstrate immense potential for harnessing the power of distributed Fog and Cloud resources to meet the QoS requirements of IoT applications. Furthermore, the loosely coupled nature of microservices enables the dynamic composition of distributed microservices to achieve the diverse performance requirements of IoT applications while utilising distributed computing resources. To this end, efficient placement of microservices plays a vital role, and scalable placement techniques can use MSA characteristics to harvest the full potential of the Fog computing paradigm.
This thesis investigates novel placement techniques and systems for microservices-based IoT applications in Fog computing environments. The proposed approaches identify MSA characteristics to overcome challenges within Fog computing environments and use them to fulfil heterogeneous QoS requirements of IoT application services in terms of service latency, budget, throughput and reliability, while utilising Fog and Cloud resources in a balanced manner. This thesis advances the state of the art in Fog computing by making the following key contributions:
    1. A comprehensive taxonomy and literature review on the placement of microservices-based IoT applications in Fog computing environments, considering different aspects, namely modelling microservices-based applications, creating application placement policies, microservice composition, and performance evaluation.
    2. A distributed placement technique for scalable deployment of microservices to minimise the latency of application services and the network usage due to IoT data transmission.
    3. A robust technique for batch placement of microservices-based IoT applications, which considers the placement of a set of applications simultaneously to optimise the QoS satisfaction of application services in terms of makespan, budget and throughput while dynamically utilising Fog and Cloud resources.
    4. A reliability-aware placement technique for proactive redundant placement of microservices to improve reliability satisfaction in a throughput- and cost-aware manner.
    5. A software framework for microservices-based IoT application placement and dynamic composition across federated Fog and Cloud computing environments.
  • Item
    Differentially Private Data Analysis
    Wu, Hao ( 2023-05)
    As personal data records are collected at an unprecedented scale, privacy leakage can affect millions of people. The public is becoming more aware of privacy protection, and governments are enforcing increasingly stringent regulations. How can we design algorithms that learn meaningful information without disclosing sensitive personal information? Differential privacy is emerging as the de facto standard for privacy-preserving data analysis. The model ensures that an algorithm's output varies little with changes to any individual's data, making it difficult to infer personal information. At the same time, the model allows algorithms to preserve aggregate information about the population, permitting overall statistics to be learned. This thesis aims to design effective and efficient data analysis algorithms under the differential privacy model, with a focus on providing solid mathematical guarantees in two key aspects: privacy-utility trade-offs and algorithmic efficiency. Through this research, we address three open questions in the field. Firstly, we investigate whether it is possible to construct frequency oracle and succinct histogram algorithms in the local model of differential privacy that achieve an asymptotically optimal estimation error for the frequency estimation problem without relying on error-correcting codes. Secondly, we explore the possibility of constructing frequency tracking algorithms whose estimation errors scale sub-linearly with the number of data changes under the local model of differential privacy. Finally, we investigate whether it is feasible to construct algorithms for the differentially private top-k selection problem with a sub-linear number of data accesses on an existing data management system.
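The local model of differential privacy referred to above can be illustrated with the classic randomized response mechanism for a single bit. This is a textbook sketch, not one of the frequency oracle or succinct histogram constructions developed in the thesis:

```python
import math
import random

def randomized_response(bit, epsilon, rng=random.random):
    """Release one private bit under epsilon-local differential privacy.

    Reports the true bit with probability p = e^eps / (1 + e^eps)
    and the flipped bit otherwise.
    """
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng() < p else 1 - bit

def estimate_frequency(reports, epsilon):
    """Unbiased estimate of the fraction of users whose true bit is 1.

    Inverts the expected bias: E[mean(reports)] = p*f + (1 - p)*(1 - f).
    """
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

Each user's report alone reveals almost nothing about their true bit, yet the aggregate frequency estimate converges to the population statistic as the number of reports grows, which is exactly the privacy-utility trade-off the thesis analyses.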
  • Item
    Mitigating the risk of knowledge leakage in knowledge intensive organizations: a mobile device perspective
    Agudelo Serna, Carlos Andres ( 2023-02)
    In the current knowledge economy, knowledge represents the most strategically significant resource of organizations. Knowledge-intensive activities advance innovation and create and sustain economic rent and competitive advantage. In order to sustain competitive advantage, organizations must protect knowledge from leakage to third parties, particularly competitors. However, the number and scale of leakage incidents reported in news media as well as in industry whitepapers suggest that modern organizations struggle with the protection of sensitive data and organizational knowledge. The increasing use of mobile devices and technologies by knowledge workers across the organizational perimeter has dramatically increased the attack surface of organizations, and the corresponding level of risk exposure. While much of the literature has focused on technology risks that lead to information leakage, human risks that lead to knowledge leakage are relatively understudied. Further, not much is known about strategies to mitigate the risk of knowledge leakage through mobile devices, especially considering the human aspect. Specifically, this research study identified three gaps in the current literature: (1) a lack of in-depth studies that provide specific strategies for knowledge-intensive organizations based on their varied risk levels, as most of the analysed studies present high-level strategies in a generalised manner and fail to identify specific strategies for different organizations and risk levels; (2) a lack of research into the management of knowledge in the context of mobile devices; and (3) a lack of research into the tacit dimension of knowledge, as the majority of the literature focuses on formal and informal strategies to protect explicit (codified) knowledge.
To address the aforementioned gaps, this research study adopted an exploratory, managerial practice-based perspective to investigate how knowledge-intensive organizations manage their risk of knowledge leakage caused by the use of mobile devices. Hence the main research question: How can knowledge-intensive (KI) organizations mitigate the knowledge leakage risk (KLR) caused by the use of mobile devices? To answer the primary research question, the following secondary questions are also addressed:
    1. What strategies are used by knowledge-intensive organizations to mitigate the risk of knowledge leakage caused by the use of mobile devices?
    2. How does the perceived KLR level inform the strategies used by KI organizations?
    3. What knowledge assets do knowledge-intensive organizations protect from knowledge leakage?
    The main contribution of this research study is the development of a theory-informed and empirically grounded classification framework that guides organizations in mitigating their leakage risk and improving their knowledge protection capabilities. The framework was developed through the application of a research model informed by a comprehensive review of the relevant literature to identify the key concepts and factors relevant to the research aims and questions. These concepts and factors were then organized into a conceptual research model, which served as the foundation for the classification framework. The initial development of the framework was based on theory, i.e., the knowledge-based view of the firm, and incorporated components from the mobile computing literature, specifically the mobile usage contexts extending from the social context, the interaction framework model of context, and the integrative model of IT business value framework. The mobile usage contexts were grouped into human, enterprise, and technological factors.
The research study collected qualitative data from twenty knowledge and information security professionals in managerial and executive positions at different knowledge-intensive organizations within Australia that had sanctioned mobile device policies in place. The data was collected through semi-structured interviews and supplementary documentation to improve data triangulation and increase the reliability and validity of the findings. The data collection process followed the Gioia methodology, which required continuous data comparison involving simultaneous data analysis and exploration. Based on the findings from the data analysis, a set of strategies was developed and organized into a hierarchical structure to form the classification framework. These constructs were arranged based on their relevance and importance to the research question, and their ability to capture the key concepts and factors identified in the conceptual research model. The collected data then informed the further development and extension of the initial conceptual framework into a classification scheme of organizational strategies directed toward the protection of organizational knowledge and leakage mitigation mechanisms followed by knowledge-intensive organizations, based on the nature of the knowledge (tacit vs explicit) and the risk level. This study's findings also contribute to the knowledge management and knowledge protection literature:
    1. By providing a synthesis of specific mitigation strategies and tactics that knowledge-intensive organizations can implement, categorized into enterprise, human and technological factors.
    2. By proposing a classification scheme built on a research framework grounded in the information security, knowledge management, knowledge protection, and mobile computing literature, which can be extended to further investigate the leakage phenomenon.
    3. By presenting a combination of more innovative approaches from other domains that address tacit knowledge, as highlighted by the evidence.
    4. By providing the adaptation of several strategies from the information security literature into the knowledge protection literature, such as zero trust, deception, active defence, active reconnaissance, and behaviour analytics.
    5. By presenting protection strategies directly targeting mobility, i.e., mobile workers and mobile devices.
  • Item
    Use of Reinforcement Learning in Self-tuning Physical Database Design Structures under Dynamic and Unexplored Workloads
    Perera, Warnakula Patabandige Romesh Malinga ( 2023-02)
    Automating physical database design has remained a long-term interest in database research due to the substantial performance gains afforded by optimised structures. However, despite significant progress, most of today's commercial solutions are highly manual, requiring offline invocation by database administrators (DBAs), who are expected to identify and supply representative training workloads. This status quo is untenable: identifying representative static workloads is no longer realistic, and physical design tools remain susceptible to the query optimiser's cost misestimates. We propose a self-driving approach to online physical design structure (PDS) selection that eschews the DBA and query optimiser and learns the benefits of viable structures through strategic exploration and direct performance observation. We view the problem as one of sequential decision-making under uncertainty, specifically within the bandit learning setting. Multi-armed bandits (MABs) balance exploration and exploitation to provably guarantee an average performance that converges to that of policies that are optimal in hindsight. In this thesis, we first focus on the narrowed scope of index selection under analytical workloads. We present a simplified bandit framework that effectively tunes indices, outperforming a state-of-the-art commercial tuning tool with a 75% performance gain on shifting and ad-hoc workloads and a 28% performance gain on static workloads. Furthermore, our bandit framework outperforms a deep reinforcement learning (RL) solution in convergence speed and performance volatility, providing up to a 58% performance gain. We extend the bandit framework to incorporate hybrid transactional and analytical processing (HTAP) workloads. HTAP environments are especially challenging for index tuning, as the bandit must consider possible performance regression on online transaction processing (OLTP) workloads.
In HTAP environments, our solution provides up to a 59% performance gain on shifting workloads and a 51% gain on static workloads. Finally, we consider selecting several physical design structures, such as indices and materialised views, whose combination influences overall system performance in a non-linear manner. While the simplicity of combining the results of iterative searches for individual PDSs may be appealing, such a greedy approach may yield vastly suboptimal results compared to an integrated search. We propose a new self-driving approach (HMAB) based on hierarchical multi-armed bandit learners, which can work in an integrated space of multiple PDSs while avoiding the full cost of a combinatorial search. While primarily depending on direct performance observations through strategic exploration, HMAB carefully leverages optimiser knowledge to prune less helpful exploration paths. To the best of our knowledge, HMAB is the first learned system to tune both indices and materialised views in an integrated manner. Our solution enjoys superior empirical performance relative to state-of-the-art commercial physical database design tools that search over the integrated space of materialised views and indices. Specifically, HMAB achieves up to a 96% performance gain over a state-of-the-art commercial physical database design tool when running industrial benchmarks. Furthermore, we demonstrate HMAB's superiority over nine index-tuning solutions.
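The exploration-exploitation balance underlying the bandit framework can be illustrated with the textbook UCB1 algorithm, where each arm stands for a candidate index configuration and the reward for an observed execution-time improvement. This is a generic sketch, not the contextual combinatorial bandit developed in the thesis:

```python
import math

class UCB1:
    """Textbook UCB1 bandit; arms model candidate index configurations."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms     # times each arm was played
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.t = 0                     # total rounds played

    def select(self):
        """Pick the arm with the highest upper confidence bound."""
        self.t += 1
        for arm, count in enumerate(self.counts):
            if count == 0:             # play every arm once first
                return arm
        bounds = [v + math.sqrt(2.0 * math.log(self.t) / c)
                  for v, c in zip(self.values, self.counts)]
        return max(range(len(bounds)), key=bounds.__getitem__)

    def update(self, arm, reward):
        """Fold an observed reward into the arm's running mean."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# With deterministic rewards, the bandit concentrates on the better arm.
bandit = UCB1(2)
for _ in range(200):
    arm = bandit.select()
    bandit.update(arm, 1.0 if arm == 1 else 0.2)
assert bandit.counts[1] > bandit.counts[0]
```

UCB-style confidence bounds are what give the provable convergence-to-hindsight-optimal guarantee the abstract alludes to; in an index-tuning setting, the reward would be the measured execution-time improvement rather than an abstract number.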
  • Item
    Incorporating structured and unstructured information for geospatial question answering
    Li, Haonan ( 2022-12)
    In daily life, people ask questions involving geographic entities or concepts; we call them geospatial questions. Automatically answering these questions is challenging for machines because of the difficulties in: (1) identifying geographic entities or concepts; (2) interpreting spatial and non-spatial constraints in questions; and (3) incorporating various knowledge sources for language processing and spatial reasoning. In this thesis, we aim to tackle these problems using deep learning and natural language processing techniques. We first investigate a fundamental task for geospatial language processing: toponym (place name) detection and disambiguation. We propose an attention-based neural network model using character-level word representations for toponym detection. We demonstrate that character-level information is essential for toponym detection, considering language irregularity (e.g., misspellings and abbreviations). After detecting the toponyms, we devise a feature-based model to assign each toponym a unique ID (with geo-coordinates) from a gazetteer. During our investigation of the toponym detection approach, we found that about 20% of toponyms are used metonymically, where a toponym refers not to a place but to a closely related thing or concept. However, general toponym detection models do not distinguish them. We hypothesize that a good metonymy resolution model should benefit toponym detection and, further, geospatial question answering. For metonymy resolution, we argue that whether a toponym (i.e., target word) refers to a place should be inferred from context rather than from the toponym itself, and propose a pretrained language model (PLM)-based target word masking approach that achieves the state of the art over five datasets.
We further verify our hypothesis by integrating the proposed metonymy resolution model into an end-to-end toponym resolution model, and using the toponym resolution model to tag the questions and related passages in later question answering (QA) tasks. For geospatial question answering, we argue that different question answering systems can be built based on different knowledge sources, including unstructured web documents and structured knowledge bases. We investigate the current state of geospatial QA systems using different knowledge sources. The lack of resources is the main bottleneck for unstructured knowledge source-based QA (i.e., IR-based QA). We find that there is no IR-based geospatial QA dataset, and existing open-domain datasets do not adequately cover the difficulties of geospatial questions, especially questions that have to be answered with multiple spans extracted from a document. To bridge this gap, we propose a new reading comprehension task with a dataset that enables models to predict multi-span answers. We demonstrate that a neural network model trained on our dataset can effectively answer multi-span questions, including but not limited to questions in the geospatial domain. For structured KB-based QA, we propose a pipelined method consisting of three steps: (1) detecting geospatial semantic concepts from questions; (2) modeling geospatial relationships between the concepts into a semantic graph; and (3) translating the semantic graph into a knowledge base query. We use a neural sequence tagger for the first step and a neural semantic parser for the second step. Experimental results show that neural network-based approaches are better at capturing semantic features and result in better question answering systems. We finally argue that structured and unstructured knowledge sources can complement each other, and more complex geospatial questions can be answered by incorporating different knowledge sources. 
We propose a model to integrate structured geo-coordinates and unstructured place descriptions to represent a place, and demonstrate that the model combining multiple knowledge sources outperforms models utilizing a single knowledge source in a real-world Point-of-Interest (POI) recommendation QA task. Overall, this thesis contributes to geospatial question answering and a series of fundamental tasks relating to toponyms. We demonstrate that these tasks can be effectively and efficiently approached by applying advanced deep learning and natural language processing techniques. The proposed methods, datasets, and findings can be used or built on in future geospatial question answering research.
  • Item
    Scalable and Explainable Time Series Classification
    Cabello Wilson, Nestor Stiven ( 2022)
    Time series data are ubiquitous, and the explosion of sensor technologies in industrial applications has further accelerated their growth. Modern applications collecting large datasets with long time series require fast automated decisions. Moreover, legislation such as the European General Data Protection Regulation now requires explanations for automated decisions. Although state-of-the-art time series classification methods are highly accurate, their computationally expensive and complex models are typically impractical for large datasets or cannot provide explanations. To address these issues, this thesis proposes two time series classification methods that are comparable to state-of-the-art methods in terms of accuracy while providing scalable and explainable classifications. Our first method introduces a novel supervised selection of sub-series to pre-compute a set of features that maximizes classification accuracy when fed into a tree-based ensemble. Our second method further introduces a perturbation scheme for the supervised feature selection and the node-splitting process when training the tree-based ensemble. We also propose a highly time-efficient strategy to build the tree-based ensemble. Both methods enable explainability for our classification results while being orders of magnitude faster than state-of-the-art methods. Our second method, in particular, is significantly faster and more accurate than our first, while not significantly different from the state-of-the-art methods in terms of classification accuracy. Moreover, motivated to explore a more general model for time series classification, we propose a novel graph-based method that learns to classify time series without the order constraint inherent to time series data. This method classifies time series by learning the relationships among data points independently of their positions (i.e., time stamps) within the series.
We show that this method outperforms state-of-the-art methods over several time series datasets, thus opening up a new direction for the design of time series classifiers.
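The interval-based feature extraction that feeds the tree-based ensembles described above can be sketched as follows. This is a minimal illustration with hypothetical names; the thesis's supervised interval selection and perturbation scheme are not reproduced here:

```python
import statistics

def interval_features(series, intervals):
    """Summarise each sub-series (interval) by its mean, std, and slope.

    `intervals` is a list of (start, end) index pairs; each interval must
    contain at least two points. Concatenating these per-interval summaries
    yields a fixed-length feature vector suitable for a tree-based ensemble.
    """
    feats = []
    for start, end in intervals:
        window = series[start:end]
        mean = statistics.fmean(window)
        feats.append(mean)
        feats.append(statistics.pstdev(window))
        # Least-squares slope of the window against its time index.
        n = len(window)
        xbar = (n - 1) / 2.0
        denom = sum((x - xbar) ** 2 for x in range(n))
        slope = sum((x - xbar) * (y - mean)
                    for x, y in enumerate(window)) / denom
        feats.append(slope)
    return feats

# A perfectly linear sub-series has slope 1.0 per time step.
assert abs(interval_features([0, 1, 2, 3], [(0, 4)])[2] - 1.0) < 1e-9
```

Because each feature is tied to a named statistic over a concrete time interval, the resulting tree splits can be read back as human-interpretable rules, which is the basis of the explainability claim.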