Computing and Information Systems - Theses

Now showing 1 - 2 of 2
  • Item
    Workflow Scheduling in Cloud and Edge Computing Environments with Deep Reinforcement Learning
    Jayanetti, Jayanetti Arachchige Amanda Manomi (2023-08)
    Cloud computing has firmly established itself as an indispensable platform for delivering computing services over the internet in an efficient manner. More recently, novel computing paradigms such as edge computing have also emerged to complement the traditional cloud computing paradigm. Owing to the multitude of benefits offered by cloud and edge computing environments, these platforms are increasingly used for the execution of workflows. The problem of scheduling workflows in a distributed system is NP-hard in the general case. Scheduling workflows across highly dynamic cloud and edge computing environments is even more complex due to the inherent challenges of these environments, including the need to satisfy diverse, often conflicting objectives, to coordinate executions across highly distributed infrastructures, and to adapt to dynamic operating conditions. These requirements collectively give rise to the need for adaptive workflow scheduling algorithms that are capable of satisfying diverse optimization goals amid highly dynamic conditions. Deep Reinforcement Learning (DRL) has emerged as a promising paradigm for dealing with highly dynamic and complex problems due to the ability of DRL agents to learn to operate in stochastic environments. Despite the benefits of DRL, the application of DRL techniques involves multiple challenges, including multi-objectivity, the curse of dimensionality, partial observability, and multi-agent coordination. In this thesis, we propose novel DRL algorithms and architectures to efficiently overcome these challenges.
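    To make the scheduling setting in this abstract concrete, the minimal sketch below shows how a DRL agent can map a workflow-task/cluster state to a choice of execution node and update its policy from a scalar reward. It is an illustrative toy only: the feature layout, the number of candidate nodes, the reward (a hypothetical makespan/energy trade-off), and the REINFORCE-style update are assumptions for illustration, not the algorithms or architectures proposed in the thesis.

    # Illustrative sketch only; not the thesis's method.
    import torch
    import torch.nn as nn

    N_FEATURES = 8   # e.g. task size, deadline slack, per-node load (assumed layout)
    N_NODES = 4      # candidate cloud/edge execution nodes (assumed)

    class SchedulingPolicy(nn.Module):
        """Small policy network: state -> distribution over candidate nodes."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_FEATURES, 64), nn.ReLU(),
                nn.Linear(64, N_NODES),
            )

        def forward(self, state):
            return torch.distributions.Categorical(logits=self.net(state))

    policy = SchedulingPolicy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # One REINFORCE-style update for a single (dummy) scheduling decision.
    state = torch.rand(N_FEATURES)            # placeholder task/cluster state
    dist = policy(state)
    node = dist.sample()                      # node chosen for the ready task
    # Hypothetical reward trading off makespan and energy, e.g.
    # reward = -(alpha * finish_time + (1 - alpha) * energy_used)
    reward = torch.tensor(1.0)                # dummy value for the sketch
    loss = -dist.log_prob(node) * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    In a full scheduler the state would encode the ready tasks and current resource utilisation, and the reward would only be computed once the task (or the whole workflow) completes.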
  • Item
    Cost-efficient Management of Cloud Resources for Big Data Applications
    Islam, Muhammed Tawfiqul (2020)
    Analyzing vast amounts of business and user data on big data analytics frameworks is becoming a common practice in organizations seeking a competitive advantage. These frameworks are usually deployed in a computing cluster to meet the analytics demands in every major domain, including business, government, financial markets, and health care. However, buying and maintaining a massive amount of on-premises resources is costly and difficult, especially for start-ups and small business organizations. Cloud computing provides infrastructure, platform, and software systems for storing and processing data. Thus, Cloud resources can be utilized to set up a cluster with the required big data processing framework. However, several challenges need to be addressed for Cloud-based big data processing, including deciding how many Cloud resources are needed for each application, how to maximize the utilization of these resources to improve application performance, and how to reduce the monetary cost of resource usage. In this thesis, we take a user-centric view, where a user can be either an individual or a small/medium business organization that wants to deploy a big data processing framework on the Cloud. We explore how resource management techniques can be tailored to various user demands, such as performance improvement and deadline guarantees for applications, all while reducing the monetary cost of using the cluster. In particular, we propose efficient resource allocation and scheduling mechanisms for Cloud-deployed Apache Spark clusters.
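    As a deliberately simplified illustration of the cost-aware resource selection discussed in this abstract, the sketch below greedily picks the cheapest VM type whose estimated runtime meets an application's deadline. The VM catalogue, the prices, and the linear runtime model are hypothetical; this is not the allocation or scheduling mechanism proposed in the thesis.

    # Illustrative sketch only; not the thesis's mechanism.
    from dataclasses import dataclass

    @dataclass
    class VMType:
        name: str
        cores: int
        price_per_hour: float   # assumed on-demand price

    # Hypothetical VM catalogue.
    CATALOGUE = [
        VMType("small", 2, 0.10),
        VMType("medium", 4, 0.19),
        VMType("large", 8, 0.37),
    ]

    def estimated_runtime_hours(total_core_hours: float, cores: int) -> float:
        """Crude model: the job's work divides evenly across cores (assumption)."""
        return total_core_hours / cores

    def cheapest_vm_meeting_deadline(total_core_hours: float, deadline_hours: float):
        """Return the lowest-cost (VM type, cost) whose estimated runtime meets the deadline."""
        feasible = [
            (vm, estimated_runtime_hours(total_core_hours, vm.cores) * vm.price_per_hour)
            for vm in CATALOGUE
            if estimated_runtime_hours(total_core_hours, vm.cores) <= deadline_hours
        ]
        if not feasible:
            return None  # no single VM meets the deadline; a real system would scale out
        return min(feasible, key=lambda pair: pair[1])

    # Example: a Spark job needing roughly 20 core-hours with a 4-hour deadline.
    choice = cheapest_vm_meeting_deadline(total_core_hours=20, deadline_hours=4)
    if choice:
        vm, cost = choice
        print(f"Run on a '{vm.name}' VM, estimated cost ${cost:.2f}")

    A real deployment would estimate runtimes from job profiles and would also consider multi-VM clusters, spot pricing, and executor placement, which is the kind of problem the thesis's resource allocation and scheduling mechanisms address.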