Computing and Information Systems - Theses

  • Item
    Brownout-oriented and energy efficient management of cloud data centers
    Xu, Minxian (2018)
    The cloud computing paradigm supports dynamic provisioning of resources, delivering computing to applications as utility services on a pay-as-you-go basis. However, the energy consumption of cloud data centers has become a major concern, as a typical data center can consume as much energy as 25,000 households. The dominant energy-efficient approaches, such as Dynamic Voltage and Frequency Scaling (DVFS) and VM consolidation, cannot function well when the whole data center is overloaded. Therefore, a novel paradigm called brownout has been proposed, which can dynamically activate or deactivate the optional parts of the application system. Brownout has been shown to avoid overloads caused by workload changes and to achieve better load balancing and energy savings. In this thesis, we propose brownout-based approaches that address energy efficiency and cost awareness and facilitate resource management in cloud data centers. They are able to reduce data center energy consumption while ensuring the Service Level Agreements (SLAs) defined by service providers. Specifically, the thesis advances the state of the art by making the following key contributions:
    1) An approach for scheduling cloud application components with brownout. The approach models a brownout-enabled system whose application components are either mandatory or optional, and includes a brownout-based algorithm that determines when to apply brownout and how much utilization can be reduced.
    2) A resource scheduling algorithm based on brownout and an approximate Markov Decision Process. The approach considers the trade-off between the energy saved and the discount given to the user when components or microservices are deactivated.
    3) A framework that applies the brownout paradigm to container-based environments and provides fine-grained control over containers, including several scheduling policies for managing containers to achieve power savings under QoS constraints.
    4) The design and development of a software prototype based on Docker Swarm that reduces energy consumption while ensuring QoS in clouds, together with evaluations of different container scheduling policies on real testbeds to help service providers deploy services more energy-efficiently while meeting QoS constraints.
    5) A perspective model for multi-level resource scheduling and a self-adaptive approach, based on support vector machines, for interactive and batch workloads that ensures their QoS by considering the renewable energy available at Melbourne. The proposed approach is evaluated on our developed prototype system.
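    To make the brownout mechanism described in this abstract concrete, here is a minimal Python sketch of the general idea: when a host is overloaded, optional components are deactivated until utilization falls below a threshold, trading freed utilization against the discount owed to the user. The component names, numbers, and greedy policy are illustrative assumptions, not the algorithms developed in the thesis.

    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        utilization: float   # fraction of host capacity this component uses
        discount: float      # discount owed to the user if it is deactivated
        optional: bool       # only optional components may be deactivated

    def brownout(components, overload_threshold=0.8):
        """Greedily pick optional components to deactivate when overloaded."""
        total = sum(c.utilization for c in components)
        if total <= overload_threshold:
            return []  # no overload, brownout is not triggered
        # Prefer components that free the most utilization per unit of discount.
        candidates = sorted((c for c in components if c.optional),
                            key=lambda c: c.utilization / c.discount,
                            reverse=True)
        deactivated = []
        for c in candidates:
            if total <= overload_threshold:
                break
            deactivated.append(c)
            total -= c.utilization
        return deactivated

    if __name__ == "__main__":
        app = [Component("checkout", 0.40, discount=1.00, optional=False),
               Component("recommendations", 0.25, discount=0.05, optional=True),
               Component("related-items", 0.20, discount=0.03, optional=True),
               Component("ads", 0.10, discount=0.01, optional=True)]
        for c in brownout(app, overload_threshold=0.8):
            print(f"deactivate {c.name} (frees {c.utilization:.0%} utilization)")

    In this toy run the host is at 95% utilization, so the sketch deactivates the cheapest optional components ("ads", then "related-items") until utilization drops below the 80% threshold, leaving the mandatory "checkout" component untouched.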
  • Item
    Cost-efficient resource provisioning for large-scale graph processing systems in cloud computing environments
    Heidari, Safiollah (2018)
    A large amount of the data generated on the Internet every day is in the form of graphs. Many services and applications, such as social networks, the Internet of Things (IoT), mobile applications, and business applications, fall into this category: every data entity can be considered a vertex, and the relationships between entities form the edges of a graph. Since 2010, dedicated large-scale graph processing frameworks have been developed to overcome the inefficiency of traditional processing solutions such as MapReduce. However, most frameworks are designed to be deployed on high-performance computing (HPC) clusters, which are available only to those who can afford such infrastructure. Cloud computing is a new computing paradigm that offers unprecedented features such as scalability, elasticity, and a pay-as-you-go billing model, and is accessible to everyone. Nevertheless, the advantages that cloud computing can bring to the architecture of large-scale graph processing systems are less studied. Resource provisioning and management is a critical part of any processing system in cloud environments. To provision the optimal amount of resources for a particular operation, several factors such as monetary cost, throughput, scalability, and network performance can be taken into consideration. In this thesis, we investigate and propose novel solutions and algorithms for cost-efficient resource provisioning for large-scale graph processing systems. The outcome is a series of research works that increase the performance of such processing by making it aware of the operating environment while decreasing the dollar cost significantly. In particular, we have made the following contributions:
    1. We introduced iGiraph, a cost-efficient framework for processing large-scale graphs on public clouds. iGiraph also provides a new categorization of graph algorithms and processes graphs accordingly.
    2. To demonstrate the impact of the network on processing in cloud environments, we developed two network-aware algorithms that utilize network factors such as traffic and bandwidth as well as computation power.
    3. We developed an auto-scaling technique to take advantage of resource heterogeneity on clouds.
    4. We introduced a large-scale graph processing service for clouds in which service level agreement (SLA) requirements are considered in the operations. The service can handle multiple processing requests through its new prioritization and provisioning approach.
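    As an illustration of the network-awareness theme in this abstract, the following Python sketch places graph vertices onto a fixed number of worker VMs so that few edges cross VM boundaries, since cross-VM edges translate into network traffic and, on public clouds, monetary cost. The balanced greedy heuristic and the toy graph are assumptions for illustration only, not iGiraph's actual algorithms.

    import math
    from collections import defaultdict

    def greedy_partition(edges, num_workers):
        """Assign each vertex to the worker holding most of its already-placed
        neighbours, subject to a balanced capacity limit per worker."""
        adjacency = defaultdict(set)
        for u, v in edges:
            adjacency[u].add(v)
            adjacency[v].add(u)

        capacity = math.ceil(len(adjacency) / num_workers)
        assignment, load = {}, [0] * num_workers
        for vertex in adjacency:
            neighbours_on = [0] * num_workers
            for n in adjacency[vertex]:
                if n in assignment:
                    neighbours_on[assignment[n]] += 1
            eligible = [w for w in range(num_workers) if load[w] < capacity]
            # Most co-located neighbours first, then least-loaded worker.
            best = min(eligible, key=lambda w: (-neighbours_on[w], load[w]))
            assignment[vertex] = best
            load[best] += 1
        return assignment

    def edge_cut(edges, assignment):
        """Edges whose endpoints sit on different workers (inter-VM traffic)."""
        return sum(1 for u, v in edges if assignment[u] != assignment[v])

    if __name__ == "__main__":
        edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
        placement = greedy_partition(edges, num_workers=2)
        print(placement)
        print("cross-VM edges:", edge_cut(edges, placement))

    On this small example the two triangles end up on separate workers and only one edge crosses the VM boundary, which is the kind of traffic reduction a network-aware provisioning policy aims for.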
  • Item
    Integrated provisioning of compute and network resources in Software-Defined Cloud Data Centers
    Son, Jungmin (2018)
    Software-Defined Networking (SDN) has opened up new opportunities in networking technology by decoupling the control plane from the packet-forwarding hardware, enabling the network to be programmed and configured dynamically through a centralized controller. Cloud computing has been empowered by the adoption of SDN for infrastructure management in data centers, where dynamic controllability is indispensable for providing elastic services. The integrated provisioning of compute and network resources enabled by SDN is essential in clouds to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) while reducing energy consumption and resource wastage. This thesis presents joint compute and network resource provisioning in SDN-enabled cloud data centers for QoS fulfillment and energy efficiency. It focuses on techniques for allocating virtual machines and networks on physical hosts and switches considering SLA, QoS, and energy-efficiency aspects. The thesis advances the state of the art with the following key contributions:
    1. A taxonomy and survey of current research on SDN-enabled cloud computing, including state-of-the-art joint resource provisioning methods and system architectures.
    2. A modeling and simulation environment for SDN-enabled cloud data centers that abstracts the functionalities and behaviors of virtual and physical resources.
    3. A novel dynamic overbooking algorithm for energy efficiency and SLA enforcement using migration of virtual machines and network flows.
    4. A QoS-aware compute and network resource allocation algorithm based on application priority to fulfill different QoS requirements.
    5. A prototype of an integrated control platform, based on OpenStack and OpenDaylight, for joint management of cloud and network resources.
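    To illustrate what joint compute and network provisioning can look like in practice, the following Python sketch places a VM only on a host that can satisfy both its CPU demand and its requested bandwidth, preferring already-active hosts so that idle hosts can remain powered off. The data model and consolidation policy are illustrative assumptions, not the algorithms or prototype described in the thesis.

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        cpu_capacity: float          # e.g. cores or MIPS
        link_capacity: float         # uplink bandwidth in Mbps
        cpu_used: float = 0.0
        bw_used: float = 0.0
        vms: list = field(default_factory=list)

        def fits(self, cpu, bw):
            return (self.cpu_used + cpu <= self.cpu_capacity
                    and self.bw_used + bw <= self.link_capacity)

    def place_vm(hosts, vm_name, cpu, bw):
        """Place a VM on a host that satisfies both CPU and bandwidth demand,
        preferring active, highly utilized hosts (consolidation saves energy)."""
        candidates = [h for h in hosts if h.fits(cpu, bw)]
        if not candidates:
            return None  # would trigger scale-out or an SLA-aware rejection
        chosen = max(candidates,
                     key=lambda h: (bool(h.vms), h.cpu_used / h.cpu_capacity))
        chosen.cpu_used += cpu
        chosen.bw_used += bw
        chosen.vms.append(vm_name)
        return chosen

    if __name__ == "__main__":
        hosts = [Host("h1", cpu_capacity=8.0, link_capacity=1000.0),
                 Host("h2", cpu_capacity=8.0, link_capacity=1000.0)]
        for vm, cpu, bw in [("web", 2.0, 300.0), ("db", 2.0, 600.0), ("batch", 4.0, 200.0)]:
            host = place_vm(hosts, vm, cpu, bw)
            print(vm, "->", host.name if host else "rejected")

    In this toy run the first two VMs are consolidated onto one host, and the third is placed on the second host only because the first host's link bandwidth, not its CPU, would be exceeded; considering both dimensions together is the essence of joint compute/network provisioning.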