Computing and Information Systems - Theses

Search Results

Now showing 1 - 7 of 7
  • Item
    Understanding how cloud computing enables business model innovation in start-up companies
    Alrokayan, Mohammed ( 2017)
    Start-up companies contribute significantly to the national economies of many countries, but their failure rate is notably high. Successful start-ups typically depend on innovative business models to be competitive and maintain profitability. This thesis explores how the new technologies of cloud computing might enable start-ups to create and maintain competitive advantage. A conceptual framework called Cloud-Enabled Business Model Innovation (CEBMI) is presented that identifies three research questions concerning how cloud computing might enable business model innovation, what form this innovation takes, and how the innovation leads to competitive advantage. These questions were then investigated through three empirical studies: six case studies with start-ups, and two qualitative studies involving interviews with 11 business consultants and three cloud service providers. The detailed findings are presented as a set of key propositions that offer answers to the research questions and together sketch a view of how CEBMI might enable start-ups to achieve competitive advantage.
  • Item
    Energy and carbon-efficient resource management in geographically distributed cloud data centers
    Khosravi, Atefeh ( 2017)
    Cloud computing provides on-demand access to computing resources for users across the world. It offers services on a pay-as-you-go model through data center sites that are scattered across diverse geographies. However, cloud data centers consume huge amounts of electricity and leave a substantial carbon footprint, making them responsible for about 2% of global CO2 emissions, on par with the aviation industry. Energy- and carbon-efficient techniques for distributed cloud data centers are therefore essential, and cloud providers, while efficiently allocating computing resources to users, should also meet the required quality of service. The main objective of this thesis is to address the problem of energy- and carbon-efficient resource management in geographically distributed cloud data centers. It focuses on techniques for VM placement, investigates the parameters with the largest effect on energy and carbon cost, studies the migration of VMs between data center sites to harvest renewable energy sources, and predicts renewable energy availability to maximize its usage. The key contributions of this thesis are as follows: (1) A VM placement algorithm that optimally selects the data center and server to reduce energy consumption and carbon footprint while considering energy- and carbon-related parameters. (2) A dynamic method for the initial placement of VMs in geographically distributed cloud data centers that simultaneously considers energy and carbon cost and maximizes renewable energy utilization at each data center to minimize the total cost. (3) Variations of VM placement methods, which explore the effects of different parameters in minimizing energy and carbon cost for a cloud computing environment. (4) An optimal offline algorithm and two online algorithms, which exploit available renewable energy levels across distributed data center sites for VM migration to minimize total energy cost and maximize renewable energy usage. (5) A prediction model for renewable energy availability at data center sites, incorporated into the online VM migration algorithms to maximize renewable energy usage.
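    The data center selection step described in contribution (1) can be pictured as ranking candidate sites by the marginal energy and carbon cost of hosting one more VM. The short Python sketch below illustrates only that general idea; the site names, parameters (PUE, carbon intensity, energy price, a notional carbon price) and the greedy rule are hypothetical assumptions, not the algorithm developed in the thesis.

```python
# Illustrative sketch (not the thesis's algorithm): a greedy, energy- and
# carbon-aware placement that ranks candidate data centers by the marginal
# cost of hosting one more VM. All site parameters are hypothetical.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    pue: float                 # power usage effectiveness of the data center
    carbon_intensity: float    # kg CO2 per kWh of grid energy
    energy_price: float        # $ per kWh
    renewable_kw: float        # currently available renewable power (kW)

def marginal_cost(site: Site, vm_power_kw: float, carbon_price: float) -> float:
    """Estimated hourly cost (dollars) of placing one more VM at this site."""
    total_kw = vm_power_kw * site.pue                   # IT power scaled by facility overhead
    grid_kw = max(0.0, total_kw - site.renewable_kw)    # draw on renewables first
    energy_cost = grid_kw * site.energy_price
    carbon_cost = grid_kw * site.carbon_intensity * carbon_price
    return energy_cost + carbon_cost

def place_vm(sites: list[Site], vm_power_kw: float = 0.3, carbon_price: float = 0.05) -> Site:
    """Pick the site with the lowest combined energy + carbon cost for the new VM."""
    return min(sites, key=lambda s: marginal_cost(s, vm_power_kw, carbon_price))

if __name__ == "__main__":
    sites = [
        Site("us-east", pue=1.6, carbon_intensity=0.45, energy_price=0.10, renewable_kw=0.0),
        Site("eu-north", pue=1.2, carbon_intensity=0.05, energy_price=0.12, renewable_kw=0.5),
    ]
    print(place_vm(sites).name)   # eu-north: cleaner grid and spare renewable power
```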
  • Item
    Auto-scaling and deployment of web applications in distributed computing clouds
    Qu, Chenhao ( 2016)
    Cloud Computing, which allows users to acquire and release resources based on real-time demand from large data centers in a pay-as-you-go model, has attracted considerable attention from the ICT industry. Many web application providers have moved or plan to move their applications to the Cloud, as it enables them to focus on their core business by freeing them from the task and cost of managing their data center infrastructures, which are often over-provisioned or under-provisioned under a dynamic workload. Applications these days commonly serve customers from geographically dispersed regions. Therefore, to meet stringent Quality of Service (QoS) requirements, they have to be deployed in multiple data centers close to the end customer locations. However, efficiently utilizing Cloud resources to reach high cost-efficiency, low network latency, and high availability is a challenging task for web application providers, especially when the provider intends to deploy the application in multiple geographically distributed Cloud data centers. Problems including how to identify satisfactory Cloud offerings, how to choose geographical locations of data centers so that network latency is minimized, how to provision the application at minimum cost, and how to guarantee high availability under failures and flash crowds must be addressed to enable QoS-aware and cost-efficient utilization of Cloud resources. This thesis investigates techniques and solutions for these questions to help application providers efficiently manage the deployment and provisioning of their applications in distributed computing Clouds. It extends the state of the art by making the following contributions: 1. A hierarchical fuzzy inference approach for identifying satisfactory Cloud services according to individual requirements. 2. Algorithms for selecting multi-Cloud data centers and deploying applications on them to minimize Service Level Objective (SLO) violations for web applications requiring strong consistency. 3. An auto-scaler for web applications that achieves both high availability and significant cost savings by using heterogeneous spot instances. 4. An approach that mitigates the impact of short-term application overload caused by either resource failures or flash crowds in any individual data center through geographical load balancing.
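    Contribution 4 rests on the idea of temporarily redirecting excess traffic from an overloaded data center to nearby sites with spare capacity. The Python sketch below shows one naive way to express that step; the site names, capacities, latencies and the greedy nearest-first rule are illustrative assumptions rather than the approach developed in the thesis.

```python
# Illustrative sketch only: a simple geographical load-balancing step that sheds
# excess requests from an overloaded data center to the nearest sites with spare
# capacity. Capacities, latencies and the greedy policy are assumed for illustration.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    capacity_rps: float   # requests/s the site can serve within its SLO
    load_rps: float       # current request arrival rate
    latency_ms: dict      # network latency to the other sites, by name

def rebalance(overloaded: DataCenter, others: list[DataCenter]) -> dict:
    """Return how many requests/s to redirect from `overloaded` to each other site."""
    excess = max(0.0, overloaded.load_rps - overloaded.capacity_rps)
    plan = {}
    # Prefer the closest sites so redirected users see the smallest latency penalty.
    for dc in sorted(others, key=lambda d: overloaded.latency_ms[d.name]):
        if excess <= 0:
            break
        spare = max(0.0, dc.capacity_rps - dc.load_rps)
        moved = min(excess, spare)
        if moved > 0:
            plan[dc.name] = moved
            excess -= moved
    return plan   # any remaining excess would require scaling out new instances

if __name__ == "__main__":
    us = DataCenter("us", 1000, 1400, {"eu": 90, "ap": 160})
    eu = DataCenter("eu", 1000, 600, {})
    ap = DataCenter("ap", 1000, 900, {})
    print(rebalance(us, [eu, ap]))   # {'eu': 400.0}
```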
  • Item
    Energy-efficient management of resources in container-based clouds
    Fotuhi Piraghaj, Sareh ( 2016)
    Cloud computing enables access to a shared pool of virtual resources through the Internet, and its adoption rate is increasing because of its high availability, scalability, and cost effectiveness. However, cloud data centers are among the fastest-growing energy consumers, and half of their energy consumption is wasted, mostly because of inefficient allocation of server resources. Therefore, this thesis focuses on software-level energy management techniques that are applicable to containerized cloud environments. Containerized clouds are studied because containers are rapidly gaining popularity and are expected to become a major deployment model in cloud environments. The main objective of this thesis is to propose an architecture and algorithms to minimize data center energy consumption while maintaining the required Quality of Service (QoS). This objective is addressed through improvements in resource utilization at both the server and virtual machine levels. We investigate two approaches to minimizing energy consumption in a containerized cloud environment, namely VM sizing and container consolidation. The key contributions of this thesis are as follows: 1. A taxonomy and survey of energy-efficient resource management techniques in PaaS and CaaS environments. 2. A novel architecture for virtual machine customization and task mapping in a containerized cloud environment. 3. An efficient VM sizing technique for hosting containers, and an investigation of the impact of workload characterization on the efficiency of the determined VM sizes. 4. The design and implementation of a simulation toolkit that enables modeling of containerized cloud environments. 5. A framework for dynamic consolidation of containers and a novel correlation-aware container consolidation algorithm. 6. A detailed comparison of the energy efficiency of container consolidation algorithms with traditional virtual machine consolidation for containerized cloud environments.
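    The correlation-aware idea behind contribution 5 can be illustrated with a toy destination-selection rule: avoid packing a container onto a host whose resident containers tend to peak at the same time. The Python sketch below (requiring Python 3.10 for statistics.correlation) is only an illustration of that idea; the hosts, CPU traces and scoring rule are assumptions, not the thesis's algorithm.

```python
# Illustrative sketch only: a correlation-aware destination choice for container
# consolidation, avoiding co-location of containers whose CPU demand peaks together.
import statistics

def correlation(a: list[float], b: list[float]) -> float:
    """Pearson correlation of two equally long CPU-usage traces."""
    return statistics.correlation(a, b)   # available in Python 3.10+

def pick_host(container_trace: list[float], hosts: dict) -> str:
    """Choose the host whose resident containers are least correlated with the candidate.

    `hosts` maps a host name to the list of CPU traces of the containers it runs.
    """
    def score(host_traces: list[list[float]]) -> float:
        if not host_traces:
            return -1.0   # an empty host cannot correlate with anything
        return max(correlation(container_trace, t) for t in host_traces)

    return min(hosts, key=lambda name: score(hosts[name]))

if __name__ == "__main__":
    candidate = [10, 80, 15, 85, 12]          # bursty container
    hosts = {
        "host-a": [[12, 75, 18, 90, 10]],     # peaks together with the candidate
        "host-b": [[80, 15, 70, 10, 78]],     # peaks when the candidate is idle
    }
    print(pick_host(candidate, hosts))        # host-b
```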
  • Item
    Resource provisioning and scheduling algorithms for scientific workflows in cloud computing environments
    Rodriguez Sossa, Maria Alejandra ( 2016)
    Scientific workflows describe a series of computations that enable the analysis of data in a structured and distributed manner. Their importance is amplified in today's big data era, as they become a compelling means to process and extract knowledge from the ever-growing data produced by increasingly powerful tools such as telescopes, particle accelerators, and gravitational wave detectors. Due to their large-scale nature, scheduling algorithms are key to efficiently automating their execution in distributed environments and, as a result, to facilitating and accelerating the pace of scientific progress. The emergence of the latest distributed system paradigm, cloud computing, brings with it tremendous opportunities to run workflows at low cost without the need to own any infrastructure. In particular, Infrastructure as a Service (IaaS) clouds offer an easily accessible, flexible, and scalable infrastructure for the deployment of these scientific applications by providing access to a virtually infinite pool of resources that can be acquired, configured, and used as needed and are charged on a pay-per-use basis. This thesis investigates novel resource provisioning and scheduling approaches for scientific workflows in IaaS clouds. They address fundamental challenges that arise from the multi-tenant, resource-abundant, and elastic resource model and are capable of fulfilling a set of quality of service requirements expressed in terms of execution time and cost. It advances the field by making the following key contributions: 1. A taxonomy and survey of state-of-the-art scientific workflow scheduling algorithms designed exclusively for IaaS clouds.
  2. A novel static scheduling algorithm that leverages Particle Swarm Optimization to generate a workflow execution and resource provisioning plan that minimizes the infrastructure cost while meeting a deadline constraint (a simplified fitness-evaluation sketch appears after this list).
 3. A hybrid algorithm based on a variation of the Unbounded Knapsack Problem that finds a trade-off between making static decisions to find better-quality schedules and dynamic decisions to adapt to unexpected delays. 
  4. A scalable algorithm that combines heuristics and two different Integer Programming models to generate schedules that minimize the execution time of the workflow while meeting a budget constraint.
  5. The implementation of a cloud resource management module and its integration into an existing Workflow Management System.
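    To make the deadline-constrained, cost-minimizing objective behind contribution 2 concrete, the sketch below shows the kind of fitness function a Particle Swarm Optimization scheduler might evaluate for each candidate task-to-VM-type assignment. The pipeline-shaped workflow, per-second billing and penalty term are simplifying assumptions for illustration and do not reproduce the algorithm developed in the thesis.

```python
# Illustrative sketch only: a toy fitness function for a PSO-style workflow
# scheduler. VM types, task lengths and the penalty rule are assumed values.
from dataclasses import dataclass

@dataclass
class VMType:
    name: str
    speedup: float        # relative compute speed (1.0 = baseline)
    price_per_sec: float  # cost of one second of use

def fitness(assignment: list[int], task_lengths: list[float],
            vm_types: list[VMType], deadline: float) -> float:
    """Cost of the schedule, heavily penalised if it misses the deadline.

    `assignment[i]` is the index of the VM type chosen for task i; the workflow is
    modelled as a simple pipeline, so task i starts when task i-1 finishes.
    """
    finish, cost = 0.0, 0.0
    for length, vm_idx in zip(task_lengths, assignment):
        vm = vm_types[vm_idx]
        runtime = length / vm.speedup
        finish += runtime                         # pipeline: tasks run one after another
        cost += runtime * vm.price_per_sec
    penalty = 1e6 * max(0.0, finish - deadline)   # make infeasible schedules unattractive
    return cost + penalty

if __name__ == "__main__":
    vm_types = [VMType("small", 1.0, 0.001), VMType("large", 2.0, 0.003)]
    tasks = [600.0, 1200.0, 300.0]                # baseline runtimes in seconds
    print(fitness([0, 1, 0], tasks, vm_types, deadline=1800.0))   # 2.7
```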

  • Item
    Service value in business-to-business cloud computing
    PADILLA, ROLAND ( 2014)
    This thesis is concerned with determining and measuring the components of service value in the business-to-business cloud computing context. Although service value measurement and its perceptions have been identified as key issues for researchers and practitioners, theoretical and empirical studies have faced great challenges in measuring perceptions of service value in numerous business contexts. The thesis first determines the components of service value and then measures the service value perceptions of users in a business-to-business context of cloud computing. In this thesis, I: • undertook qualitative in-depth interviews (N=21) of managers who are responsible for deciding on the adoption and maintenance of cloud computing services. Two key findings of the interviews are that the four components of an established service value model from a business-to-consumer setting are appropriate in a business-to-business context of cloud computing, and that an additional component, which we call cloud service governance, applies and does not fit within the existing four components; • conducted a survey (N=328) of cloud computing practitioners to demonstrate that the findings from the qualitative in-depth interviews are generalisable to a number of industry sectors and across geographical locations; • assessed the measurement models, comprising both reflective and formative constructs, and the structural model by using partial least squares structural equation modeling, and provided evidence for specifying Service Value as a formative second-order hierarchical latent variable by using a sequential latent variable score method; • demonstrated that Service Equity is not a statistically significant component of service value in the first-order model, that Service Quality is consistently significant in both the first-order model and the second-order formative model, and that the additional construct called Cloud Service Governance is significant; and • for the first time, fully tested a reliable service value instrument for use by customers of cloud computing, aiming to engage cloud service providers in order to enhance customer satisfaction and increase repurchase intentions.
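    As a rough illustration of the "sequential latent variable score" idea mentioned above, the toy below forms first-order component scores from simulated indicators and then estimates weights for a formative second-order composite with ordinary least squares. It is only a generic sketch using invented data and component names; it is not the PLS-SEM procedure or measurement model used in the thesis.

```python
# Illustrative sketch only: a two-stage latent-variable-score toy. Stage 1 scores
# first-order components from their indicators; stage 2 treats the second-order
# construct as formative and estimates component weights by least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated indicator blocks for three hypothetical first-order components.
blocks = {
    "service_quality":  rng.normal(size=(n, 3)),
    "service_equity":   rng.normal(size=(n, 3)),
    "cloud_governance": rng.normal(size=(n, 3)),
}

# Stage 1: score each first-order component (here, simply the mean of its indicators).
scores = np.column_stack([x.mean(axis=1) for x in blocks.values()])

# A simulated outcome (e.g. repurchase intention) driven mostly by two components.
outcome = 0.6 * scores[:, 0] + 0.1 * scores[:, 1] + 0.5 * scores[:, 2] \
          + rng.normal(scale=0.3, size=n)

# Stage 2: regress the outcome on the component scores to obtain formative weights,
# then build the second-order composite from those weights.
X = np.column_stack([np.ones(n), scores])
weights, *_ = np.linalg.lstsq(X, outcome, rcond=None)
composite = scores @ weights[1:]

for name, w in zip(blocks, weights[1:]):
    print(f"{name}: weight {w:.2f}")
```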
  • Item
    Resource provisioning in spot market-based cloud computing environments
    VOORSLUYS, WILLIAM ( 2014)
    Recently, cloud computing providers have started offering unused computational resources in the form of dynamically priced virtual machines (VMs), also known as "spot instances". In spite of the apparent economic advantage, an intermittent nature is inherent to these biddable resources, which may cause VM unavailability. When an out-of-bid situation occurs, i.e. the current spot price goes above the user's maximum bid, spot instances are terminated by the provider without prior notice. This thesis presents a study on employing spot instances as a means of executing computational jobs on cloud resources. We start by proposing a resource management and job scheduling policy, named SpotRMS, which addresses the problem of running deadline-constrained compute-intensive jobs on a pool of low-cost spot instances, while also exploiting variations in price and performance to run applications in a fast and economical way. This policy relies on job runtime estimations to decide which types of spot instances are best suited to run each job and when jobs should run. It is able to minimise monetary spending and ensure that jobs finish within their deadlines. We also propose an improvement to SpotRMS that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications in a fast and economical way. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning policy is proposed. Our solution employs price and runtime estimation mechanisms, as well as three fault tolerance techniques, namely checkpointing, task duplication and migration. As a further improvement, we equip SpotRMS with prediction-assisted resource provisioning and bidding strategies. Our results demonstrate that both cost savings and strict adherence to deadlines can be achieved when properly combining and tuning the policy mechanisms. In particular, the fault tolerance mechanism that employs migration of VM state provides superior results in virtually all metrics. Finally, we employ a statistical model of spot price dynamics to artificially generate price patterns of varying volatility. We then analyse how SpotRMS performs in environments with highly variable price levels and more frequent changes. Fault tolerance is shown to be even more crucial in such scenarios.
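    The out-of-bid behaviour described above, and the role of checkpointing in limiting lost work, can be illustrated with a small hour-granular simulation. The price trace, bid, hourly billing and checkpoint policy in the Python sketch below are assumptions for illustration only; the sketch is not SpotRMS or the thesis's provisioning policy.

```python
# Illustrative sketch only: simulating a job on a spot instance with periodic
# checkpointing. When the spot price exceeds the user's bid, the instance is
# revoked and work since the last checkpoint is lost.
def run_on_spot(price_trace: list[float], bid: float, job_hours: int,
                checkpoint_every: int = 1) -> tuple[int, float]:
    """Return (hours elapsed until completion, total cost) for an hour-granular simulation."""
    done = 0            # hours of useful work completed and safely checkpointed
    progress = 0        # hours of work since the last checkpoint
    cost = 0.0
    for hour, price in enumerate(price_trace, start=1):
        if price > bid:
            progress = 0                     # out-of-bid: instance revoked, unsaved work lost
            continue
        cost += price                        # pay the spot price for this running hour
        progress += 1
        if progress >= checkpoint_every:     # persist progress periodically
            done += progress
            progress = 0
        if done + progress >= job_hours:
            return hour, cost
    raise RuntimeError("job did not finish within the price trace")

if __name__ == "__main__":
    trace = [0.05, 0.06, 0.20, 0.05, 0.05, 0.05]        # $/hour; a price spike in hour 3
    print(run_on_spot(trace, bid=0.10, job_hours=4))    # finishes in hour 5 at about $0.21
```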