CEDULE: A scheduling framework for burstable performance in cloud computing

Pinciroli, R.;
2018-01-01

Abstract

Cloud service providers offer "burstable performance instances" that can temporarily ramp up their performance to handle bursty workloads by utilizing spare resources. The state-of-the-practice approach to using the available burst capacity is independent of the workload, which results in squandered spare resources. In this work, we quantify and optimize the efficiency of using burst capacity so that it benefits both cloud service providers and end users. More specifically, we use a throttling mechanism as a control knob to continuously adapt the amount of spare resources based on workload characteristics such as traffic intensity. To identify the optimal throttling level, we integrate lightweight profiling and quantile regression in a synergistic way and build a prediction model that accurately predicts tail latency. We build an autonomic scheduling framework called CEDULE that makes adaptive scheduling decisions to maximize the efficiency of spare resources while achieving user-defined SLOs. We conduct extensive experimental evaluations of the proposed scheduling framework on Amazon EC2 using popular benchmark applications such as Sysbench, YCSB, and TPC-W. Experimental results demonstrate the high accuracy of the prediction model, with average errors ranging from 1% to 15%. The effectiveness of CEDULE is verified as it can triple the efficiency of spare resources while meeting stringent SLOs.
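The abstract only summarizes the approach, but its core prediction step can be illustrated with a short, hedged sketch. The Python snippet below is not CEDULE's implementation; it merely shows how quantile regression (here via statsmodels' QuantReg) could map a CPU-throttling level observed during lightweight profiling to a predicted 95th-percentile latency, and how a scheduler could then pick the most aggressive throttle that still meets an SLO. The 1/quota feature, the synthetic profiling data, and the 40 ms SLO are all assumptions made for illustration.

# Minimal sketch, assuming synthetic profiling data (not CEDULE's actual model).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical lightweight-profiling samples: the CPU quota granted by the
# throttling knob (fraction of a core) and the observed request latency (ms).
cpu_quota = rng.uniform(0.2, 1.0, size=500)
latency_ms = 10.0 / cpu_quota + rng.exponential(scale=2.0, size=500)

# Fit the 95th-percentile (tail) latency as a function of the throttling level.
X = sm.add_constant(1.0 / cpu_quota)
tail_fit = sm.QuantReg(latency_ms, X).fit(q=0.95)

# Evaluate candidate throttling levels and keep the lowest quota whose
# predicted tail latency still satisfies the hypothetical 40 ms SLO.
candidates = np.linspace(0.2, 1.0, 17)
pred_tail = tail_fit.predict(sm.add_constant(1.0 / candidates))
feasible = candidates[pred_tail <= 40.0]
print("Lowest CPU quota meeting the SLO:",
      feasible.min() if feasible.size else "none")

In this sketch the quantile (q=0.95) plays the role of the tail-latency percentile tied to the SLO; a scheduler would refit the model as new profiling samples arrive and re-select the throttling level accordingly.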
Year: 2018
ISBN: 978-1-5386-5139-1
Keywords: Adaptive scheduling, Burstable instances, Cloud computing, Scheduling
Files in this record:
File: 2018_15thIEEEICAC_141_Ali.pdf
Access: Open access
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 575.96 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12571/27431
Citations
  • PubMed Central: ND
  • Scopus: 17
  • Web of Science: 13