
Somewhat dated (2011), but an interesting read.

A Measurement Study of Server Utilization in Public Clouds
Huan Liu, in Proc. Int. Conf. on Cloud and Green Computing (CGC), Sydney, Australia, Dec. 2011

An extensive study of server CPU utilization across two different public cloud providers. From the paper:

"To measure a cloud physical server's utilization, we launch a small probing Virtual Machine (VM) (often the smallest VM offered) in a cloud provider, and then from within the VM, we monitor the CPU utilization of the underlying hardware machine. Since a cloud is built around multi-tenancy, there are several other VMs running on the same physical hardware. By measuring the underlying hardware's CPU utilization, we measure the collective CPU utilization of the other VMs sitting on the same hardware."

Users of cloud services are presented with a bewildering choice of VM types and the choice of VM can have significant implications on performance and cost. In this paper we address the fundamental problem of accurately and economically choosing the best VM for a given workload and user goals. To address the problem of optimal VM selection, we present PARIS, a data-driven system that uses a novel hybrid offline and online data collection and modeling framework to provide accurate performance estimates with minimal data collection. PARIS is able to predict workload performance for different user-specified metrics, and resulting costs for a wide range of VM types and workloads across multiple cloud providers. When compared to a sophisticated baseline linear interpolation model using measured workload performance on two VM types, PARIS produces significantly better estimates of performance. For instance, it reduces runtime prediction error by a factor of 4 for some workloads on both AWS and Azure. The increased accuracy translates into a 45% reduction in user cost while maintaining performance.
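The baseline PARIS is compared against — linear interpolation from measured workload performance on two VM types — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the choice of vCPU count as the interpolation key and all numbers below are my own assumptions.

```python
def interpolate_runtime(vcpus_a, runtime_a, vcpus_b, runtime_b, vcpus_target):
    """Estimate workload runtime on a target VM type by linearly
    interpolating (or extrapolating) between runtimes measured on
    two reference VM types, keyed on vCPU count (an assumption)."""
    slope = (runtime_b - runtime_a) / (vcpus_b - vcpus_a)
    return runtime_a + slope * (vcpus_target - vcpus_a)

# Hypothetical measurements: 120 s on a 2-vCPU VM, 70 s on an 8-vCPU VM;
# estimate the runtime on an unmeasured 4-vCPU VM type.
est = interpolate_runtime(2, 120.0, 8, 70.0, 4)
print(round(est, 1))  # prints 103.3
```

The weakness of this baseline, which motivates PARIS's richer offline profiling, is that real workloads rarely scale linearly in a single resource dimension (memory pressure, I/O, and network all break the line).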

Kurt Marko:

By operating at different planes of abstraction, public cloud services and private cloud infrastructure make it virtually impossible to have a coherent, seamless hybrid cloud design. As organizations mature in their understanding and use of public services like AWS and move beyond simply treating them as rentable virtual server farms, they will internalize the public-private dichotomy and see that their hybrid cloud strategy has flaws. Until the industry better addresses the abstraction-layer mismatch, I expect to see more and more organizations rethinking their hybrid cloud plans.

A good metric for noisy neighbor identification is high CPU steal. Some references:
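On Linux, steal time — ticks during which the hypervisor ran something other than your runnable vCPU — is the eighth field of the aggregate `cpu` line in `/proc/stat`. A minimal sketch of pulling it out (the sample line below is illustrative, not real data):

```python
def steal_ticks(proc_stat_cpu_line):
    """Return the 'steal' counter (in clock ticks) from an aggregate
    'cpu' line of /proc/stat. Field order after the 'cpu' label:
    user nice system idle iowait irq softirq steal guest guest_nice,
    so steal is at index 8."""
    fields = proc_stat_cpu_line.split()
    assert fields[0] == "cpu"
    return int(fields[8])

# Illustrative line; on a live Linux host you would read
# open("/proc/stat").readline() at intervals and watch for a rising
# steal fraction, a common noisy-neighbor signal.
sample = "cpu 10132153 290696 3084719 46828483 16683 0 25195 175688 0 0"
print(steal_ticks(sample))  # prints 175688
```

Tools like `vmstat` (the `st` column) and `top` (`%st`) report the same counter as a percentage, which is usually the more convenient view for spotting a noisy neighbor.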

The latest AWS Security Whitepaper, dated June 2016, shows changes in AWS's adherence to compliance-related standards. Some older, superseded standards were dropped in favor of newer replacements, and some entirely new standards were added.

Here is a summary, with "-" indicating removal and "+" indicating addition:

Signed up to trial Aliyun, the Chinese IaaS cloud, but found that they currently don't offer pay-as-you-go, on-demand provisioning in the US (or Hong Kong, or Singapore). Monthly contract rates are available, so they have the capacity; the allocation is just odd, considering the competition.

Aliyun IaaS on demand fail

Without actually benchmarking, the cost seems comparable to an AWS t2.medium instance with similar specifications: 2 cores, 4 GB RAM.

"whenever someone tells you that few companies use or need real-time, they’re likely referring to settings where human decision-makers are in the loop (“human real-time”). That misses the mark because the true impact of these technologies will be in applications with no humans in the loop. As UC Berkeley Professor Joe Hellerstein noted a while back, 'real-time is for robots.'"
--Ben Lorica