
Somewhat dated (2011), but an interesting read.

A Measurement Study of Server Utilization in Public Clouds, Huan Liu, in Proc. Int. Conf. on Cloud and Green Computing (CGC), Sydney, Australia, Dec. 2011

...an extensive study of server CPU utilization across two different public cloud providers. To measure a cloud physical server’s utilization, we launch a small probing Virtual Machine (VM) (often the smallest VM offered) in a cloud provider, and then from within the VM, we monitor the CPU utilization of the underlying hardware machine. Since a cloud is built around multi-tenancy, there are several other VMs running on the same physical hardware. By measuring the underlying hardware’s CPU utilization, we measure the collective CPU utilization of other VMs sitting on the same hardware.
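The paper describes its own probing technique in detail; purely as an illustration of the general idea, and not the authors' method, the sketch below samples the "steal" counter from /proc/stat on a Linux guest. Steal time is the share of CPU the guest wanted but the hypervisor gave to someone else, so it is one crude, indirect signal of contention from co-located tenants.

```python
# Illustrative only -- not the probing technique from the Liu paper.
# On a Linux guest under a hypervisor that reports "steal" time (e.g. Xen/KVM),
# the steal field in /proc/stat counts jiffies the guest was runnable but the
# host scheduled someone else, a rough proxy for co-tenant contention.
import time

def cpu_counters():
    """Return (steal, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()
    values = list(map(int, fields[1:]))
    # Field order: user nice system idle iowait irq softirq steal guest guest_nice
    steal = values[7] if len(values) > 7 else 0
    return steal, sum(values)

def steal_percent(interval=5.0):
    """Sample twice and report the share of time stolen by the hypervisor."""
    s1, t1 = cpu_counters()
    time.sleep(interval)
    s2, t2 = cpu_counters()
    return 100.0 * (s2 - s1) / max(t2 - t1, 1)

if __name__ == "__main__":
    while True:
        print(f"steal: {steal_percent():.1f}%")
```

Note that steal time only reflects contention for the probing VM's own scheduling slots; the paper's measurements aim at the overall utilization of the physical host, which takes more care than this.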

Kurt Marko:

By operating at different planes of abstraction, public cloud services and private cloud infrastructure make it virtually impossible to have a coherent, seamless hybrid cloud design. As organizations mature in their understanding and use of public services like AWS and move beyond simply treating them as rentable virtual server farms, they will internalize the public-private dichotomy and see that their hybrid cloud strategy has flaws. Until the industry better addresses the abstraction-layer mismatch, I expect to see more and more organizations rethinking their hybrid cloud plans.

OPA is a lightweight general-purpose policy engine that can be co-located with your service. You can integrate OPA as a sidecar, host-level daemon, or library.

Services offload policy decisions to OPA by executing queries. OPA evaluates policies and data to produce query results (which are sent back to the client). Policies are written in a high-level declarative language and can be loaded into OPA via the filesystem or well-defined APIs.
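A minimal sketch of that flow, assuming a local OPA sidecar on its default port (the package name httpapi.authz, the allow rule, and the inputs are all invented for illustration): the service loads a small policy over OPA's REST API, then offloads each decision with a query.

```python
# Minimal sketch: load a policy into a local OPA sidecar and query it.
# Assumes OPA is already running as a server, e.g.:  opa run --server
import json
import urllib.request

OPA = "http://localhost:8181"

POLICY = """
package httpapi.authz

default allow = false

# Classic (pre-1.0) Rego syntax; newer OPA releases expect 'allow if { ... }'.
allow { input.method == "GET"; input.path == "/health" }
allow { input.user == "alice" }
"""

def load_policy(policy_id, rego_text):
    req = urllib.request.Request(
        f"{OPA}/v1/policies/{policy_id}",
        data=rego_text.encode(),
        headers={"Content-Type": "text/plain"},
        method="PUT")
    return urllib.request.urlopen(req).read()

def query(input_doc):
    """Offload the decision: POST the request context, get back the rule's value."""
    req = urllib.request.Request(
        f"{OPA}/v1/data/httpapi/authz/allow",
        data=json.dumps({"input": input_doc}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")
    return json.loads(urllib.request.urlopen(req).read())["result"]

if __name__ == "__main__":
    load_policy("httpapi-authz", POLICY)
    print(query({"method": "GET", "path": "/health", "user": "bob"}))   # True
    print(query({"method": "POST", "path": "/admin", "user": "bob"}))   # False
```

The same policy could also be loaded from the filesystem when OPA starts; either way, the calling service only ever sees a decision, never the policy logic itself.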


The Ship Show #46 podcast, The Epistemology of DevOps, originally published 13 August 2014, has many takeaways, mostly focusing on organizational and process "debt", issues that have little to do with the technical topics usually discussed in the "DevOps" context.

Participants:

12:52 Primary focus of the podcast starts about here

14:13 Kevin Behr remarks that he senses "restlessness about scaling" in the industry. This isn't directly about the organizational/process debt discussed later, but it's insightful: posts and tweets about scaling concerns are pervasive today and have clearly been a concern for some time.

16:53 Engineers want to know, "Is there an RFC for DevOps so I can use it like a tool?"

17:30 We don't need to standardize DevOps (so it can be productized and sold)

18:05 Developers: "If you ops people would just expose an API so we can interact with you like robots..."

18:25 DevOps is not about optimizing for developers

20:25 Culture is an abstraction we invent to represent interactions in a system

25:30 Frederick Taylor did atomistic science, focusing on the individual. Lean focuses on the system.

26:25 Science means we don't know, so we have to keep asking in a structured way. Once we know, we do engineering.

30:35 Almost no company teaches how to improve daily

37:10 The ability to transmit information among people is the limiting factor in most organizations

44:00 Coal miners cross-training for more productivity and safety through learning in a complex, dangerous environment

46:15 Increase the response repertoire

47:00 ITIL is good for simple environments where things are constrained

51:00 Difficulties in creating emergent teams to deal with problems

51:40 Meetings to deal with problems diffuse responsibility, and people think they provide safety

52:00 Ephemeral crews to deal with dynamic capabilities; cross silos like cells with permeable walls

54:00 The pragmatic maxim

55:15 Explore It! by Elisabeth Hendrickson

56:45 Heresy in DevOps is OK

59:20 What's wrong with the enterprise today? Everything is a project. Actually, we're on a permanent change footing.

61:35 We're not resources, we're humans.

62:15 Conversation ends


In this highly recommended episode of DevOps Cafe, John Willis interviews Damon Edwards about his #DOES15 presentation, "DevOps Kaizen: Practical Steps to Start & Sustain a Transformation".

YouTube video of the DOES15 presentation. Slideshare deck.

A few takeaways:

  • High-performing companies are different because they have the ability to improve by learning fast
  • Plan, Do, Study, Act (PDSA) and Observe, Orient, Decide, Act (OODA) are important variant descriptions of feedback-loop techniques
  • Organizations are unable to improve because:
    • knowledge work isn't visible as it would be on a factory floor. It's locked in people's minds.
    • people don't understand the whole process for developing and delivering a feature, but only their own, limited context
    • "silo effects" act to keep organizations from understanding how to optimize to develop solutions for business goals. Everyone stays within their own limited scope.
  • Organizations often have a "big bang" transformation dream. Generally, though, after some initial improvement, the magnitude of the required effort leads to fear, then panic. The initial goal is aborted and victory is declared after a few small improvements.
  • Instead of big bang approaches, decompose the effort into smaller, achievable goals, even micro goals, that build confidence with success.
  • Develop an organization-wide focus on service delivery metrics (a rough sketch of computing these appears after this list):
    • duration and predictability of lead time from inception to delivery
    • mean time to detect issues
    • mean time to repair issues
    • focus on quality at the source to minimize rework
  • Use a graphical representation to retrospectively describe a project. By doing so, you will understand the interactions across the organization required to deliver a concept to a customer-facing reality.
  • The retrospective analysis provides a horizontal description, across "silos," rather than focusing on each organizational component in isolation. Identify wastes, inefficiencies, and bottlenecks. Also identify where "heroic" intervention is needed, since heroics are a constraint that can't be counted on. Then identify countermeasures.
  • Use improvement story boards focusing on:
    • targets,
    • improvement metrics,
    • work to be done,
    • current status,
    • blockers
  • The kaizen continuous improvement process can be an overlay for any delivery methodology
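As a rough sketch of what tracking those service delivery metrics might look like (the record formats, field names, and timestamps below are invented for illustration and are not from the talk): lead time runs from inception to delivery, mean time to detect from incident start to detection, and mean time to repair from detection to fix.

```python
# Illustrative sketch of the service delivery metrics named above.
# The record formats and timestamps are invented for this example.
from datetime import datetime
from statistics import mean, pstdev

def hours(start, end):
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

# Work items: inception -> delivered to the customer
work_items = [
    {"inception": "2015-11-02T09:00", "delivered": "2015-11-06T17:00"},
    {"inception": "2015-11-03T10:00", "delivered": "2015-11-20T12:00"},
]
lead_times = [hours(w["inception"], w["delivered"]) for w in work_items]
print(f"lead time: mean {mean(lead_times):.1f} h, spread (stdev) {pstdev(lead_times):.1f} h")

# Incidents: started -> detected -> repaired
incidents = [
    {"started": "2015-11-10T02:00", "detected": "2015-11-10T02:45", "repaired": "2015-11-10T04:00"},
    {"started": "2015-11-15T13:00", "detected": "2015-11-15T13:05", "repaired": "2015-11-15T13:35"},
]
mttd = mean(hours(i["started"], i["detected"]) for i in incidents)
mttr = mean(hours(i["detected"], i["repaired"]) for i in incidents)
print(f"MTTD: {mttd:.2f} h   MTTR: {mttr:.2f} h")
```

Reporting the spread of lead time alongside its mean speaks to the "predictability" point above, not just the average speed of delivery.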

I'd consider watching the YouTube video, then listening to the podcast. Great content.

A rule of thumb for systems design, often called Gall's Law, from John Gall's book Systemantics: How Systems Really Work and How They Fail. It states:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

– John Gall (1975, p.71) Wikipedia