The evolution of internet scale enterprise

The Future of Containers in the Enterprise

February 3, 2016

Craig McLuckie, Co-founder Kubernetes Project

Over the last decade, Google has invested tremendous amounts of development resources to create systems to enable us to operate at scale. Creating complex applications that deal with huge traffic and data volumes has caused us to take novel approaches to both the architecture of applications, and our overall operating model.

The industry has started to call this approach ‘cloud native computing’, and there are three core properties that distinguish it from traditional systems:

  1. Container packaged. Running applications in containers as a hermetic unit of deployment makes deployment more predictable. It is impractical to use traditional imperative or scripted deployments at the scale we operate; any degrees of freedom in a system multiply with scale and quickly become unmanageable at internet scale.
  2. Dynamically managed. It would be practically impossible for operators to be involved in scheduling and managing the more than 2 billion containers we launch a week. We instead use smart systems that rely on the higher levels of insight containers offer to make good decisions about how many jobs to run and where to run them. This radically increases efficiency and lets our developers just focus on writing code. It also lets us create small, specialized, highly empowered operations teams that focus on providing common services.
  3. Micro-services oriented. All our systems rely on loosely coupled architectures with dependencies explicitly described through service endpoints. This makes our systems more agile and also radically increases code reuse. We don’t need to create new instances of common services for each of our sub-systems.
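The “dynamically managed” property above can be sketched as a declarative control loop: an operator states the desired state, and the system converges the actual state toward it. The names below (`Cluster`, `reconcile`) are illustrative only, not Kubernetes APIs.

```python
class Cluster:
    """Toy cluster tracking how many container replicas are running."""
    def __init__(self):
        self.running = 0

    def start_container(self):
        self.running += 1

    def stop_container(self):
        self.running -= 1


def reconcile(cluster, desired_replicas):
    """Drive actual state toward desired state, one step at a time."""
    while cluster.running < desired_replicas:
        cluster.start_container()
    while cluster.running > desired_replicas:
        cluster.stop_container()


cluster = Cluster()
reconcile(cluster, 5)   # declare 5 replicas; the loop scales up
print(cluster.running)  # 5
reconcile(cluster, 2)   # same declaration style, new target; the loop scales down
print(cluster.running)  # 2
```

The key design choice is that the operator never issues imperative start/stop commands; the same declaration works regardless of the cluster’s current state, which is what removes the “degrees of freedom” that make scripted deployments fragile at scale.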

Outside of Google, this approach is being dubbed ‘GIFEE’ -- Google Infrastructure for Everyone Else. Interestingly, it could also be called ‘FIFEE’ or ‘TIFEE’ -- both Facebook and Twitter have adopted similar approaches to computing to deal with their scale of operations. The specific details vary, but the basic patterns are consistent. This is technical coevolution at its best. There is only one practical approach to dealing with operations at internet scale and, to date, each internet company has had to create their own rendition of the same basic patterns.

As we look to the future we see more traditional enterprises being forced to tackle internet scale problems. IoT is driving unprecedented traffic levels to businesses. A more highly connected, mobile enabled customer base and workforce requires internet scale solutions to support it. It is inevitable that every enterprise will have to tackle these challenges, and as a community it makes sense to come together and assemble robust technical stacks to support these companies.

The Next Phase in Containers: Standardization

Until recently we have seen technology companies working in isolation on critical technologies to support this transformation to cloud native computing. The problem with an individualist approach is that it requires each vendor to deliver a ‘whole stack’. Without standards for the container runtime, orchestration, common services, and the myriad other pieces that go into making up a cloud native stack, each company is an island and only a few would have a shot at delivering a whole solution.

It is our belief that everyone benefits from the ability to safely specialize. If a startup has a great idea for improving the container runtime environment, they should be able to go ahead and create a unique runtime environment without having to pursue their own redistributable image format. If another startup has a great idea around scheduling that solves a specific workload’s issues, they should be able to build and sell that without having to create a whole stack.

With that in mind, as we looked to the future of Kubernetes, the container orchestration technology that was built by the same team that built Google’s internal orchestration and scheduling system (known as Borg), it made sense to contribute it to a foundation, and work with the broader community to harmonize a series of interoperable ‘stacks’ that would support Cloud Native Computing for everyone. That is why we reached out to the Linux Foundation, and a broad collection of technology partners (Intel, Red Hat, Cisco, IBM, VMware, Docker, CoreOS, Mesosphere and many more) and created the Cloud Native Computing Foundation (CNCF).

A new approach to standards.

Our goal with CNCF was not to create a traditional standards organization (i.e. define standards, then produce reference implementations), but rather to create a place where we could assemble relevant technologies under vendor neutral governance, then, over time, harmonize those technologies and, based on what works for our user base, evolve standard interfaces between layers of the stack.

Our goal is to create a clean architecture with clean interfaces (APIs) and then rely on reference implementations as semantic standards for the various pieces. Vendors can extend any area of the stack in any way that makes sense to them, but then rely on a robust qualification test suite to be ‘certified’ as being compatible with the reference implementation. API based consistency is not enough; semantic consistency is essential for our collective customers to be able to swap out parts of the stack and assemble their own renditions.
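The distinction between API consistency and semantic consistency can be sketched as a qualification test that exercises behavior rather than method signatures. The key/value store interface and both implementations below are hypothetical examples for illustration, not CNCF or Kubernetes APIs.

```python
class ReferenceStore:
    """Reference implementation: last write wins, missing keys return None."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class BrokenStore:
    """API-compatible (same method signatures), but semantically wrong:
    the first write wins instead of the last."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data.setdefault(key, value)  # silently ignores overwrites

    def get(self, key):
        return self._data.get(key)


def conforms(store_cls):
    """A tiny qualification suite: instantiate the implementation and
    assert the reference semantics, not just the method names."""
    store = store_cls()
    store.put("k", "v1")
    store.put("k", "v2")  # last write must win
    return store.get("k") == "v2" and store.get("missing") is None


print(conforms(ReferenceStore))  # True
print(conforms(BrokenStore))     # False
```

`BrokenStore` would pass any check that only inspects the API surface; only a behavioral suite like `conforms` catches the divergence, which is why certification against a reference implementation matters.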

With that in mind, we set out the following as our core values for the foundation:

  1. Fast is better than slow. We must ensure that projects progress at high rates of velocity to support aggressive adoption by users.
  2. Open. The foundation must be open and accessible, and operate independently of specific partisan corporate interests. It must accept all comers based on the merit of their contributions, and its technology must be available to all according to open source values.
  3. Fair. We must create a level playing field that allows smaller companies that are driving innovation to participate at the same level as established companies.
  4. Consistency. Technologies should, where possible, look and feel consistent in terms of style, philosophy and approach.
  5. Strong technical identity. The foundation must achieve and maintain a high degree of technical autonomy and a strong sense of technical identity that is shared across the projects.
  6. Clear boundaries. It must be clear what the goals, and in some cases more importantly the non-goals, of the foundation are, to allow projects to effectively co-exist and to help the ecosystem understand where to focus for new innovation.

A novel governance model.

To make this work, we had to think outside the box a little in terms of the structure of the foundation. While we admire the work that has been done by many existing foundations, there were some key goals that we felt were not being achieved by many of them. Instead of instituting a traditional business governance board, we decided to try something new for this foundation:

  1. Hold strong technical opinions. Our ambition is to bring disruptive new technology to market, and help the world transition to a better way of operating. To do that we needed a highly empowered governance body that would not be tied to any single vendor’s interests. With that in mind we created the Technical Oversight Committee -- an elected group of 9 individuals who will drive the technical vision for the group, engage with sub-projects and resolve technical disputes. These individuals are to be selected from industry based on lifetime contributions in the field, and they answer to the community, not to any specific vendor’s interests.
  2. Be accountable to the end user. Too often we see foundations being driven by vendors without strong accountability to the end user. To counter this we are actively recruiting for an empowered End User Committee. The end user committee will represent the interests of the end user to the technical oversight committee (just as the business committee will represent the interests of vendors).

Our hope is that by creating these checks and balances, we will create a stable, focused community that drives innovation and legitimately moves the world forward with Cloud Native Computing technology.

How to get involved.

This accountability to the end users requires involvement from the broad community of companies who have been considering or struggling through the transition from traditional architectures to a cloud native architecture.

There are opportunities for participation through the CNCF’s Open Source projects. To get involved in the Kubernetes community, consider joining our Slack channel, taking a look at the Kubernetes project on GitHub, or joining the Kubernetes-dev Google group. As the Technical Oversight Committee brings in additional projects beyond Kubernetes, this avenue for participation will grow.

Additionally, the CNCF needs multiple perspectives to help guide our activities. Joining the CNCF as an End User Member will ensure your voice is heard.