Three years ago, Docker started a chain reaction that is leading the entire IT industry to rethink the way we use computers and build software.
As part of this change, the industry is shifting from running calculations on separate computers and combining the results afterwards to building applications architected to run natively as distributed systems across multiple computers.
Docker itself was one of the enablers of this change, as it allowed us to package a computation task as an image and run it on any platform without complex environment configuration.
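As a sketch of that idea, a computation task can be packaged with a short Dockerfile; the script name and base image below are illustrative, not taken from any particular project:

```dockerfile
# Start from a minimal Python runtime
FROM python:3-slim

# Copy the computation task into the image
WORKDIR /app
COPY compute.py /app/compute.py

# The image now runs the same way on any Docker host
CMD ["python", "compute.py"]
```

Once built (for example with `docker build -t compute-task .`), the resulting image runs anywhere Docker does, with no further environment configuration.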
The other techniques required to create distributed applications are microservices and continuous delivery. Together, these three concepts help us split our applications into small pieces (microservices) that can be continuously delivered to target platforms, such as clouds or on-premise servers, inside protective bubbles (containers).
Of course, during the last three years we have realised that Docker containers on their own are not sufficient. To continuously deploy microservices, we need to orchestrate container-based systems, provide the internal plumbing such as networking and storage, and improve the security of all the technologies involved.
As often happens during an initial exploration period, there is a plethora of tools in each segment of the new infrastructure ecosystem: for example, Mesos, Kubernetes and Docker Swarm for scheduling; Weave and Project Calico for networking; ClusterHQ for data; and GuardiCore, Twistlock and Scalock for security, with many more alongside them.
Now is the time to begin standardizing these technologies so they can be used effectively in the real world.
To achieve that, we need to define the requirements for new cloud native applications and help connect all these tools and technologies into a consistent environment that is stable, reliable and, above all, easy to use as part of the normal software development and maintenance process.
This is exactly the goal of the Linux Foundation's collaborative project, the Cloud Native Computing Foundation. The mission of the foundation is defined as:
The Cloud Native Computing Foundation’s mission is to create and drive the adoption of a new computing paradigm that is optimized for modern distributed systems environments. The participants believe that systems architected in this way will be:
- Container packaged: Running in application containers as a unit of application deployment and as a mechanism to achieve high levels of resource isolation in order to improve the overall developer experience, foster code reuse and simplify operations.
- Dynamically managed: Actively scheduled and managed by a central orchestrating process to radically improve machine efficiency, while reducing the cost associated with maintenance and operations.
- Micro-services oriented: Loosely coupled with dependencies explicitly described through service endpoints with the goal of significantly increasing the overall agility and maintainability of applications.
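The three properties above can be illustrated with a minimal Kubernetes Deployment manifest; the service name, labels and image below are hypothetical, chosen only to show how the pieces line up:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service              # hypothetical microservice name
spec:
  replicas: 3                       # dynamically managed: the scheduler keeps 3 copies running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example/orders:1.0   # container packaged: the image is the unit of deployment
        ports:
        - containerPort: 8080       # micro-services oriented: the dependency is a service endpoint
```

Here the container image is the unit of deployment, the orchestrator actively manages the declared replicas, and the exposed port is the explicit service endpoint other microservices depend on.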
The foundation already includes most of the important players in the field of programmable infrastructure, such as Google, Cisco, Docker, Mesosphere and CoreOS, among many others. All the companies supporting the foundation will now work together to define what a Cloud Native Application means and to create the technical base for running such applications with a consistent set of tools and technologies that fit together easily.
Having worked with many of these tools since their inception, we at Container Solutions are incredibly excited to witness, and take part in, this next evolution in the history of computing.