This post covers the material from our Kubernetes 101 webinar, which focused on the key components of modern cloud infrastructure: containers, Docker and Kubernetes. Given the incredible impact these technologies have had, and will continue to have, on the industry, it is important that we share more information about them. The post closes with a look at some of the juggernauts currently using Kubernetes.
We are going to start by talking in general about some of the recent changes in development practices that have led to the rise of containers. We will then look at the differences between containers and virtual machines.
We are then going to look at some of the use cases we see from our customers, how they use containers and some of the problems that people experience. We are also going to look at Kubernetes and how it is changing the way people use containers and the cloud. We will finish up with a demonstration and, if we have some time left over, we will go through any questions that have been asked and provide answers to those.
Firstly, I want to give you a bit of background about Mobilise. Mobilise is a leading provider of public cloud enablement services. This means we help customers in their journeys to the cloud, whether that is designing, implementing, migrating or optimising, and we provide managed services to help ensure that our customers get the most out of their spend with public cloud providers.
Mobilise is also a Certified Kubernetes Service Provider. We are one of only a handful in the UK. As a company, our team is UK based and provides all services from within the UK. As an organisation, we are ISO27001 & 9001 accredited. This is to give our customers the assurance that our processes are robust and in line with industry expectations.
Let’s have a look at a brief history of how the industry arrived at Kubernetes. In the past, applications were built as large monolithic systems deployed to servers in data centres. This made the applications hard to support and maintain, as well as difficult to reuse and develop, as the internal components of the application were so tightly coupled. Testing took an extremely long time, because a change to one component meant regression testing the whole application. It forced a slow build and release cycle.
If a customer wished to have a new change implemented quickly, to adjust to a new trend in the market, this often proved difficult and expensive. The move to virtual machines meant that businesses could downsize their infrastructure and reduce data centre costs. It didn’t, however, solve the application problem. Since then, a new way of designing applications called microservices has ushered in a large change in the way applications are architected and deployed.
The idea is that instead of having one large application, the individual components are split out, de-coupled and deployed independently. This speeds up development, allowing individual components to be tested far easier and supported with a greater understanding. No longer do application support staff have to understand how the whole application works; just the parts they are responsible for. Developers can easily reuse components in their new applications. Testing can be carried out against specific parts of the application.
Containers are a natural fit for microservices, as they are lightweight just like the microservices they run. They enable businesses to easily take advantage of cloud practices such as auto-scaling (replicating our applications to meet customer demand), just-in-time execution (shutting down the application when it is not being used) and maximising resources through scheduling.
The idea of containerisation has been around for a long time. However, it wasn’t until 2013, when Docker emerged, that containers exploded in popularity. To date, over 3.5 million applications have been placed in containers using Docker technology and over 37 billion containerised applications have been downloaded. The application container market is expected to grow rapidly, from annual revenue of $750 million to $3.4 billion by 2021.
What are containers?
Containers are a method of operating-system virtualisation that allows you to run an application and all of its dependencies in a resource-isolated package. You are probably thinking that you’ve been able to do this for years with virtual machines, so what is so different about containers? What makes them so new?
Virtual machines work by packaging the operating system and applications together. When you have several virtual machines on the same server it creates a lot of overhead and limits the number of applications that you can deploy. Containers, however, work differently. They only contain the applications, libraries and binaries that they depend on. The only operating system that is on the same server is the host operating system.
Docker utilises this to run containers directly on the host machine, which means we can fit more applications onto a server running containers than onto one running virtual machines. As an example, if we want to run two copies of Application A and one of Application B, we would need three virtual machines, complete with their operating systems. That makes a total of four operating systems, including the host OS.
In the container example, you can run 4 copies of application A and two copies of B on just one operating system and still consume fewer resources than the virtual machine example.
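To make the packaging idea concrete, a container image is described by a Dockerfile that bundles only the application and its dependencies, not an operating system. This is a minimal sketch for a hypothetical Python application; the file names and base image tag are assumptions, not something from the webinar:

```dockerfile
# Start from a slim base image rather than a full OS installation.
FROM python:3.11-slim

WORKDIR /app

# Install only the libraries the application depends on.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code itself.
COPY app.py .

# The container runs just this one process; no OS boot is needed.
CMD ["python", "app.py"]
```

Building and running this (`docker build -t app-a .` then `docker run app-a`) starts only the application process on the host's kernel, which is why containers start so much faster than virtual machines.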
Benefits of VMs
This is not to say that we have completely written off virtual machines; they still have their place. There are still some benefits to using virtual machines, such as having the full resources of a dedicated operating system available when an application requires them. There is also a mature ecosystem of tools for management and security, and you benefit from widespread industry understanding of how to implement and manage virtual machines.
Benefits of Containers
Having said that, containers also provide a number of benefits that are quite compelling. These include things such as reduced IT management. With fewer servers deployed, you will not need as many system administrators managing the server estate. There is also a reduction in the size of your snapshots. This means that there is less cost incurred through storage and transfer of those. There is a much faster startup time on applications as well.
Because containers are much more lightweight and do not have to boot an OS every time, they generally start in milliseconds, whereas virtual machines can take tens of seconds to load. Containers also reduce and simplify your security updates. Virtual machines require you to support a number of operating systems and apply security patches and updates to each over time; with containers, you only have security updates to apply on the host OS.
It is great to see all of these benefits on paper, but what do they actually mean in the real world? Let us consider how we see people using containers. Here are some common use cases.
Distributed Application and Microservices
This is one that we have already touched on: breaking large applications down into smaller, de-coupled microservices for easier deployment, support, maintenance, coding practices and repeatability.
You can package your jobs into containers to maximise available resources. You can even spin up several instances of the same job and run them in parallel, isolated containers. Once finished, they terminate themselves and free up resources for other containers to use.
Continuous Integration and Delivery
Build a CI/CD pipeline based on containers, which packages applications and deploys them as containers. You always have the deployment toolset at a known version and configuration.
As mentioned, this allows for the reduction of infrastructure footprint. Fewer servers, less spend.
Lift and shift containers from one cloud provider to another, or to your own premises. You can also bring the two together in a hybrid cloud solution.
Easily scale out copies of the containers to provide high availability.
No more issues of the application working on one machine but failing in another environment. Due to the nature of containers, you can have images of your application that work in all environments, from development through to production.
If developers are having problems, they can simply download the application to their own machines and work on it collaboratively.
Consolidate your server estate onto Dockerised servers. There is less to support and less to manage.
Problems with running containers
All of these advantages sound great, but it is not all smooth sailing. There are certain issues that you will encounter when running containers, and many are the same sort of issues you would encounter with virtual machines. It is difficult to keep track of multiple containers running at the same time without a visualisation or abstraction layer to manage them.
Imagine having a Dockerised server with 50-plus containers deployed to it. How do we automatically scale each of our containers to meet demand? How do we keep track of the health of our containers? How do we make sure that we remove unused or orphaned containers? And how do we route traffic from outside of our system into the correct container?
How Can We Fix This?
A lot of the problems mentioned above have now been solved through container orchestration, the most popular product for which is Kubernetes. So, what is Kubernetes? Kubernetes provides a platform for the orchestration of containers, enabling us to maximise server resources by scheduling the running of pods. A pod is the Kubernetes term for a group of containers that are deployed together. Usually, the same application is duplicated across pods.
A Kubernetes platform is made up of a master node, which handles all of the administration, and several worker nodes, which run the pods. We can increase the number of worker nodes based on how many pods we have running. Scheduling means that we can squeeze as many pods as possible into each worker node, allowing us to maximise the use of resources and possibly shut down unused worker nodes. Through the Kubernetes dashboard, your container estate can now be visualised, allowing administrators to better manage and support their systems.
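As a sketch of what this looks like in practice, here is a small Deployment manifest. The application name and image are hypothetical; the point is that you declare how many pod replicas you want and what resources each needs, and the Kubernetes scheduler packs them onto worker nodes for you:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp                 # hypothetical application name
spec:
  replicas: 3                  # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: example.com/webapp:1.0   # hypothetical container image
          resources:
            requests:          # the scheduler uses these figures to pack
              cpu: 100m        # pods efficiently onto worker nodes
              memory: 128Mi
```

If a pod or its worker node fails, Kubernetes notices the replica count has dropped below three and starts a replacement elsewhere.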
Kubernetes will manage all of the networking and reconcile your desired state, so your microservices can easily talk to each other through Services, a feature of Kubernetes which acts as a virtual load balancer.
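To sketch that idea, a Service gives a group of pods a single stable name and address, spreading traffic across them. This hypothetical manifest load-balances across all pods carrying an assumed `app: webapp` label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service   # other microservices reach the pods via this stable name
spec:
  selector:
    app: webapp          # traffic is spread across all pods with this label
  ports:
    - port: 80           # port exposed by the Service inside the cluster
      targetPort: 8080   # hypothetical port the containers listen on
```

Pods come and go as they are rescheduled, but callers only ever address `webapp-service`, so nothing upstream needs to know where the pods are running.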
Allowing Kubernetes to manage all of the networking, resourcing and deployment means that we can focus on what really matters: creating and developing new services for our customers. What about the history of Kubernetes? Its origins can be traced back to the early 2000s and Google’s internal Borg project. Borg is a cluster manager that manages hundreds of thousands of jobs for many thousands of different applications across a number of clusters, each with hundreds of different machines.
As Google’s own requirements shifted, the technology was updated to reflect the change. In 2014, Google launched an open source project called Kubernetes. It is now supported by over 830 contributors, who have collectively put in 237 years of coding effort to date. It recently became the first project to graduate from the Cloud Native Computing Foundation, which signifies its maturity and resilience in managing containers at scale.
Who is Using Kubernetes?
It is great to see that Kubernetes has been so widely supported. As we know, technologies such as Kubernetes and Docker are quite young in comparison to traditional virtualisation tools, so perhaps we should take a look at some of the organisations that are already trusting Kubernetes with their workloads. You can see this on the Kubernetes website: IBM, Goldman Sachs, ING, CapitalOne. All of these organisations are already running workloads on containers and Kubernetes to deliver additional efficiency within their estates.
Why use Kubernetes?
Kubernetes eliminates infrastructure vendor lock-in by providing core capabilities for containers without imposing restrictions on how applications are built. It achieves this through a combination of features within the Kubernetes platform, including pods and Services. Kubernetes can be deployed to all major public cloud providers as well as installed on-premises, and some of the major cloud providers even offer a managed Kubernetes service.
The modular approach enables faster development by smaller, more focused teams that are each responsible for specific containers. Concepts like namespaces allow us to segregate applications within the cluster, providing container isolation and security between applications. Deployment features include horizontal auto-scaling (duplicating our app based on container resource usage), rolling updates (which provide near-zero downtime) and canary deployments (which let you test a new version in production, slowly scaling it up and checking the results while scaling down the previous deployment).
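As an example of the first of those features, a HorizontalPodAutoscaler duplicates pods when resource usage rises. This is a minimal sketch targeting a hypothetical Deployment named `webapp`; the thresholds are illustrative, not recommendations:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp           # hypothetical Deployment to scale
  minReplicas: 2           # always keep at least two pods for availability
  maxReplicas: 10          # never scale beyond ten pods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU use exceeds 70%
```

When customer demand drops again, the autoscaler scales the pods back down towards the minimum, freeing resources for other workloads on the cluster.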
Kubernetes marks a breakthrough for DevOps because it allows teams to keep pace with the requirements of modern software development. In the absence of Kubernetes, teams are often forced to script their own software deployment, scaling and updating; some organisations employ large teams just to handle these tasks. Kubernetes allows us to derive maximum utility from containers and build cloud-native applications that can run anywhere, independent of cloud-specific requirements. Businesses can now easily get their applications into a production environment, with the flexibility needed to deploy beta or prototype pieces of work.
To tie together all of the information we have covered, a demonstration of the platform in action is the perfect way to finish. This will show you how things work on Kubernetes.
To learn how to secure Kubernetes clusters, click the link below.