Kubernetes (Greek word for “governor”, “helmsman”, or “captain”) was developed by Google and heavily influenced by the Borg system. The original code name for Kubernetes within Google was Project Seven of Nine, a reference to a Star Trek character. It is written in Go (also known as Golang).
Kubernetes v1.0 was released on July 21, 2015. Alongside the release, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF) and offered Kubernetes as its seed technology.
What is it?
Before getting to know Kubernetes, we must first understand what containers are.
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. By some industry estimates, around 83% of containerized workloads run on Docker. In the image below, each app is self-sufficient in handling requests.
Kubernetes provides orchestration software for deploying, managing, and scaling containers.
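To give a taste of what that looks like in practice, here is a sketch of a minimal Kubernetes Deployment manifest. All names and the image tag are illustrative placeholders, not part of any real application:

```yaml
# A minimal Deployment: ask Kubernetes to keep 3 replicas
# of a container image running at all times.
# "my-app" and "my-app:1.0" are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 8080
```

You declare the desired state (3 replicas), and Kubernetes continuously works to make the cluster match it.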
Why do we use it?
First of all, we should understand why we use containers; then we can move on to the next question: why Kubernetes?
Suppose you want a separate environment for each of your applications. Your first approach might be to use virtual machines (VMs) on a system or server, something like the image below:
What we see here is that the load on the server increases because each VM carries its own guest OS, whereas a container is lightweight: it shares the host OS. So containers are faster, more reliable, more efficient, more scalable, and of course lightweight.
But problems arise when scaling containers up:
- Communication with each other: Scaling itself is not hard, but the containers have to be configured manually to communicate, and they are of little use if they cannot talk to each other.
- Deployed appropriately: If containers are deployed in random places (one in one particular cloud and another somewhere else), management becomes a headache, such as configuring IPs and ports by hand.
- Autoscaling: A container runtime on its own does not support auto-scaling.
- Distributing traffic: Load balancing across containers is still a challenge.
So, to overcome the above challenges, Kubernetes comes into the picture. It provides tooling that automates container deployment, scales containers up and down according to traffic, and load-balances across them:
- Automatic Bin Packing: It packages your application and automatically places containers based on their requirements and the resources available.
- Service Discovery & Load Balancing: It automatically assigns IP addresses and ports to containers and load-balances traffic across them.
- Storage Orchestration: You can mount the storage system of your choice, whether local storage or cloud storage such as an AWS volume.
- Self-Healing: If any of your containers fails, Kubernetes automatically restarts it or creates a new container to replace the crashed one.
- Horizontal Scaling: It can scale the number of running containers up or down, manually or automatically.
- Automated Rollouts & Rollbacks: It rolls out changes gradually and rolls them back if something goes wrong.
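As an example of the service discovery and load balancing feature above, a Service object gives a stable name and IP to a group of containers and spreads traffic across them. This is a sketch; the `my-app` label and the port numbers are illustrative assumptions:

```yaml
# A Service that load-balances traffic across all containers
# labeled app: my-app. The names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app
  ports:
  - port: 80          # port clients connect to
    targetPort: 8080  # port the containers listen on
```

Other containers in the cluster can then reach the app simply by the DNS name `my-app-svc`, without knowing individual container IPs.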
You have probably heard of Pokémon Go (the hit augmented-reality game released in 2016 for Android and iOS devices), which has 500M+ downloads and 20M+ active users.
Initially, it launched only in North America, Australia, and New Zealand. The team expected to handle at most 5 times their estimated traffic before the servers would crash, but after release the traffic was roughly 50 times what they had predicted.
Thus Kubernetes came to the rescue. How? As we know, Kubernetes can automate both vertical and horizontal scaling with ease.
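The horizontal-scaling side of that story can be sketched with a HorizontalPodAutoscaler, which grows or shrinks the replica count of a Deployment based on observed load. The target name, replica bounds, and CPU threshold below are illustrative assumptions, not the actual Pokémon Go configuration:

```yaml
# An autoscaler that keeps average CPU utilization near 70%
# by scaling the hypothetical "my-app" Deployment between
# 3 and 50 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

With a policy like this in place, a 50x traffic spike triggers new replicas automatically instead of crashing the existing ones.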