This chapter covers the implementation of various compute solutions.
We are going to cover the following main topics:
- Traditional application deployment versus containerized deployment
- Kubernetes architecture
- Google Kubernetes Engine (GKE) architecture
We are very excited to introduce the concept of container orchestration, as containers are gaining traction every day. Kubernetes and its managed Google Cloud implementation, GKE, are among the most sophisticated and innovative products on the market. We hope you will enjoy the journey with us.
Kubernetes is an open source platform for managing containerized workloads, using a declarative approach to configuration and automation. The name Kubernetes originates from Greek and means helmsman or pilot.
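To make the declarative model concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the name `hello-app` and the image are placeholders chosen for illustration): you describe the desired state, in this case three replicas of a container, and Kubernetes continuously works to make the actual state match it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app          # hypothetical application name
spec:
  replicas: 3              # desired state: three identical Pods
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: nginx:1.25  # placeholder image
        ports:
        - containerPort: 80
```

Applying such a manifest with `kubectl apply -f deployment.yaml` hands the desired state to Kubernetes, which then reconciles toward it, for example by restarting failed Pods to keep three replicas running.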
Kubernetes is well known by its abbreviation, K8s: the K and the s are the first and last letters of the word, and 8 is the number of letters between them.
Google open sourced Kubernetes in 2014, drawing on more than 15 years of experience running containerized workloads. Kubernetes originated from Borg, Google's internal container orchestration tool.
To learn more about the history of Borg and Kubernetes, visit the following link: https://kubernetes.io/blog/2015/04/borg-predecessor-to-kubernetes/.
GKE is Google Cloud's managed container orchestration service, offering features such as autoscaling and multi-cluster support.
The following section discusses the differences between traditional, virtualized, and containerized deployments.
Traditional versus virtualized versus container deployment
This section is dedicated to understanding fundamental containerization concepts and how they differ from previous deployment options:
Figure 5.1 – Various application deployment types
Let’s do a short review of different deployment types.
For decades, we used servers to deploy various types of applications. We could deploy multiple applications on a single server or dedicate one server to each application. Unfortunately, this led to two problems:
- A single server running just one application was usually underutilized, and resources such as the server itself, power, cooling, and storage were wasted.
- A single server running multiple applications was better utilized, but overall performance often suffered from resource contention, as applications competed for CPU cycles, RAM, and network bandwidth.
These issues increased the number of servers we needed to deploy, manage, and operate.
Virtualization introduced many improvements over traditional deployment. It allowed multiple virtual machines (VMs) to run on a single server, each with its own operating system, securely isolated from one another and faster to deploy than physical servers. Virtualization increased server utilization, eased management, and reduced hardware costs.
Containers are similar to VMs, with one significant difference: a container doesn't require an entire operating system and all of its libraries to run even the smallest application.
Containers are lightweight packages that bundle an application with just the libraries it needs; they are portable and easy to update and manage.
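As an illustration, a container image is typically described in a short Dockerfile. The following is a hedged sketch (the base image, `requirements.txt`, and `app.py` are placeholder names for a hypothetical Python application), showing how only the application and its libraries are packaged on top of a slim base rather than a full operating system:

```dockerfile
# Slim base image instead of a full operating system
FROM python:3.12-slim
WORKDIR /app
# Copy only the application and its dependency list (placeholder names)
COPY requirements.txt app.py ./
# Install just the libraries the application needs
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]
```

Building this with `docker build` produces a small, portable image that runs identically on a laptop, a VM, or a GKE node.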
In the next section, we will focus on the GKE architecture, which containers are a part of.