  • Overview

    Kubernetes is an open-source container orchestration solution that automates the deployment, scaling, and administration of applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation.

  • Scope

    This article walks through the Kubernetes architecture, explaining the key components and their interactions. We will cover fundamental Kubernetes concepts such as Pods, Nodes, and the Control Plane, and explore how they work together to manage containerized applications. By the end of this article, readers will have a better understanding of the basic Kubernetes architecture and be able to visualize how their own applications can be deployed, scaled, and managed with Kubernetes.

Cloud Run is a serverless platform developed by Google. It is driven by Knative, a runtime environment that extends Kubernetes for serverless applications, and by the Functions Framework. Kubernetes itself is an open-source system that automates the deployment, scaling, and management of containerised applications; it is commonly referred to by the acronym K8s.

Managing such applications becomes increasingly tricky as they grow to span several containers hosted across multiple machines. To tame this complexity, Kubernetes provides an open-source application programming interface (API) that defines where and how containers will execute.

Kubernetes can orchestrate clusters of virtual machines and schedule containers to run on them based on each container's resource requirements and the compute capacity available in the cluster. Containers are organised into pods, the fundamental operational unit of Kubernetes, and pods scale up or down according to the desired state you specify.
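
As a minimal sketch of how this desired state and those resource requirements are expressed, the manifest below declares a single Pod whose container requests a quarter of a CPU core and 64 MiB of memory; the scheduler takes these requests into account when picking a node with enough free capacity. The names and image are illustrative placeholders, not values from this article.

```yaml
# pod-resources.yaml - a minimal Pod whose resource requests inform scheduling.
# All names and the image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-demo
  labels:
    app: web-demo
spec:
  containers:
    - name: web
      image: nginx:1.25        # any image pulled from an image registry
      resources:
        requests:              # the minimum the scheduler reserves on a node
          cpu: "250m"
          memory: "64Mi"
        limits:                # the upper bound enforced at runtime
          cpu: "500m"
          memory: "128Mi"
```

Applying such a manifest (for example with `kubectl apply -f pod-resources.yaml`) records the desired state with the control plane, which then schedules the Pod onto a suitable node.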

Additionally, Kubernetes manages service discovery, offers load balancing, tracks resource allocation, and scales workloads according to how much compute is actually being used. It also health-checks individual resources and enables applications to self-heal by automatically restarting or replicating container instances. In short, Kubernetes provides a framework for running and discovering loosely coupled services across a cluster.
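
To illustrate the self-healing behaviour described above, here is a hedged sketch of a Pod whose container carries a liveness probe; if the probe fails repeatedly, the kubelet restarts the container automatically. The endpoint, port, and timings are assumptions chosen for illustration only.

```yaml
# pod-liveness.yaml - the kubelet restarts the container when its probe keeps failing.
apiVersion: v1
kind: Pod
metadata:
  name: self-healing-demo
spec:
  restartPolicy: Always        # default policy: restart containers on failure
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /              # illustrative health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3    # restart after three consecutive failed checks
```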

A Kubernetes cluster consists of one or more control plane nodes and one or more compute nodes. The control plane directs the cluster as a whole, exposes the application programming interface (API), and schedules workloads onto compute nodes based on the desired configuration. Every compute node runs a container runtime, such as Docker, and an agent called the kubelet that communicates continuously with the control plane.

Why Kubernetes and not just containers directly?

- Where should the containers run?
- How are the containers kept running and health-checked?
- How do you find the containers that are running?
- Are all containers working well?
- Who has access to my containers?
- How is the workload scaled?
- How is networking between containers handled?

Architecture and Components

The Kubernetes architecture is based on the client-server model.

Figure: simplified Kubernetes architecture

The following items make up the essential parts of a Kubernetes cluster:

- Nodes: The nodes are the virtual machines (VMs) or physical servers that host containerised applications. Each node in a cluster can run one or more application instances. A cluster can have as few as a single node, but a typical Kubernetes cluster consists of numerous nodes, and deployments with hundreds or more nodes are not uncommon.

- Image Registry: Container images are stored in a registry; nodes pull those images from the registry, as directed by the control plane, to run them in container pods.

- Pods: The most fundamental execution unit of a Kubernetes application is the pod, which encapsulates one or more application containers. Each pod has its own IP address and the compute and storage resources it needs to run, and it offers a variety of configuration options. A Pod typically groups one or more containers belonging to the same application or business function that share a set of resources and data with one another, as sketched below.
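
To make the shared-resources point concrete, the sketch below shows a two-container Pod whose containers share an emptyDir volume (and, implicitly, the Pod's IP address, so they can reach each other over localhost). The container names, images, and commands are assumptions chosen purely for illustration.

```yaml
# pod-shared.yaml - two containers in one Pod sharing storage and a network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}             # scratch space that lives as long as the Pod does
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "while true; do cat /data/out.log; sleep 10; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```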

Kubernetes comprises master nodes and worker nodes

Kubernetes Control Plane (Master Node) Architecture

A Kubernetes cluster may be thought of as an airplane that's being piloted by a Kubernetes control plane. 

- kube-apiserver: The kube-apiserver is the control plane component that exposes the Kubernetes API, the communications hub of the cluster. External requests made through the command line interface (CLI) or other user interfaces (UIs) are routed through the kube-apiserver, as is all traffic between the control plane and individual nodes.

- etcd: etcd is a key-value store that holds all cluster data. It is highly available and consistent, and it is accessed only through the API server. The configuration stored in etcd is generally written in YAML (a recursive acronym for "YAML Ain't Markup Language").

- kube-scheduler: When a new Pod is created, this component assigns it to a node for execution based on resource demands, policies, and 'affinity' criteria such as geolocation and interference with other workloads (see the affinity sketch at the end of this section).

- kube-controller-manager: Although a Kubernetes cluster may have multiple controller functions, these functions are all compiled into a single binary called kube-controller-manager.

The kube-controller-manager binary bundles the following controller functions:

- Node controller: It monitors each node's health and notifies the cluster when nodes come online or become unresponsive.

- Replication controller: The replication controller ensures that the correct number of pods exist for each replicated pod functioning in the cluster.

- Endpoint controller: The Endpoints controller is responsible for connecting Pods and Services so that the Endpoints object may be populated. 

- Service Account and Token controllers are responsible for assigning API access tokens and default accounts to newly created namespaces inside the cluster. 

- cloud-controller-manager: If the cluster is based in the cloud in any capacity, whether partially or entirely, the cloud controller manager will connect the cluster to the API provided by the cloud provider. The only controls that will be put into action are those that are specific to the cloud provider. The cloud controller manager is not present in clusters that run entirely on a company's own premises. More than one cloud controller manager can run in a cluster. This can be done to increase overall cloud performance or provide fault tolerance.

The following components make up the parts of the cloud controller manager:

- Node controller: The node controller checks with the cloud provider to determine whether a cloud-based node that has stopped responding still exists or has been deleted.

- Route controller: The route controller is responsible for establishing routes within the cloud provider's architecture.

- Service controller: The service controller is in charge of managing the load balancers for the cloud service provider.
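
To make the kube-scheduler's 'affinity' criteria mentioned above concrete, here is an illustrative manifest that requires Pods to land in a particular zone and prefers not to co-locate replicas of the same app on one node. The zone value and labels are assumptions about how a cluster's nodes might be labelled, not values taken from this article.

```yaml
# pod-affinity.yaml - scheduling constraints evaluated by kube-scheduler.
# The zone value and app label are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
  labels:
    app: affinity-demo
spec:
  affinity:
    nodeAffinity:              # hard requirement: only nodes in this zone
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["europe-west1-b"]
    podAntiAffinity:           # soft preference: spread replicas across nodes
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: affinity-demo
  containers:
    - name: web
      image: nginx:1.25
```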

Kubernetes Worker Node Architecture

Nodes are the machines on which Kubernetes places Pods for execution. Nodes may be virtual machines (VMs) or physical servers.

Node components include:

- kubelet: Every node runs an agent called the kubelet. It ensures that the containers described in PodSpecs are running and healthy.

- kube-proxy: kube-proxy is a network proxy that runs on each node. It maintains network rules on the nodes and enables communication between Pods and network sessions inside or outside the cluster, using the operating system's packet filtering layer where one is available.

- Container Runtime: The container runtime is the software that runs containerised applications. Docker is one of the most commonly used container runtimes, and Kubernetes supports any runtime that complies with the Container Runtime Interface (CRI); see the sketch after this list.
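
Because any CRI-compliant runtime can be plugged in, a cluster can expose an alternative runtime through a RuntimeClass object that Pods opt into. The sketch below assumes gVisor's runsc handler is configured on the nodes; the handler and names are assumptions, not part of a default installation.

```yaml
# runtimeclass.yaml - selecting a CRI runtime handler for a Pod.
# Assumes the 'runsc' (gVisor) handler is configured on the nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-demo
spec:
  runtimeClassName: sandboxed  # run this Pod with the alternative runtime
  containers:
    - name: web
      image: nginx:1.25
```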

What extra infrastructure components does Kubernetes contain?

- Deployments: Deployments are a way of deploying containerised applications as Pods. The desired state declared in a Deployment prompts controllers to change the current state of the cluster step by step until it reaches that desired state (see the combined example after this list).

- ReplicaSet: The ReplicaSet feature guarantees that a certain number of Pods with the same configuration are active at any time.

- Cluster DNS: This service is responsible for providing the necessary DNS records for running Kubernetes services.

- Service: Pods are volatile, meaning Kubernetes does not guarantee a given physical pod will be kept alive (for instance, the replication controller might kill and start a new set of pods). A service instead represents a logical group of pods and functions as a gateway, enabling (client) pods to submit requests to the service without needing to keep track of which physical pods make up the service.

- Volume: Comparable to a container volume in Docker, except a Kubernetes volume applies to a whole pod and is mounted on all containers within the pod. Kubernetes ensures the persistence of data across container restarts. Only when the pod is destroyed will the volume be eliminated. Additionally, a pod may be associated with multiple volumes (possibly of various types).

- Namespace: A namespace is a virtual cluster (a single physical cluster can host many virtual ones) intended for environments with many users spread across multiple teams or projects, keeping their workloads isolated from one another. Resource names must be unique within a namespace, and resources in one namespace cannot be addressed by name from another. A namespace may also be assigned a resource quota so that it does not consume more than its share of the physical cluster's total resources (see the quota sketch after this list).

- Container Resource Monitoring: Container resource monitoring collects metrics from running containers and records them in a centralised database.
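
Tying several of these objects together, the sketch below defines a Deployment that keeps three replicas running (through the ReplicaSet it creates) and a Service that gives those Pods a stable virtual address. Every name, label, image, and port is an illustrative placeholder.

```yaml
# web-app.yaml - a Deployment plus a Service fronting its Pods.
# All names, labels, images, and ports are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired state; a ReplicaSet keeps three Pods running
  selector:
    matchLabels:
      app: web
  template:                    # Pod template stamped out by the ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                   # routes traffic to any Pod carrying this label
  ports:
    - port: 80
      targetPort: 80
```

Client pods can then reach the application through the Service's DNS name (for example `web.default.svc.cluster.local`, assuming the default namespace and cluster domain), resolved by the cluster DNS, without tracking individual Pod IPs.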
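
And as a brief sketch of namespace isolation with a resource quota, the manifest below creates a namespace and caps the Pods, CPU, and memory it may request; the namespace name and figures are arbitrary examples.

```yaml
# team-namespace.yaml - a namespace capped by a resource quota.
# The namespace name and quota figures are arbitrary examples.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"                 # at most 20 Pods in this namespace
    requests.cpu: "4"          # total CPU requests capped at 4 cores
    requests.memory: 8Gi       # total memory requests capped at 8 GiB
```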
