How can you create and use a Kubernetes cluster?
A Kubernetes cluster is a system consisting of at least one control node (master) and one or more worker nodes on which containerized applications run. The cluster automatically manages the deployment, scaling, and resilience of these containers, allowing applications to run reliably and efficiently in distributed environments.
What is a Kubernetes cluster?
In general, a cluster is a group of computers that appears as a single unified system from the outside. In Kubernetes, it is nodes rather than individual physical computers that are combined into a cluster; these nodes can be either physical or virtual machines.
The individual applications run on these clusters, which makes the cluster the highest level in the Kubernetes hierarchy.
What are use cases for Kubernetes clusters?
Clusters are essential to leveraging the benefits of Kubernetes: only within a cluster can you deploy applications without binding them to specific machines. Clusters abstract your containers away from individual computers so that they can run across machines, and because they are not tied to a specific operating system, they are highly portable.
Typical application areas include:
- Deployment of complete applications: These span multiple containers and are independent of the underlying hardware. This allows updates or new features to be rolled out quickly without requiring changes to individual servers, so your applications run consistently in every environment (a minimal example manifest follows this list).
- Operation in microservice architectures: Here, the individual services communicate with each other while remaining highly scalable. Each microservice can be developed, updated, and scaled independently, which significantly increases the agility and fault tolerance of the overall application.
- Continuous integration (CI) / continuous delivery (CD): CI/CD pipelines automate build, test, and deployment processes. This shortens development cycles, reduces manual sources of error, and ensures that new features and patches reach production faster.
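To make the first point concrete, the manifest below is a minimal sketch of such a deployment; the name web and the nginx image are placeholders, not part of any specific setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # three identical pods; the scheduler decides which nodes run them
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx   # placeholder image; use your own application's image
          ports:
            - containerPort: 80
Applying it with kubectl apply -f deployment.yaml asks the cluster to keep three identical pods running on whichever nodes have free resources; the file never references a specific machine.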
What is a Kubernetes cluster made of?
A Kubernetes cluster consists of a control plane, also known as the master node, and one or more worker nodes.
Master node
The master node forms the foundation of the Kubernetes cluster. It manages the cluster as a whole, keeping track of its state and deciding which applications run where and when. The control plane is divided into several components:
- API server: The API server acts as the frontend of the control plane and handles all communication with the Kubernetes cluster. It exposes the cluster’s state and provides the main access point to the Kubernetes API, which can be used from the command line with kubectl or through a graphical user interface, for example the Google Cloud Console.
- Scheduler: The scheduler assigns pods (groups of containers) to nodes based on the available resources. It ensures that every pod is placed on a suitable node and can run as intended.
- Controller manager: The controller manager runs the various controllers, which are essentially background processes. It triggers the necessary actions when individual nodes fail and continuously works to align the cluster’s current state with the desired state.
- etcd: etcd is the consistent key-value store that holds all critical cluster data. As part of the control plane, it serves as Kubernetes’ backing store, keeping configuration and state information consistent across the system. All of these components can be observed on a running cluster, as shown below.
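On a local test cluster (such as one created with Minikube or kind, as described further below), these control plane components typically run as pods in the kube-system namespace, so you can list them with a single command:
kubectl get pods -n kube-system
The output usually includes kube-apiserver, kube-scheduler, kube-controller-manager, and etcd, each suffixed with the name of the node it runs on.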
Worker nodes
Each Kubernetes cluster has at least one, but in most cases several, worker nodes. These nodes execute the tasks and applications assigned to them by the control plane. A worker node consists of two main components:
- Kubelet: The kubelet is the worker node component that ensures the containers in each pod are actually running. To do this, it communicates with the container runtime in use, the program that creates and manages containers.
- Kube-proxy: Kube-proxy maintains the network rules on the node and handles connection forwarding so that traffic reaches the right pods. Both components can be inspected per node, as shown below.
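Assuming kubectl is already pointed at a running cluster, you can see which kubelet version and which container runtime each node uses:
kubectl get nodes -o wide
The CONTAINER-RUNTIME column shows the container engine the kubelet talks to, for example containerd or CRI-O.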
How to create a Kubernetes cluster
A Kubernetes cluster can be deployed on either virtual or physical machines. There are various ways to create your own clusters.
Setting up a Kubernetes cluster with Minikube
To create a simple single-node cluster, you can use Minikube, a tool that runs Kubernetes locally on your own computer. Minikube is available for all common operating systems and is covered extensively in many Kubernetes tutorials. To check whether your installation was successful, enter the following command in the terminal:
minikube version
You start Minikube with the following command:
minikube start
After executing this command, Minikube starts a virtual machine in which a cluster runs automatically. To interact with Kubernetes, you use its command-line tool kubectl. To check whether it is installed, use the following terminal command:
kubectl version
You can view the details of your cluster with the following command:
kubectl cluster-info
Directly in the terminal, you can also list the individual nodes on which your applications can run:
kubectl get nodes
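With the cluster running, you can already deploy a small test application to it. The following commands are a minimal sketch; the deployment name hello-web and the nginx image are placeholders for your own application:
kubectl create deployment hello-web --image=nginx
kubectl expose deployment hello-web --type=NodePort --port=80
minikube service hello-web
The first command creates a Deployment from the image, the second exposes it as a NodePort service on port 80, and the third lets Minikube open the resulting service URL for you.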
Creating a Kubernetes cluster with kind
If you want to create a Kubernetes cluster with more than one node, you can use the tool kind, which is likewise available for all common operating systems. The easiest way to install it is through a package manager; in the example shown here, Chocolatey (choco) is used on Windows:
choco install kind
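On macOS, the same step can be done with Homebrew; on Linux, kind can usually be installed via the distribution's package manager or as a downloaded release binary (the exact package source varies by system):
brew install kind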
For a cluster with multiple worker nodes, you now create a YAML configuration file in any directory. In this file, you define the structure of your cluster. A configuration file for a Kubernetes cluster with one control plane node (master) and two worker nodes might look like this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
You can then create the Kubernetes cluster from this configuration with the following command, replacing example-file.yaml with the name of your file:
kind create cluster --config example-file.yaml
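Afterwards you can check that the cluster matches the configuration. Assuming you kept kind's default cluster name (kind), the corresponding kubectl context is called kind-kind:
kubectl cluster-info --context kind-kind
kubectl get nodes
The second command should now list one control-plane node and two worker nodes, exactly as defined in the configuration file.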