How to create Kubernetes clusters

Kubernetes is a popular system for orchestrating containers. One of its most important concepts is the cluster: every Kubernetes deployment runs inside a cluster.


What is a Kubernetes cluster?

In general, a cluster is a group of computers that acts as a single system. In Kubernetes, multiple nodes are combined into a cluster; each node is either a virtual or a physical machine.

Applications are executed inside Kubernetes clusters, which makes the cluster the highest level in the Kubernetes hierarchy.

Possible applications of Kubernetes clusters

Clusters are essential to making real use of Kubernetes. They allow you to deploy applications without tying them to individual machines: their main purpose is to abstract the underlying hardware so that containers can run across machines. Because clusters are not tied to a specific operating system, they are also highly flexible.

Clusters are also well suited to microservices. Applications deployed in a cluster can communicate with each other via Kubernetes networking. This deployment model is highly scalable, so you can always adjust your resource usage as needed.

Clusters can also be used to run continuous integration or continuous delivery jobs.



What constitutes a Kubernetes cluster?

A Kubernetes cluster consists of a control plane (often called the master node) and one or more worker nodes.

Master node

The master node is the foundation of the entire cluster and is responsible for its administration. It maintains the state of the cluster, for example by determining which application runs and when. The control plane is divided into several components:

  • API server
  • Scheduler
  • Controller manager
  • Etcd

API server

The API server is the front end of the control plane and coordinates all communication with the Kubernetes cluster. The desired state of the cluster is defined through this interface. You can interact with the Kubernetes API either from the command line with kubectl or through a user interface such as the Kubernetes Dashboard.
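As a brief, hedged illustration (assuming kubectl is installed and configured for a running cluster), you can talk to the API server directly from the terminal:

```shell
# Sketch: querying the API server with kubectl.
# Skips gracefully if no cluster is reachable from this machine.
kubectl cluster-info >/dev/null 2>&1 || { echo "no cluster reachable"; exit 0; }

# Ask the API server for its version endpoint directly.
kubectl get --raw /version

# List the resource types the API server exposes.
kubectl api-resources | head
```

Every kubectl command ultimately goes through this API server, which is why it is the single point of coordination for the cluster.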


Scheduler

The Scheduler deploys containers based on the available resources. It ensures that every pod (a group of containers) is assigned to a node and can therefore be executed.

Controller Manager

The Controller Manager coordinates the various controllers, which are essentially control loops running as processes. Among other things, this ensures that appropriate action is taken if individual nodes fail. More generally, the Controller Manager continuously reconciles the current state of the cluster with the desired state.
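This reconciliation can be sketched with a small, hedged terminal example (the deployment name "web" and the nginx image are example values; a reachable cluster is assumed):

```shell
# Sketch of desired-state reconciliation.
# Skips gracefully if no cluster is reachable from this machine.
kubectl cluster-info >/dev/null 2>&1 || { echo "no cluster reachable"; exit 0; }

# Declare a desired state: a deployment with one replica.
kubectl create deployment web --image=nginx

# Raise the desired state to three replicas; the controller manager's
# control loops create the missing pods to match it.
kubectl scale deployment web --replicas=3

# Watch the current state converge on the desired state.
kubectl get pods

# Clean up the example deployment again.
kubectl delete deployment web
```

You never create the extra pods yourself; you only change the declared target, and the controllers do the rest.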


Etcd

etcd is a consistent key-value store that holds all important cluster data, such as configuration and state. Because it is the authoritative record of the cluster, backing up etcd effectively backs up the cluster.
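As a hedged sketch of what such a backup looks like in practice (the endpoint and certificate paths are typical for a kubeadm-provisioned control-plane node and may differ in your setup; the snapshot path is an example):

```shell
# Sketch: backing up cluster state with an etcd snapshot.
# Skips gracefully if etcdctl or the kubeadm certificates are not present.
command -v etcdctl >/dev/null 2>&1 && [ -f /etc/kubernetes/pki/etcd/ca.crt ] \
  || { echo "etcd not accessible on this machine"; exit 0; }

# Write a point-in-time snapshot of the entire cluster state to disk.
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```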

Worker nodes

Each cluster has at least one, and often several, worker nodes. These execute the tasks and applications assigned to them by the control plane. Each worker node includes two key components:

  • Kubelet
  • Kube-proxy


Kubelet

The kubelet is the agent on each worker node that ensures the containers described in pod specifications are actually running. To do this, it interacts with the node's container runtime, such as containerd or Docker Engine.


Kube-proxy

The kube-proxy maintains network rules on each node and handles connection forwarding, so that network traffic reaches the correct pods.

Creating Kubernetes clusters

A Kubernetes cluster can be deployed on either virtual or physical machines. There are several ways to create your own cluster.


You can learn how to install Kubernetes and work with clusters in Kubernetes in detail in our Kubernetes tutorial.


To create a simple single-node cluster, you can use Minikube. Minikube is a tool for running Kubernetes locally on your own machine, and it can be installed on all major operating systems. To check whether your Minikube installation was successful, enter the following command in the terminal:

minikube version

Use the following statement to start Minikube:

minikube start

After you run this command, Minikube starts a virtual machine in which a cluster runs automatically. To interact with Kubernetes, use the command-line tool kubectl. To find out whether it is installed, run the following terminal command:

kubectl version

You can display the details of your cluster with the following command:

kubectl cluster-info

You can also list the individual nodes on which your applications can run directly in the terminal:

kubectl get nodes
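To see the cluster actually doing something, a hedged sketch of a first deployment follows (the name "hello-kubernetes" and the nginx image are example values; a running Minikube cluster and a configured kubectl are assumed):

```shell
# Sketch: deploying a first application to the local cluster.
# Skips gracefully if no cluster is reachable from this machine.
kubectl cluster-info >/dev/null 2>&1 || { echo "no cluster reachable"; exit 0; }

# Create a deployment; Kubernetes schedules its pod onto a node.
kubectl create deployment hello-kubernetes --image=nginx

# Expose it inside the cluster on port 80.
kubectl expose deployment hello-kubernetes --port=80

# Check that the pod is running and the service exists.
kubectl get pods
kubectl get services

# Remove the example resources again.
kubectl delete service hello-kubernetes
kubectl delete deployment hello-kubernetes
```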


If you want to create a Kubernetes cluster with more than one node, you can use the tool kind. kind is also available for all major operating systems, and the easiest way to install it is via a package manager. The examples shown here use the Chocolatey package manager (choco) for Windows:

choco install kind

For a cluster with multiple worker nodes, now create a YAML configuration file in any directory. This file defines the structure of your cluster. For example, a configuration file for a Kubernetes cluster with one control-plane node and two worker nodes might look like this:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
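If you prefer to create this file from the terminal, a here-document works; the file name examplefile.yaml matches the create command used in this article:

```shell
# Sketch: writing the cluster configuration from the terminal.
# "examplefile.yaml" is the example file name used in this article.
cat > examplefile.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# Quick sanity check: the file should define two worker nodes.
grep -c 'role: worker' examplefile.yaml   # prints 2
```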

Next, you can create a Kubernetes cluster according to your chosen configuration with the following command:

kind create cluster --config examplefile.yaml
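To confirm the cluster came up as configured, a hedged verification sketch follows (kind names clusters "kind" by default, so the matching kubectl context is "kind-kind"; kind and kubectl are assumed to be installed):

```shell
# Sketch: verifying the newly created kind cluster.
# Skips gracefully if kind or a running "kind" cluster is not available.
command -v kind >/dev/null 2>&1 || { echo "kind not installed"; exit 0; }
kind get clusters 2>/dev/null | grep -qx kind || { echo "no kind cluster running"; exit 0; }

# The node list should show one control-plane and two worker nodes.
kubectl get nodes --context kind-kind

# Remove the cluster again when it is no longer needed:
# kind delete cluster
```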