Ceph is a distributed storage system that integrates easily with Proxmox and provides a highly available, fault-tolerant storage solution. This guide walks you through installing a Ceph cluster on your Proxmox server step by step.

Step 1: Check the prerequisites

Before you begin installing Ceph on Proxmox, make sure your environment meets the basic requirements. Ceph is a storage system that replicates data across multiple servers. To ensure this redundancy works reliably, you need at least three Proxmox nodes. This allows the system to keep running even if one node fails.

Make sure that the bare-metal installation of Proxmox is complete on every server and that each system is fully up to date. Each node should have its own unused hard drive dedicated solely to Ceph OSDs. These drives will provide the actual storage for your cluster. A fast, stable network connection between nodes is equally important to keep latency low. You also need root access on all hosts, since the installation makes system-level changes.

Use the following command to check which version of Proxmox is currently installed on your system:

```bash
pveversion
```

Compare the version numbers across all nodes. If they differ or if your installation is outdated, update Proxmox so all systems are on the same version:

```bash
apt update && apt full-upgrade -y
reboot
```

Once all nodes are updated and reachable, your environment is ready for the Ceph installation.
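As a quick sanity check before you continue, a small script like the following can confirm that every peer node answers over the cluster network. This is only a sketch: the IP addresses are placeholders, so substitute the addresses of your own nodes.

```shell
#!/bin/sh
# Pre-flight check: confirm every peer node answers ping.
# The addresses below are example values -- replace them with your node IPs.
NODES="10.0.0.11 10.0.0.12 10.0.0.13"
for node in $NODES; do
    if ping -c 1 -W 1 "$node" >/dev/null 2>&1; then
        echo "$node: reachable"
    else
        echo "$node: UNREACHABLE - check the network before installing Ceph"
    fi
done
```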

Note

Although not a strict requirement, you should use SSDs for production deployments of Proxmox and Ceph. Ceph benefits significantly from fast read and write speeds because it replicates every dataset multiple times and distributes the copies across several nodes.

Step 2: Activate the Ceph repository

To install Ceph through the package manager, you first need to enable the appropriate repository on each Proxmox node. This repository contains all required Ceph packages that Proxmox has adapted and tested. Log in as the root user on each host and run this command:

```bash
pveceph install
```

This command configures the Proxmox Ceph repository and installs the basic Ceph components. To activate the new package sources, update your package list:

```bash
apt update
```
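If you manage several nodes, you can run both commands on each host over SSH instead of logging in one by one. The following is only a sketch: the hostnames are placeholders, and the DRY_RUN=echo guard prints the commands instead of executing them; set it to empty once you are ready to run them for real.

```shell
#!/bin/sh
# Sketch: enable the Ceph repository on every node over SSH.
# Hostnames are placeholders; DRY_RUN=echo only prints the commands.
DRY_RUN=echo
for host in pve1 pve2 pve3; do
    $DRY_RUN ssh "root@$host" "pveceph install && apt update"
done
```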

Step 3: Initialize the Ceph configuration on the first node

In this step, you’ll prepare the actual Ceph cluster on your first Proxmox node and define the network that the cluster will use for internal communication. You’ll also set up the first monitor, a core component of Ceph. It tracks the cluster’s state, manages cluster members and ensures all components stay synchronized.

Start the initialization on the first Proxmox node with the following command:

```bash
pveceph init --network 10.0.0.0/24
```

The subnet 10.0.0.0/24 is only an example. Use the internal network your Proxmox nodes use to communicate directly with each other. The pveceph init command creates the basic Ceph configuration on your first node. This includes the main cluster configuration file, the Ceph keyring needed for internal authentication and the system directories for Ceph services.
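After initialization you can inspect the generated configuration at /etc/pve/ceph.conf. Its exact contents depend on your setup and Ceph version; a minimal file might look roughly like this (all values shown, including the fsid, are illustrative):

```ini
[global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 10.0.0.0/24
    public_network = 10.0.0.0/24
    ; example UUID -- pveceph init generates a unique one for your cluster
    fsid = 01234567-89ab-cdef-0123-456789abcdef
    mon_host = 10.0.0.11
```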

Once the initialization is complete, you can set up the first monitor service:

```bash
pveceph createmon
```

This command starts the monitor process and registers it in the cluster. At this point, you have a functional but still standalone node. The monitor immediately begins collecting status information, forming the foundation for communication with additional nodes.

Note

A typical Ceph cluster uses at least three monitors. This ensures the cluster can keep operating even if one monitor fails. With multiple monitors, Ceph can maintain a quorum, meaning a majority is available to make decisions about the cluster’s current state.

Step 4: Add more nodes to the cluster

To give Ceph the level of fault tolerance it’s designed for, you now need to add your remaining Proxmox nodes to the Ceph cluster. Each additional node increases both redundancy and storage capacity. Log in to the other nodes and run the following commands in sequence:

```bash
pveceph install
pveceph createmon
```

This sets up monitor services on the additional hosts. Once all monitors are active, you can check the cluster status from any node using the following command:

```bash
ceph -s
```

This will show you which monitors and services are currently running. If multiple monitors appear, all nodes have been added to the cluster.

Step 5: Create OSDs

OSDs (Object Storage Daemons) are the core of your Ceph cluster. Every hard drive you assign to Ceph is used to create a dedicated OSD. These daemons write data to the disks, replicate it across the cluster, and serve it back when requested by another node or a virtual machine. The more OSDs your cluster has, the higher its storage capacity and performance. Before you begin, check which drives are available on your node using this command:

```bash
lsblk
```

This lists all disks and partitions detected by the system. Use only unused drives for Ceph: select drives that do not contain the operating system and are not mounted. Once you’ve identified a suitable drive, in our case /dev/sdb, you can create an OSD on it:

```bash
pveceph createosd /dev/sdb
```

The drive is automatically formatted, and Ceph sets up the required structure. The OSD daemon then starts and joins the cluster. All existing data on the selected drive will be deleted, so double-check that the disk is truly intended for Ceph.

Repeat this process on all nodes and for each drive you want to add. Depending on your hardware and cluster size, it may take a few minutes for all OSDs to be fully integrated into the cluster.
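If a node has several spare disks, you can wrap the command in a loop. The sketch below uses placeholder device names and a DRY_RUN=echo guard that only prints the commands; clear the guard once you have double-checked every disk with lsblk.

```shell
#!/bin/sh
# Sketch: create one OSD per spare disk on this node.
# /dev/sdb and /dev/sdc are placeholders; DRY_RUN=echo prints instead of runs.
DRY_RUN=echo
for disk in /dev/sdb /dev/sdc; do
    $DRY_RUN pveceph createosd "$disk"
done
```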

Next, check that your newly created OSDs have been recognized and are running. Use:

```bash
ceph osd tree
```

The tree view makes it easy to see how your storage devices are distributed across the cluster and whether they are running without issues.

Step 6: Enable the Ceph Manager and dashboard

To easily monitor and manage your Ceph cluster, you need to install the Ceph Manager (MGR). This service collects performance data, keeps track of all active components and provides additional features through various modules. One of these features is the integrated web dashboard. You can install the manager service on your Proxmox node with this command:

```bash
pveceph createmgr
```

Once the manager is running, you can enable the dashboard module. The MGR service provides it automatically, so you only need to activate it. If Ceph reports that the module is not available, install the ceph-mgr-dashboard package first:

```bash
ceph mgr module enable dashboard
```

The dashboard provides a user-friendly interface for checking the cluster status and tracking OSD and monitor usage. It also highlights any alerts at a glance. Open it in your browser using the default port 8443:

https://<PROXMOX_IP>:8443

Replace <PROXMOX_IP> with the IP address of the Proxmox node where the Ceph Manager is installed.
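Depending on your Ceph release, the dashboard may require you to create a user account before the first login. The sketch below shows the general shape of that step; the username, password, and file path are placeholders, and the DRY_RUN=echo guard prints the command instead of running it.

```shell
#!/bin/sh
# Sketch: create an administrator account for the Ceph dashboard.
# Username, password and file path are placeholders; DRY_RUN=echo prints only.
DRY_RUN=echo
echo 'changeme-secret' > /tmp/dashboard-pass   # example password file
$DRY_RUN ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator
```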

Step 7: Create and test Ceph pools

Once your Ceph cluster is set up and all OSDs are active, you can create the actual storage area where your data will live. Ceph organizes data into pools. A pool is a logical unit that stores your files, disk images or container volumes. Each pool consists of many placement groups that distribute data across the OSDs to balance the load. With pools, you control how and where Ceph stores your data. For example, you can create one pool for virtual machines and another for backups or container images.

To create a new pool, run the following command on one of your Proxmox nodes:

```bash
pveceph pool create cephpool --size 3 --min_size 2 --pg_num 128
```

This command creates a pool named cephpool. The parameters define how Ceph handles your data:

  • --size 3 means each file is stored three times. This provides fault tolerance as two copies are still available if one OSD fails.
  • --min_size 2 requires at least two copies to be active for the pool to function. This prevents Ceph from operating with incomplete data.
  • --pg_num 128 sets the number of placement groups, the logical data containers Ceph uses to distribute data across the OSDs. The more OSDs you have, the higher this value can be. This allows Ceph to distribute data more evenly across the cluster.

Note

The number of placement groups you set when creating a Ceph pool cannot be reduced later. You can increase the number as your cluster grows, but lowering it is not supported as it can lead to data loss. So make sure you plan for enough PGs from the start. As a rule, 100 PGs per OSD in the pool is a good starting point for small to medium environments.
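The rule of thumb from the note can be turned into a small calculation. The sketch below assumes 9 OSDs and a replica size of 3 (both placeholder values) and rounds the result up to the next power of two, which is the usual recommendation for pg_num:

```shell
#!/bin/sh
# Estimate pg_num: ~100 PGs per OSD, divided by the replica count,
# rounded up to the next power of two. The input values are examples.
osds=9
size=3
target=$(( osds * 100 / size ))
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "suggested pg_num: $pg"
```

With these example values the target is 300, so the script suggests 512.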

After creating the pool, verify everything is working correctly:

```bash
ceph -s
```

This command shows you the current status of your Ceph cluster. If you see HEALTH_OK, your pool has been set up correctly and your cluster is running reliably.

Tip

A Ceph cluster provides redundancy but does not replace regular backups. Use a Proxmox Backup Server for reliable data protection in Proxmox environments. It supports deduplicated, encrypted backups of your VMs and containers, even when they are stored on Ceph.

Step 8: Connect Ceph storage to Proxmox

Once your Ceph cluster has been set up and the first pool created, you need to connect your Ceph storage to Proxmox so that your virtual machines and containers can use it. Proxmox uses the RBD protocol for this. The easiest way to connect the two is through the Proxmox web interface. Open the interface and go to Datacenter > Storage > Add > RBD (Ceph).

In the dialog that appears, enter the required settings for your Ceph cluster.

  • Under ID, enter a unique name for the new storage target.
  • In the Monitors field, enter the IP addresses of your Ceph MONs. These are the nodes running the monitor services. Separate multiple addresses with commas. For example, 10.0.0.11,10.0.0.12,10.0.0.13.
  • In the Pool field, enter the name of the Ceph pool you created earlier, e.g., cephpool.
  • For the user field, you can typically enter admin.
  • The keyring is filled in automatically, as Proxmox retrieves the required authentication key from your Ceph configuration.

Note

If you prefer working from the command line, you can perform the same task with a single command:

```bash
pvesm add rbd ceph-storage --monhost <mon1,mon2,mon3> --pool cephpool --content images
```

Replace <mon1,mon2,mon3> with the IP addresses of your monitor nodes.

Once added, the storage appears in the Proxmox interface. You can now select it as a target for virtual machines. Proxmox will then use Ceph as the underlying storage, and any VMs you create on it will automatically benefit from the cluster’s redundancy and fault tolerance.
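From the shell, you can confirm the new storage with pvesm status, which lists all configured storages with their capacity, and allocate a test volume on it with pvesm alloc. In the sketch below, the storage ID, VM ID, volume name, and size are example values, and the DRY_RUN=echo guard prints the commands instead of running them.

```shell
#!/bin/sh
# Sketch: verify the new RBD storage and allocate a test disk image.
# Storage ID, VMID, name and size are placeholders; DRY_RUN=echo prints only.
DRY_RUN=echo
$DRY_RUN pvesm status
$DRY_RUN pvesm alloc ceph-storage 100 vm-100-disk-0 4G
```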

Tip

Integrating Ceph is especially valuable if you plan to run a Kubernetes cluster on Proxmox. Ceph can serve as persistent storage for Kubernetes, giving your containers the same redundancy and high availability as your virtual machines.
