Rebuilding Software RAID (Linux)
This article explains how to rebuild a software RAID after replacing a defective hard disk.
Please Note
After the hard drive has been replaced, it may initially be recognized as sdc; this is always the case after a hot-swap exchange. The only remedy is a reboot, after which the hard disk is recognized as sda or sdb again.
Example Scenario
These instructions are based on the following configuration:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda1[0] sdb1[1]
4194240 blocks [2/2] [UU]
md3 : active raid1 sda3[0] sdb3[1]
1458846016 blocks [2/2] [UU]
There are 2 arrays:
/dev/md1 as /
/dev/md3 for LVM; it holds the /var, /usr, and /home partitions (logical volumes in the volume group vg00).
Typically, there are two swap partitions (sda2 and sdb2) which are not part of the RAID.
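Before (and after) swapping the drive, it helps to confirm which disk actually failed. Failed members are flagged with (F) in /proc/mdstat; the sketch below parses a sample of such output (illustrative data, not taken from a live system):

```shell
# Failed RAID members are marked with "(F)" in /proc/mdstat.
# Sample output after a failure of sdb (illustrative, not live data):
mdstat='md1 : active raid1 sda1[0] sdb1[1](F)
md3 : active raid1 sda3[0] sdb3[1](F)'

# On a live system you would read the real file instead:
#   mdstat=$(cat /proc/mdstat)

# List the failed members:
echo "$mdstat" | grep -o '[a-z0-9]*\[[0-9]*\](F)'
```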
Restoring RAID
The procedure to follow depends on whether hard disk 1 (sda) or hard disk 2 (sdb) was replaced:
If Hard Disk 1 (sda) Was Replaced
If hard disk 1 (sda) was replaced, first check whether it was recognized correctly; a reboot may be required (see the note above). Then boot the server into the rescue system and perform the steps listed below.
First, copy the partition tables to the new (empty) hard disk:
[root@host ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda
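To confirm that the copy succeeded, you can compare the sfdisk dumps of both disks: the layouts should be identical apart from the device names. A minimal sketch of that comparison, using sample dump lines (the sizes are illustrative, not read from a live system):

```shell
# sfdisk -d prints one line per partition. After copying, both disks
# should have identical geometry. Sample dumps (illustrative values):
dump_sdb='/dev/sdb1 : start=2048, size=8388480, Id=fd
/dev/sdb3 : start=16779264, size=2917692032, Id=fd'
dump_sda='/dev/sda1 : start=2048, size=8388480, Id=fd
/dev/sda3 : start=16779264, size=2917692032, Id=fd'

# On a live system: dump_sda=$(sfdisk -d /dev/sda), and likewise for sdb.

# Compare everything after the device name; equal strings mean identical layout:
a=$(echo "$dump_sda" | cut -d: -f2-)
b=$(echo "$dump_sdb" | cut -d: -f2-)
[ "$a" = "$b" ] && echo "partition layouts match"
```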
(You may need to use the --force option.)
Add the partitions to the RAID:
[root@host ~]# mdadm /dev/md1 -a /dev/sda1
[root@host ~]# mdadm /dev/md3 -a /dev/sda3
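While the arrays resynchronize, /proc/mdstat gains a recovery line showing progress, remaining time, and speed. A sketch of pulling out those figures, using a sample line (illustrative values, not live output):

```shell
# A resync in progress adds a recovery line to /proc/mdstat.
# Sample line (illustrative values):
line='      [=>...................]  recovery =  7.3% (106591360/1458846016) finish=212.4min speed=106001K/sec'

# Extract the percentage complete and the estimated finish time:
echo "$line" | grep -o 'recovery = *[0-9.]*%'
echo "$line" | grep -o 'finish=[0-9.]*min'
```

On a live system, `watch -n 5 cat /proc/mdstat` refreshes the full status every five seconds.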
You can now follow the rebuild of the RAID with cat /proc/mdstat.
Then mount the root filesystem and the var, usr, and home volumes:
[root@host ~]# mount /dev/md1 /mnt
[root@host ~]# mount /dev/mapper/vg00-var /mnt/var
[root@host ~]# mount /dev/mapper/vg00-usr /mnt/usr
[root@host ~]# mount /dev/mapper/vg00-home /mnt/home
So that GRUB can later be installed without errors, mount proc, sys, and dev:
[root@host ~]# mount -o bind /proc /mnt/proc
[root@host ~]# mount -o bind /sys /mnt/sys
[root@host ~]# mount -o bind /dev /mnt/dev
After mounting the partitions, change into the chroot environment and install the GRUB bootloader:
[root@host ~]# chroot /mnt
[root@host ~]# grub-install /dev/sda
Exit the chroot environment with exit and unmount all partitions:
[root@host ~]# umount -a
Wait until the rebuild process is finished, then boot the server back into the normal system.
Finally, enable the swap partition using the following commands:
[root@host ~]# mkswap /dev/sda2
[root@host ~]# swapon -a
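Once the server is back in the normal system, it is worth confirming that both arrays are healthy again. A fully synchronized mirror shows [UU] in /proc/mdstat, while [U_] would indicate a missing member. The sketch below counts healthy arrays in sample output (illustrative data, not from a live system):

```shell
# Sample /proc/mdstat of a fully rebuilt system (illustrative):
mdstat='md1 : active raid1 sda1[0] sdb1[1]
      4194240 blocks [2/2] [UU]
md3 : active raid1 sda3[0] sdb3[1]
      1458846016 blocks [2/2] [UU]'

# On a live system: mdstat=$(cat /proc/mdstat)

# Count healthy arrays; for this setup the result should be 2:
echo "$mdstat" | grep -c '\[UU\]'
```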
If Hard Disk 2 (sdb) Was Replaced
If hard disk 2 (sdb) has been replaced, proceed as follows:
Perform a reboot so that hard disk 2 is recognized as sdb again (see the note above).
In the local system, copy the partition tables to the new (empty) disk:
[root@host ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
(You may need to use the --force option.)
Add the partitions to the RAID:
[root@host ~]# mdadm /dev/md1 -a /dev/sdb1
[root@host ~]# mdadm /dev/md3 -a /dev/sdb3
You can now follow the rebuild of the RAID with cat /proc/mdstat.
Finally, enable the swap partition using the following commands:
[root@host ~]# mkswap /dev/sdb2
[root@host ~]# swapon -a
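To verify that the swap areas are active, you can check /proc/swaps (or swapon --show on newer systems). A sketch parsing sample content (sizes and priorities illustrative, not from a live system):

```shell
# /proc/swaps lists active swap areas, one per line after a header.
# Sample content (illustrative):
swaps='Filename                                Type            Size    Used    Priority
/dev/sda2                               partition       4194300 0       -2
/dev/sdb2                               partition       4194300 0       -3'

# On a live system: swaps=$(cat /proc/swaps)

# Print the active swap devices, skipping the header line:
echo "$swaps" | awk 'NR>1 {print $1}'
```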