Preamble
We all know how easy it is to set up software RAID1 during install… but what if you have to do it afterward? Say, for instance, you only had one HDD at the time and decided later on that you wanted to add a second drive and turn the pair into a mirror. This tutorial will show you how to add a second drive of the same size (very important that they be the same size!) and create a RAID1 on Ubuntu 16 after installation.
For this example I used Ubuntu 16.04.1 without LVM and without custom partitioning, and I have already added the second drive to the system. Here is what my drives look like:
root@ubuntu:~# fdisk -l
...
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf703a0f1

Device     Boot    Start      End  Sectors  Size Id Type
/dev/sda1  *        2048 39845887 39843840   19G 83 Linux
/dev/sda2       39847934 41940991  2093058 1022M  5 Extended
/dev/sda5       39847936 41940991  2093056 1022M 82 Linux swap / Solaris

Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
...
As you can see, my /dev/sdb drive is blank; however, it does not have to be. I will explain more on this as we go along.
Note: During this process you may need access to the server via keyboard and monitor, because after creating the RAID the boot process may ask whether you wish to boot from a degraded RAID, and you need to be able to answer yes. So if you do not have a remote KVM type system connected to your server, you will need to be at the screen to type yes.
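If being at the console is not an option at all, one thing you can look into is telling the initramfs to boot from a degraded array without prompting. This is only a sketch: whether the BOOT_DEGRADED setting is honoured depends on the Ubuntu release and its mdadm initramfs scripts, so test it before relying on it.

# Assumption: this release's mdadm initramfs hook reads BOOT_DEGRADED from this file.
# It tells the initramfs to carry on booting even if an array comes up degraded.
echo "BOOT_DEGRADED=true" > /etc/initramfs-tools/conf.d/mdadm
update-initramfs -u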
Getting Ready
Before we dive into the guts of this tutorial, you will need to install a few things.
root@ubuntu:~# apt-get install initramfs-tools mdadm
The above command will install the tools we need to create and manage our RAID1, as well as to create a bootable initramfs image to boot from.
To prevent having to reboot, you can run the following commands to load the modules we will need as well:
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
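Note that modprobe only loads the modules for the current session; once the array exists they are normally pulled into the initramfs automatically. If you want a module loaded explicitly on every boot anyway, you can append its name to /etc/modules, for example:

# load the raid1 module automatically at boot (the only one this tutorial really needs)
echo raid1 >> /etc/modules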
Now you should see that the system is RAID capable but does not yet contain any RAID arrays:
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
Now our system is ready to begin. Let's start by prepping the second hard drive (/dev/sdb).
Preparing the Hard Drive
We are going to leave /dev/sda alone for a while; after all, it has our operating system on it and we do not want to jeopardize that. So we will now start to prep /dev/sdb to be joined to the RAID.
We need /dev/sdb to have an exact copy of the partition table that /dev/sda has, which we can do with the following command:
root@ubuntu:~# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ... OK

Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0xf703a0f1.
Created a new partition 1 of type 'Linux' and of size 19 GiB.
/dev/sdb2: Created a new partition 2 of type 'Extended' and of size 1022 MiB.
/dev/sdb3: Created a new partition 5 of type 'Linux swap / Solaris' and of size 1022 MiB.
/dev/sdb6: New situation:

Device     Boot    Start      End  Sectors  Size Id Type
/dev/sdb1  *        2048 39845887 39843840   19G 83 Linux
/dev/sdb2       39847934 41940991  2093058 1022M  5 Extended
/dev/sdb5       39847936 41940991  2093056 1022M 82 Linux swap / Solaris

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
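This works because these disks use a DOS (MBR) disklabel. If your disks are GPT-partitioned, sfdisk -d is not the right tool for copying the table; a common alternative (a sketch only, assuming the gdisk package is available) is to replicate the table with sgdisk and then give the copy new unique GUIDs:

apt-get install gdisk
# copy /dev/sda's GPT partition table onto /dev/sdb
sgdisk -R /dev/sdb /dev/sda
# randomize the disk and partition GUIDs on /dev/sdb so they do not clash with /dev/sda
sgdisk -G /dev/sdb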
Now if you compare the two drives using fdisk -l you will see that /dev/sda and /dev/sdb have the same table. I have truncated the results so you can see just the table and the sizes:
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Device     Boot    Start      End  Sectors  Size Id Type
/dev/sda1  *        2048 39845887 39843840   19G 83 Linux
/dev/sda2       39847934 41940991  2093058 1022M  5 Extended
/dev/sda5       39847936 41940991  2093056 1022M 82 Linux swap / Solaris

Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Device     Boot    Start      End  Sectors  Size Id Type
/dev/sdb1  *        2048 39845887 39843840   19G 83 Linux
/dev/sdb2       39847934 41940991  2093058 1022M  5 Extended
/dev/sdb5       39847936 41940991  2093056 1022M 82 Linux swap / Solaris
Now we need to change the partition type of sdb1 and sdb5 to Linux raid autodetect (type fd). There are different ways to go about this; the quickest and easiest is the following:
sfdisk --change-id /dev/sdb 1 fd
sfdisk --change-id /dev/sdb 5 fd
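On newer versions of util-linux, sfdisk may report --change-id as deprecated; --part-type is the equivalent option and takes the same arguments:

sfdisk --part-type /dev/sdb 1 fd
sfdisk --part-type /dev/sdb 5 fd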
If you do an fdisk -l again you will see the partition types have now changed to Linux Raid Autodetect:
Device     Boot    Start      End  Sectors  Size Id Type
/dev/sdb1  *        2048 39845887 39843840   19G fd Linux raid autodetect
/dev/sdb2       39847934 41940991  2093058 1022M  5 Extended
/dev/sdb5       39847936 41940991  2093056 1022M fd Linux raid autodetect
To make sure there are no previous raid configurations on the disk (say you used this disk previously) you can run the following commands:
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb5
If there was no previous RAID configuration on the disk you will get an error (shown below); if there was a previous configuration, the commands produce no output.
root@ubuntu:~# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
root@ubuntu:~# mdadm --zero-superblock /dev/sdb5
mdadm: Unrecognised md component device - /dev/sdb5
Creating RAID arrays
Now that we have /dev/sdb prepped, it's time to create our RAID1 arrays using the mdadm command. The keyword missing is a placeholder for the partitions on /dev/sda, which will be added to the arrays later once the system is running from the RAID. Note: when it asks you if you want to continue, type y:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
You should see something similar to the output below:
root@ubuntu:~# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@ubuntu:~# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb5
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
You can verify the raid was started via the /proc/mdstat file:
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb5[1]
      1045952 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sdb1[1]
      19905536 blocks super 1.2 [2/1] [_U]

unused devices: <none>
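If you want more detail than /proc/mdstat gives, mdadm can report on a single array; the slot reserved for the missing drive shows up as removed:

mdadm --detail /dev/md0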
You will see that the RAID devices have been created; however, they are not complete mirrors yet, as each is missing a drive, indicated by the [_U] at the end of the lines. The arrays exist, but they have not been formatted. We can do that with the following commands:
mkfs.ext4 /dev/md0
mkswap /dev/md1
We now need to adjust the mdadm configuration, which currently does not contain any RAID information. Back up the original file and append the current array definitions:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
Now you can cat the file to see that it has the raid information added:
root@ubuntu:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 10 Oct 2016 17:48:01 -0400
# by mkconf $Id$
ARRAY /dev/md/0 metadata=1.2 UUID=90d9bad3:ecf68a19:038e25d1:bfa9fc03 name=ubuntu:0
ARRAY /dev/md/1 metadata=1.2 UUID=11983f9c:86efc779:db535514:0bd1fe2b name=ubuntu:1
The last two lines show that the two arrays have been successfully added to the configuration. Now that the RAID has been created, we need to make sure the system sees it.
Adjusting The System To RAID1
Let's go ahead and mount the root RAID partition, /dev/md0:
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
And verify it's mounted:
mount
At the bottom of the list you should see the entry we are looking for:
/dev/md0 on /mnt/md0 type ext4 (rw,relatime,data=ordered)
Now we need to add the UUIDs of the RAID partitions to the fstab file, replacing the UUIDs of the /dev/sda entries. First, get the new UUIDs:
root@ubuntu:~# blkid /dev/md0 /dev/md1
/dev/md0: UUID="ac289ad2-f6cc-4ec9-8725-b489cd7046b1" TYPE="ext4"
/dev/md1: UUID="5d248935-5cef-4b03-be20-a6b6450eba58" TYPE="swap"
Now that we have the UUIDs, replace them in the fstab file:
root@ubuntu:~# vi /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda1 during installation
UUID=ac289ad2-f6cc-4ec9-8725-b489cd7046b1 /    ext4    errors=remount-ro 0    1
# swap was on /dev/sda5 during installation
UUID=5d248935-5cef-4b03-be20-a6b6450eba58 none    swap    sw    0    0
Make sure you match the types: the ext4 UUID goes on the ext4 (root) line and the swap UUID goes on the swap line.
Next we need to replace /dev/sda1 with /dev/md0 in /etc/mtab:
sed -e "s/dev\/sda1/dev\/md0/" -i /etc/mtab
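Note: on many Ubuntu 16.04 systems /etc/mtab is actually a symlink to /proc/self/mounts, in which case it already reflects the kernel's view and this step is not needed (the sed above simply turns the symlink into a regular file). You can check which case applies to you with:

ls -l /etc/mtab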
And to verify:
cat /etc/mtab
Look to make sure there are no entries for sda1. The following output has been truncated to show only the modified entries:
root@ubuntu:~# cat /etc/mtab
...
/dev/md0 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
...
/dev/md0 /mnt/md0 ext4 rw,relatime,data=ordered 0 0
Now we have to set up the boot loader to boot to the raid drive.
Setup the GRUB2 boot loader
In order to boot properly during the RAID setup, we will need to create a temporary GRUB config file. For this you will need to know what your kernel version is:
root@ubuntu:~# uname -r
4.4.0-31-generic
Now we will copy the custom file and edit it to add our temporary configuration:
cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup
Add the following at the bottom of the file, making sure to replace all instances of the kernel version with the version you found in the previous command:
menuentry 'Ubuntu, with Linux 4.4.0-31-generic RAID Temp' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        insmod mdraid1x
        insmod ext2
        set root='(md/0)'
        linux   /boot/vmlinuz-4.4.0-31-generic root=/dev/md0 ro quiet
        initrd  /boot/initrd.img-4.4.0-31-generic
}
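GRUB only runs the scripts in /etc/grub.d that are marked executable. The copy normally inherits the executable bit from 40_custom, but it does not hurt to make sure:

chmod +x /etc/grub.d/09_swraid1_setup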
Now you need to update grub and modify the ramdisk for the new configuration:
update-grub
update-initramfs -u
Your output should look similar to this:
root@ubuntu:~# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-42-generic
Found initrd image: /boot/initrd.img-4.4.0-42-generic
Found linux image: /boot/vmlinuz-4.4.0-31-generic
Found initrd image: /boot/initrd.img-4.4.0-31-generic
done
root@ubuntu:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.4.0-42-generic
root@ubuntu:~#
Copy files to the new disk
Now we need to copy all the files from the file system root to the raid partition.
cp -dpRx / /mnt/md0
If you are doing this remotely via SSH, you can add the verbose option to the command to see that the process is still running. I would not recommend using the verbose option at the server console, as drawing all that output on the monitor will slow down the process.
cp -dpRxv / /mnt/md0
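If you would rather have a progress indicator than a stream of file names, an alternative sketch (assuming rsync is installed and is version 3.1 or newer for --info=progress2) is:

# -a preserves permissions and ownership, -A/-X keep ACLs and extended attributes,
# -x stays on the root filesystem just like cp -x
rsync -aAXx --info=progress2 / /mnt/md0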
Preparing GRUB2
It's now time to reboot to the RAID partition, but before we do that we need to make sure both drives have GRUB installed:
root@ubuntu:~# grub-install /dev/sda
Installation finished. No error reported.
root@ubuntu:~# grub-install /dev/sdb
Installation finished. No error reported.
Now you will need to reboot. For this part you should be at the server, as it may ask you if you wish to boot from a degraded RAID; you will need to type yes for it to complete the boot process so that the machine is accessible remotely again.
root@ubuntu:~# reboot
In my case, after the reboot Ubuntu assembled the arrays as /dev/md126 and /dev/md127 instead of /dev/md0 and /dev/md1, so the line in my temporary GRUB entry:

linux /boot/vmlinuz-4.4.0-31-generic root=/dev/md0 ro quiet

needed to be changed to
linux /boot/vmlinuz-4.4.0-31-generic root=/dev/md126 ro quiet
Then you need to run "update-grub" and "update-initramfs -u", and install GRUB on /dev/sda and /dev/sdb again just to be sure. When you boot, you can see the changes Ubuntu made when you cat /proc/mdstat:
root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdb1[1]
      19905536 blocks super 1.2 [2/1] [_U]

md127 : active (auto-read-only) raid1 sdb5[1]
      1045952 blocks super 1.2 [2/1] [_U]

unused devices: <none>
If your RAID device names changed like this, swap md0 and md1 for the appropriate names for the rest of the tutorial.
Preparing sda for the raid
You can verify that you are now on the raid by typing “df -h”
root@ubuntu:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            469M     0  469M   0% /dev
tmpfs            98M  4.6M   93M   5% /run
/dev/md126       19G  1.7G   16G  10% /
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
tmpfs           100K     0  100K   0% /run/lxcfs/controllers
tmpfs            98M     0   98M   0% /run/user/0
You will see (in my case) that the root is no longer on /dev/sda1 but on /dev/md126. Now we are going to change the partition types of sda1 and sda5 just like we did for sdb1 and sdb5:
sfdisk --change-id /dev/sda 1 fd
sfdisk --change-id /dev/sda 5 fd
If you do an “fdisk -l” you will see the partition types have changed
Device     Boot    Start      End  Sectors  Size Id Type
/dev/sda1  *        2048 39845887 39843840   19G fd Linux raid autodetect
/dev/sda2       39847934 41940991  2093058 1022M  5 Extended
/dev/sda5       39847936 41940991  2093056 1022M fd Linux raid autodetect
Now we will add the drive's partitions to the RAID arrays (in my case md126 and md127):
mdadm --add /dev/md126 /dev/sda1
mdadm --add /dev/md127 /dev/sda5
If you “cat /proc/mdstat” you will see that the drive is now syncing:
root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sda1[2] sdb1[1]
      19905536 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.7% (156416/19905536) finish=33.6min speed=9776K/sec

md127 : active raid1 sda5[2] sdb5[1]
      1045952 blocks super 1.2 [2/1] [_U]
        resync=DELAYED

unused devices: <none>
Now we are going to make sure that mdadm.conf has all the right changes: restore the original file, then append the current array definitions again:
cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
! Before continuing, make sure your raid is fully rebuilt – this may take a while !
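You can keep an eye on the rebuild without re-running the command by hand, for example:

# refresh the sync status every 5 seconds; press Ctrl-C to exit
watch -n 5 cat /proc/mdstat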
Cleaning up GRUB
Now it's time to do some housecleaning. We need to remove the temporary config, update GRUB and the initramfs, and re-write GRUB to /dev/sda and /dev/sdb.
rm -f /etc/grub.d/09_swraid1_setup
update-grub
update-initramfs -u
grub-install /dev/sda
grub-install /dev/sdb
Next, reboot and you should be on a fully mirrored system in a RAID1 configuration.
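After that final reboot you can confirm the mirror is healthy: both members should be listed for each array and the status should read [UU] rather than [_U]. In my case (your device names may differ):

cat /proc/mdstat
mdadm --detail /dev/md126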
Comments

One reader found that if, instead of using /dev/md0, etc., you specify /dev/md/whateverIwant, you can use intuitive names and do not have to worry about what numbers the kernel assigns under /dev/mdXXX. Both sets of device files are created in any case.
Another reader, working with GPT-partitioned disks, found that instead of:

sfdisk --change-id /dev/sdb 1 fd

the partition type has to be set with the following command along with a GUID:

sfdisk --part-type /dev/sdb 1 A19D880F-05FC-4D3B-A006-743F0F84911E

That is only the case with the GPT partition style.