In this article we are going to learn how to configure software RAID 1 (disk mirroring) using mdadm in Linux. In one of my previous articles I have already explained the steps to configure software RAID 5 in Linux. RAID 1 is also referred to as disk mirroring, and we need a minimum of two physical hard disks or partitions to configure it.
How Software RAID 1 Works
As we all know, RAID 1 is also known as disk mirroring. Disk mirroring means the same data is stored on both hard disks, so out of two hard disks the user gets the capacity of only one to store data. For example, if you configure software RAID 1 using two hard disks of 1 TB each, you get only 1 TB of usable space. This is one of the disadvantages of software RAID 1. On the other hand, if one hard disk becomes faulty, you can still access all of your data from the second hard disk thanks to the mirroring. When you replace the faulty hard disk with a new one, RAID 1 automatically syncs the data to the new hard disk from the working one.
Follow the steps below to configure software RAID 1.
Step 1: Install mdadm Package
To configure software RAID 1 in Linux we need a tool called mdadm. Normally it is installed along with the operating system, but if it is not, you can install it using the yum command. Refer to the command below.
[root@localhost ~]# yum -y install mdadm    # Install mdadm Package
Loaded plugins: fastestmirror, refresh-packagekit, security
Determining fastest mirrors
 * base: mirror.vbctv.in
 * extras: mirror.vbctv.in
 * updates: mirrors.viethosting.com
base                                                   | 3.7 kB     00:00
extras                                                 | 3.4 kB     00:00
extras/primary_db                                      |  29 kB     00:00
updates                                                | 3.4 kB     00:00
updates/primary_db                                     | 1.4 MB     00:31
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package mdadm.x86_64 0:3.2.6-7.el6 will be updated
---> Package mdadm.x86_64 0:3.3.4-8.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================================================================
 Package                  Arch                     Version                          Repository                  Size
===================================================================================================================================
Updating:
 mdadm                    x86_64                   3.3.4-8.el6                      base                        348 k

Transaction Summary
===================================================================================================================================
Upgrade       1 Package(s)

Total download size: 348 k
Downloading Packages:
mdadm-3.3.4-8.el6.x86_64.rpm                           | 348 kB     00:00
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
 Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <centos-6-key@centos.org>
 Package: centos-release-6-5.el6.centos.11.1.x86_64 (@anaconda-CentOS-201311272149.x86_64/6.5)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : mdadm-3.3.4-8.el6.x86_64                                      1/2
  Cleanup    : mdadm-3.2.6-7.el6.x86_64                                      2/2
  Verifying  : mdadm-3.3.4-8.el6.x86_64                                      1/2
  Verifying  : mdadm-3.2.6-7.el6.x86_64                                      2/2

Updated:
  mdadm.x86_64 0:3.3.4-8.el6

Complete!
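As the output above shows, yum performed an update rather than a fresh install, since mdadm ships with most distributions. If your system uses a different package manager, the equivalents below should work; the distribution family is an assumption on your part, not part of this setup:

[root@localhost ~]# mdadm --version             # Quick check that mdadm is available
[root@localhost ~]# dnf -y install mdadm        # RHEL/CentOS 8+ and Fedora (dnf-based systems)
[root@localhost ~]# apt-get -y install mdadm    # Debian/Ubuntu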
Step 2: Create Partitions for Software RAID 1
Let's start the configuration of software RAID 1. For that we need two hard disks of the same size. Here I am going to configure RAID 1 in my virtual machine, which has two virtual hard disks, i.e. /dev/sdb and /dev/sdc. Refer to the sample output below.
[root@localhost ~]# fdisk -l    # List the available Disks and Partitions

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000817a9

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39        2350    18566144   83  Linux
/dev/sda3            2350        2611     2097152   82  Linux swap / Solaris

Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Now create a partition on each hard disk, one by one, and change the partition ID of both partitions to the software RAID type. The partition ID for software RAID is “fd”.
Creating a partition on my first hard disk, i.e. /dev/sdb:
[root@localhost ~]# fdisk /dev/sdb    # Create Partition
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xed18e1c0.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-391, default 391): +2G

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd    # Change the Partition ID for Software RAID
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Creating a partition on my second hard disk, i.e. /dev/sdc:
[root@localhost ~]# fdisk /dev/sdc    # Create Partition
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd8cf4993.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-391, default 391): +2G

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
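fdisk with an MBR label works fine for small disks like these 3 GB virtual disks, but MBR cannot address disks larger than 2 TB. For larger disks, a minimal sketch using parted with a GPT label instead (assuming parted is installed, and again using /dev/sdb as the example disk):

[root@localhost ~]# parted /dev/sdb mklabel gpt                 # Create a GPT partition table (destroys existing data)
[root@localhost ~]# parted /dev/sdb mkpart primary 1MiB 2GiB    # Create a ~2 GB partition
[root@localhost ~]# parted /dev/sdb set 1 raid on               # Mark partition 1 for RAID use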
Now list the disks using the fdisk -l command to confirm that the partition IDs have been changed to the software RAID type. Refer to the sample output below.
Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xed18e1c0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         262     2104483+  fd  Linux raid autodetect

Disk /dev/sdc: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd8cf4993

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         262     2104483+  fd  Linux raid autodetect
After creating the partitions, run the partprobe command to update the kernel's partition table without restarting the system.
[root@localhost ~]# partprobe /dev/sdb
[root@localhost ~]# partprobe /dev/sdc
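To double-check that the kernel picked up the new partitions, you can also look at /proc/partitions; a quick sketch:

[root@localhost ~]# grep sd /proc/partitions    # sdb1 and sdc1 should now be listed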
Step 3: Create the Software RAID 1 Partition
Now create and start the software RAID 1 array using the mdadm command. Refer to the command below.
[root@localhost ~]# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
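A side note: --level=mirror is equivalent to --level=1, and --raid-devices can be abbreviated to -n. If you are scripting the setup and want to skip the interactive “Continue creating array?” prompt, mdadm's --run option does that; a minimal non-interactive sketch of the same command:

[root@localhost ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 --run /dev/sdb1 /dev/sdc1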
As you can see above, the software RAID 1 array started successfully. To check the details of the RAID 1 partition, refer to the command below.
[root@localhost ~]# mdadm --detail /dev/md0    # Checking the RAID 1 Partition Details
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jun  9 19:01:26 2017
     Raid Level : raid1
     Array Size : 2102400 (2.01 GiB 2.15 GB)
  Used Dev Size : 2102400 (2.01 GiB 2.15 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jun  9 19:01:36 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : aa213e39:8bed3818:9ce1061f:e9cccdf8
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
You can also check the /proc/mdstat file for the RAID 1 partition details.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      2102400 blocks super 1.2 [2/2] [UU]

unused devices: <none>
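On a freshly created array the initial resync may still be in progress, in which case /proc/mdstat shows a progress bar instead of the clean [UU] state. You can watch it live until it finishes:

[root@localhost ~]# watch cat /proc/mdstat    # Refreshes every 2 seconds; press Ctrl+C to exit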
Step 4: Format the RAID 1 Partition
So we have configured software RAID 1 and successfully created the RAID partition, i.e. /dev/md0. Now we have to format the partition with a file system, just as we do with normal partitions. We can do so using the command below. Here I am formatting the RAID 1 partition with the ext4 file system.
[root@localhost ~]# mkfs.ext4 /dev/md0    # Format the Software RAID 1 Partition
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131648 inodes, 525600 blocks
26280 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=541065216
17 block groups
32768 blocks per group, 32768 fragments per group
7744 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
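ext4 is just one option; /dev/md0 behaves like any block device, so any regular file system will work. For instance, on CentOS/RHEL 7 and later, where XFS is the default, you could format it like this instead (assuming xfsprogs is installed):

[root@localhost ~]# mkfs.xfs /dev/md0    # Alternative: format the RAID 1 partition with XFS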
Step 5: Mount the Software RAID 1 Partition
Now, to mount the partition for storing data, we need to create a mount point directory.
[root@localhost ~]# mkdir /raid
Let's go ahead and mount the partition manually. Refer to the command below.
[root@localhost ~]# mount /dev/md0 /raid/
We can check the mounted devices using the df -h command.
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        18G  2.5G   15G  15% /
tmpfs           935M  228K  935M   1% /dev/shm
/dev/sda1       291M   39M  238M  14% /boot
/dev/md0        2.0G   68M  1.9G   4% /raid
For permanent mounting, edit the /etc/fstab file and add the line shown below.
[root@localhost ~]# nano /etc/fstab

/dev/md0        /raid   ext4    defaults        0 0
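One caveat: md device numbers are not guaranteed to be stable across reboots (an array missing from mdadm.conf can come up as /dev/md127, for example), so mounting by file system UUID is more robust. A sketch, where the UUID value is a placeholder you would copy from the blkid output:

[root@localhost ~]# blkid /dev/md0    # Print the file system UUID of the array
UUID=<uuid-from-blkid>  /raid   ext4    defaults        0 0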
Now refresh all mount points using the mount -a command and verify them using the df -h command.
[root@localhost ~]# mount -a    # Refresh all Mount points
[root@localhost ~]# df -h       # Check all Mount points
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        18G  2.5G   15G  15% /
tmpfs           935M  228K  935M   1% /dev/shm
/dev/sda1       291M   39M  238M  14% /boot
/dev/md0        2.0G   68M  1.9G   4% /raid
To save the configuration of /dev/md0 (the software RAID 1 partition) so that the array is reassembled automatically at boot, refer to the command below.
[root@localhost ~]# mdadm --detail --scan --verbose >> /etc/mdadm.conf
You can confirm the saved configuration in the /etc/mdadm.conf file.
[root@localhost ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=localhost.localdomain:0 UUID=aa213e39:8bed3818:9ce1061f:e9cccdf8
   devices=/dev/sdb1,/dev/sdc1
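Note that the location of this file varies by distribution: Debian/Ubuntu use /etc/mdadm/mdadm.conf, and on most systems it is also worth rebuilding the initramfs so the array is assembled early during boot. A hedged sketch, assuming the standard tooling on each family:

[root@localhost ~]# dracut -f               # RHEL/CentOS: regenerate the initramfs
[root@localhost ~]# update-initramfs -u     # Debian/Ubuntu equivalent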
So we have done all the configuration required for software RAID 1 and mounted the RAID 1 partition, and now we can store data on it. For example, I created a directory and some files inside the /raid directory.
[root@localhost ~]# ls /raid/database/
file1.txt  file2.txt  file3.txt  file4.txt  file5.txt
Now let's do an experiment: let's fail one drive and then check the status of the data. To mark a drive as failed you can use the command below. Here I am marking my /dev/sdb1 partition as faulty.
[root@localhost database]# mdadm /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
After failing the drive, if you check the RAID 1 array details you will see something like the output shown below.
[root@localhost database]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jun  9 19:01:26 2017
     Raid Level : raid1
     Array Size : 2102400 (2.01 GiB 2.15 GB)
  Used Dev Size : 2102400 (2.01 GiB 2.15 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jun  9 19:21:19 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : aa213e39:8bed3818:9ce1061f:e9cccdf8
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1
Now, to remove the faulty drive, you can use the command below.
[root@localhost ~]# mdadm /dev/md0 -r /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0

# Check the details
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jun  9 19:01:26 2017
     Raid Level : raid1
     Array Size : 2102400 (2.01 GiB 2.15 GB)
  Used Dev Size : 2102400 (2.01 GiB 2.15 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jun  9 19:23:40 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : aa213e39:8bed3818:9ce1061f:e9cccdf8
         Events : 28

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
So out of the two hard disks, one became faulty and we removed it from the software RAID 1 array.
Now let's check two things, i.e. the mount point and the data.
Confirming the mount point:
[root@localhost Desktop]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        18G  2.5G   15G  15% /
tmpfs           935M   76K  935M   1% /dev/shm
/dev/sda1       291M   39M  238M  14% /boot
/dev/md0        2.0G   68M  1.9G   4% /raid
Confirming the data:
[root@localhost ~]# ls /raid/database/
file1.txt  file2.txt  file3.txt  file4.txt  file5.txt
As you can see above, both the mount point and the data are safe. You can use the command below to add a new hard disk in place of the faulty one; see the full replacement sketch after the command.
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdd # Add new Harddisk to RAID 1
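Here /dev/sdd is assumed to be the name the replacement disk receives. Since this array was built from partitions (/dev/sdb1, /dev/sdc1), a cleaner approach is to partition the new disk the same way first (type “fd”, as in Step 2) and add the partition rather than the raw disk; a sketch of the full replacement flow:

[root@localhost ~]# fdisk /dev/sdd                 # Create one partition of the same size, type 'fd'
[root@localhost ~]# partprobe /dev/sdd             # Update the kernel partition table
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdd1    # Hot-add the new partition; resync starts automatically
[root@localhost ~]# watch cat /proc/mdstat         # Watch the rebuild until [UU] appears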
That's all. In this article we explained how to configure software RAID 1 (disk mirroring) using mdadm in Linux. I hope you enjoyed it. If you like this article, please share it, and if you have any questions, please leave a comment.