<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://kevininscoe.com/wiki/index.php?action=history&amp;feed=atom&amp;title=Linux_-_Software_RAID_and_Mirrors</id>
	<title>Linux - Software RAID and Mirrors - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://kevininscoe.com/wiki/index.php?action=history&amp;feed=atom&amp;title=Linux_-_Software_RAID_and_Mirrors"/>
	<link rel="alternate" type="text/html" href="https://kevininscoe.com/wiki/index.php?title=Linux_-_Software_RAID_and_Mirrors&amp;action=history"/>
	<updated>2026-05-15T20:07:22Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.40.1</generator>
	<entry>
		<id>https://kevininscoe.com/wiki/index.php?title=Linux_-_Software_RAID_and_Mirrors&amp;diff=625&amp;oldid=prev</id>
		<title>Kinscoe: Created page with &quot;==Summary of RAID software in Linux==  There are several options available to Linux: these are all software based solutions as opposed to hardware RAID we used to see in physi...&quot;</title>
		<link rel="alternate" type="text/html" href="https://kevininscoe.com/wiki/index.php?title=Linux_-_Software_RAID_and_Mirrors&amp;diff=625&amp;oldid=prev"/>
		<updated>2018-03-29T16:07:55Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;==Summary of RAID software in Linux==  There are several options available to Linux: these are all software based solutions as opposed to hardware RAID we used to see in physi...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==Summary of RAID software in Linux==&lt;br /&gt;
&lt;br /&gt;
There are several options available to Linux: these are all software based solutions as opposed to hardware RAID we used to see in physical disk controllers or Storage Area Networks.&lt;br /&gt;
&lt;br /&gt;
In Linux the three main tools for creating a software RAID across two or more volumes are lvm, mdadm, and dmraid. dmraid ([https://en.wikipedia.org/wiki/Device_mapper Device Mapper]) is limited to ATA-type disks (commonly known as &amp;quot;[http://superuser.com/questions/721795/how-fake-raid-communicates-with-operating-systemlinux/721796#721796 fakeRAID]&amp;quot;), so we will not use it here. [http://hydra.geht.net/tino/howto/linux/lvm2/mirror/ LVM2 (which ships with RHEL 7) is considered broken] when it comes to mirroring, so we will focus here on [http://neil.brown.name/blog/mdadm mdadm].&lt;br /&gt;
&lt;br /&gt;
Some notes:&lt;br /&gt;
&lt;br /&gt;
https://jreypo.wordpress.com/tag/device-mapper/&lt;br /&gt;
&lt;br /&gt;
http://stackoverflow.com/questions/23164384/what-is-the-difference-between-dm-and-md-in-linux-kernel&lt;br /&gt;
&lt;br /&gt;
The Linux RAID FAQ:  https://raid.wiki.kernel.org/index.php/Linux_Raid&lt;br /&gt;
&lt;br /&gt;
RAID on AWS instance: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html&lt;br /&gt;
&lt;br /&gt;
==mdadm mirrors==&lt;br /&gt;
&lt;br /&gt;
You will likely need to install the mdadm software as it is not installed by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;yum install mdadm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: https://raid.wiki.kernel.org/index.php/Partitioning_RAID_/_LVM_on_RAID&lt;br /&gt;
&lt;br /&gt;
An example of mirroring two AWS EBS volumes in an instance:&lt;br /&gt;
&lt;br /&gt;
1. Create two same-sized EBS volumes and attach them to the instance as known devices, say /dev/xvde and /dev/xvds. Note: if you are re-using a currently attached EBS volume, skip to Step 4 below.&lt;br /&gt;
&lt;br /&gt;
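If you are using the AWS CLI, the two volumes can be created and attached like this (a sketch: the availability zone, size, and the volume and instance IDs are placeholders, and a volume attached as /dev/sde appears inside the instance as /dev/xvde):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ aws ec2 create-volume --availability-zone us-east-1a --size 34 --volume-type gp2&lt;br /&gt;
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sde&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;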
2. Verify the volumes are attached to your instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo lsblk  /dev/xvde&lt;br /&gt;
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT&lt;br /&gt;
xvde 202:64   0  34G  0 disk &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You do not need to fdisk or partition these two volumes; partitioning will be done on the mirror device.&lt;br /&gt;
&lt;br /&gt;
4. If this volume has been used before, we need to erase it. This can take quite a long time; it is also known as &amp;quot;pre-warming&amp;quot;. You may want to consider removing it and creating a new EBS volume from scratch instead, which would be quicker.&lt;br /&gt;
&lt;br /&gt;
To erase an already attached EBS volume, we will use the output from fdisk to supply parameters to the dd command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo fdisk -l /dev/xvdf&lt;br /&gt;
&lt;br /&gt;
Disk /dev/xvdf: 34.4 GB, 34359738368 bytes, 67108864 sectors&lt;br /&gt;
Units = sectors of 1 * 512 = 512 bytes&lt;br /&gt;
Sector size (logical/physical): 512 bytes / 512 bytes&lt;br /&gt;
I/O size (minimum/optimal): 512 bytes / 512 bytes&lt;br /&gt;
&lt;br /&gt;
$ sudo dd if=/dev/zero of=/dev/xvdf bs=512&lt;br /&gt;
20428009+0 records in&lt;br /&gt;
20428009+0 records out&lt;br /&gt;
10459140608 bytes (10 GB) copied, 1356.81 s, 7.7 MB/s&lt;br /&gt;
&lt;br /&gt;
real    22m36.817s&lt;br /&gt;
user    0m0.325s&lt;br /&gt;
sys     0m59.720s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, it took almost 23 minutes just to zero out 10 GB.&lt;br /&gt;
&lt;br /&gt;
5. Run this command to create a mirrored volume from the two EBS volumes. The resulting device will be /dev/md/oracle0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo mdadm --create --auto=mdp --verbose /dev/md/oracle0 --name=oracle0 --level=mirror --raid-devices=2 /dev/xvde /dev/xvds &lt;br /&gt;
mdadm: /dev/xvde appears to be part of a raid array:&lt;br /&gt;
       level=raid0 devices=0 ctime=Wed Dec 31 19:00:00 1969&lt;br /&gt;
mdadm: partition table exists on /dev/xvde but will be lost or&lt;br /&gt;
       meaningless after creating array&lt;br /&gt;
mdadm: Note: this array has metadata at the start and&lt;br /&gt;
    may not be suitable as a boot device.  If you plan to&lt;br /&gt;
    store &amp;#039;/boot&amp;#039; on this device please ensure that&lt;br /&gt;
    your boot-loader understands md/v1.x metadata, or use&lt;br /&gt;
    --metadata=0.90&lt;br /&gt;
mdadm: /dev/xvds appears to be part of a raid array:&lt;br /&gt;
       level=raid0 devices=0 ctime=Wed Dec 31 19:00:00 1969&lt;br /&gt;
mdadm: partition table exists on /dev/xvds but will be lost or&lt;br /&gt;
       meaningless after creating array&lt;br /&gt;
mdadm: size set to 35618816K&lt;br /&gt;
Continue creating array? y&lt;br /&gt;
mdadm: Defaulting to version 1.2 metadata&lt;br /&gt;
mdadm: array /dev/md/oracle0 started.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Partition your mirror device:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo fdisk /dev/md/oracle0&lt;br /&gt;
Welcome to fdisk (util-linux 2.23.2).&lt;br /&gt;
&lt;br /&gt;
Changes will remain in memory only, until you decide to write them.&lt;br /&gt;
Be careful before using the write command.&lt;br /&gt;
&lt;br /&gt;
Device does not contain a recognized partition table&lt;br /&gt;
Building a new DOS disklabel with disk identifier 0x70a07f5e.&lt;br /&gt;
&lt;br /&gt;
Command (m for help): p&lt;br /&gt;
&lt;br /&gt;
Disk /dev/md/oracle0: 36.5 GB, 36473667584 bytes, 71237632 sectors&lt;br /&gt;
Units = sectors of 1 * 512 = 512 bytes&lt;br /&gt;
Sector size (logical/physical): 512 bytes / 512 bytes&lt;br /&gt;
I/O size (minimum/optimal): 512 bytes / 512 bytes&lt;br /&gt;
Disk label type: dos&lt;br /&gt;
Disk identifier: 0x70a07f5e&lt;br /&gt;
&lt;br /&gt;
           Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
&lt;br /&gt;
Command (m for help): n&lt;br /&gt;
Partition type:&lt;br /&gt;
   p   primary (0 primary, 0 extended, 4 free)&lt;br /&gt;
   e   extended&lt;br /&gt;
Select (default p): p&lt;br /&gt;
Partition number (1-4, default 1): 1&lt;br /&gt;
First sector (2048-71237631, default 2048): &lt;br /&gt;
Using default value 2048&lt;br /&gt;
Last sector, +sectors or +size{K,M,G} (2048-71237631, default 71237631): &lt;br /&gt;
Using default value 71237631&lt;br /&gt;
Partition 1 of type Linux and of size 34 GiB is set&lt;br /&gt;
&lt;br /&gt;
Command (m for help): w&lt;br /&gt;
The partition table has been altered!&lt;br /&gt;
&lt;br /&gt;
Calling ioctl() to re-read partition table.&lt;br /&gt;
Syncing disks.&lt;br /&gt;
&lt;br /&gt;
$ sudo fdisk -l /dev/md/oracle0&lt;br /&gt;
&lt;br /&gt;
Disk /dev/md/oracle0: 36.5 GB, 36473667584 bytes, 71237632 sectors&lt;br /&gt;
Units = sectors of 1 * 512 = 512 bytes&lt;br /&gt;
Sector size (logical/physical): 512 bytes / 512 bytes&lt;br /&gt;
I/O size (minimum/optimal): 512 bytes / 512 bytes&lt;br /&gt;
Disk label type: dos&lt;br /&gt;
Disk identifier: 0x70a07f5e&lt;br /&gt;
&lt;br /&gt;
           Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
/dev/md/oracle0p1            2048    71237631    35617792   83  Linux&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Now create an ext4 filesystem on the new partition:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo mkfs.ext4 -L /oracle /dev/md/oracle0p1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Check on the RAID volume:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo mdadm --detail /dev/md/oracle0&lt;br /&gt;
/dev/md/oracle0:&lt;br /&gt;
        Version : 1.2&lt;br /&gt;
  Creation Time : Mon Feb 22 19:14:59 2016&lt;br /&gt;
     Raid Level : raid1&lt;br /&gt;
     Array Size : 35618816 (33.97 GiB 36.47 GB)&lt;br /&gt;
  Used Dev Size : 35618816 (33.97 GiB 36.47 GB)&lt;br /&gt;
   Raid Devices : 2&lt;br /&gt;
  Total Devices : 2&lt;br /&gt;
    Persistence : Superblock is persistent&lt;br /&gt;
&lt;br /&gt;
    Update Time : Mon Feb 22 19:24:56 2016&lt;br /&gt;
          State : active &lt;br /&gt;
 Active Devices : 2&lt;br /&gt;
Working Devices : 2&lt;br /&gt;
 Failed Devices : 0&lt;br /&gt;
  Spare Devices : 0&lt;br /&gt;
&lt;br /&gt;
           Name : awsmdmqld02.hmco.com:oracle0  (local to host awsmdmqld02.hmco.com)&lt;br /&gt;
           UUID : de418038:1019361b:90a24022:83f16131&lt;br /&gt;
         Events : 18&lt;br /&gt;
&lt;br /&gt;
    Number   Major   Minor   RaidDevice State&lt;br /&gt;
       0     202       64        0      active sync   /dev/xvde&lt;br /&gt;
       1     202     4608        1      active sync   /dev/xvds&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
9. Update the mdadm.conf file so the array is assembled on reboot.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo mdadm --examine --scan --config=mdadm.conf &amp;gt;&amp;gt; /etc/mdadm.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Monitoring the RAID==&lt;br /&gt;
&lt;br /&gt;
You can always take a look at /proc/mdstat. You can also use the mdadm command to check the RAID array status.&lt;br /&gt;
&lt;br /&gt;
The mdadm --detail command we ran above will also show spare and failed disks.&lt;br /&gt;
&lt;br /&gt;
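A quick health check might look like this (a sketch: /dev/md/oracle0 is the array created above, and on a healthy mirror the /proc/mdstat status reads [UU], with an underscore replacing a U for a failed half):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /proc/mdstat&lt;br /&gt;
$ sudo mdadm --detail /dev/md/oracle0 | grep -i failed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;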
==Notes of interest==&lt;br /&gt;
&lt;br /&gt;
Cheatsheet: http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/&lt;br /&gt;
&lt;br /&gt;
HOWTO: http://tldp.org/HOWTO/Software-RAID-HOWTO-6.html&lt;br /&gt;
&lt;br /&gt;
http://edoceo.com/howto/mdadm-raid1&lt;/div&gt;</summary>
		<author><name>Kinscoe</name></author>
	</entry>
</feed>