RAID stands for Redundant Array of Independent Disks; it was originally known as Redundant Array of Inexpensive Disks.
RAID devices are virtual devices created from two or more real block devices. This allows multiple devices (typically disk drives or partitions thereof) to be combined into a single
device to hold (for example) a single filesystem. Some RAID levels include redundancy and so can survive some degree of device failure.
Linux Software RAID devices are implemented through the md (Multiple Devices) device driver.
Currently, Linux supports LINEAR md devices, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and CONTAINER.
To set up software RAID we need the mdadm utility, which is provided by the mdadm package. mdadm – manage MD devices aka Linux Software RAID.
Package installation:
yum install mdadm
mdadm has several major modes of operation:
Assemble:
Assemble the components of a previously created array into an active array. Components can be explicitly given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on request, fiddle superblock information so as to assemble a faulty array.
Build:
Build an array that doesn’t have per-device metadata (superblocks). For these sorts of arrays, mdadm cannot differentiate between initial creation and subsequent assembly of an array. It also cannot perform any checks that appropriate components have been requested. Because of this, the Build mode should only be used together with a complete understanding of what you are doing.
Create:
Create a new array with per-device metadata (superblocks). Appropriate metadata is written to each device, and then the array comprising those devices is activated. A ’resync’
process is started to make sure that the array is consistent (e.g. both sides of a mirror contain the same data) but the content of the device is left otherwise untouched. The
array can be used as soon as it has been created. There is no need to wait for the initial resync to finish.
Follow or Monitor:
Monitor one or more md devices and act on any state changes. This is only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as only these have interesting state. RAID0 or
Linear never have missing, spare, or failed drives, so there is nothing to monitor.
Grow:
Grow (or shrink) an array, or otherwise reshape it in some way. Currently supported growth options include changing the active size of component devices and changing the number of active devices in Linear and RAID levels 0/1/4/5/6, changing the RAID level between 0, 1, 5, and 6, and between 0 and 10, changing the chunk size and layout for RAID 0,4,5,6,10, as well as adding or removing a write-intent bitmap.
Incremental Assembly:
Add a single device to an appropriate array. If the addition of the device makes the array runnable, the array will be started. This provides a convenient interface to a hot-plug system. As each device is detected, mdadm has a chance to include it in some array as appropriate. Optionally, when the --fail flag is passed in, the device will be removed from any active array instead of being added.
If a CONTAINER is passed to mdadm in this mode, then any arrays within that container will be assembled and started.
Manage:
This is for doing things to specific components of an array such as adding new spares and removing faulty devices.
Misc:
This is an ’everything else’ mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.
Auto-detect:
This mode does not act on a specific device or array, but rather it requests the Linux Kernel to activate any auto-detected arrays.
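As a quick reference, the major modes above each correspond to a command-line switch. A minimal shell sketch of that mapping (the switch names are the standard ones from the mdadm man page; the helper function itself is just for illustration):

```shell
# Map an mdadm mode name to its long command-line switch.
# Helper function is illustrative; the switches are mdadm's real ones.
mdadm_mode_flag() {
  case "$1" in
    assemble)    echo "--assemble" ;;     # -A
    build)       echo "--build" ;;        # -B
    create)      echo "--create" ;;       # -C
    monitor)     echo "--monitor" ;;      # -F, also --follow
    grow)        echo "--grow" ;;         # -G
    incremental) echo "--incremental" ;;  # -I
    manage)      echo "--manage" ;;
    misc)        echo "--misc" ;;
    autodetect)  echo "--auto-detect" ;;
    *)           echo "unknown" ;;
  esac
}

mdadm_mode_flag create
```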
We will look at CREATE mode in detail:
Usage:
mdadm --create md-device --chunk=X --level=Y --raid-devices=Z devices
This usage will initialize a new md array, associate some devices with it, and activate the array. In order to create an array with some devices missing, use the special word ‘missing’ in place of the relevant device name.
Before devices are added, they are checked to see if they already contain raid superblocks or filesystems. They are also checked to see if the variance in device size exceeds 1%.
If any discrepancy is found, the user will be prompted for confirmation before the array is created. The presence of ‘--run’ can override this caution.
If the --size option is given then only that many kilobytes of each device is used, no matter how big each device is. If no --size is given, the apparent size of the smallest drive given is used for raid level 1 and greater, and the full device is used for other levels.
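To make the sizing rule concrete: for RAID 1 and higher the per-device size is capped at the smallest member, and for a parity level like RAID 5 one device’s worth of capacity goes to parity. A small shell sketch, using the hypothetical partition sizes (in KiB) from the RAID 5 example later in this article:

```shell
# Hypothetical member sizes in KiB; the smallest one caps the rest.
sizes="112423 112455 112455"
smallest=$(echo $sizes | tr ' ' '\n' | sort -n | head -1)

n=3                       # --raid-devices
used_dev_size=$smallest   # approximate per-device usable size (mdadm also
                          # reserves a little space for the superblock)

raid5_capacity=$(( (n - 1) * used_dev_size ))  # one device's worth is parity
raid0_capacity=$(( n * used_dev_size ))        # striping uses all members

echo "RAID5 usable: ${raid5_capacity}K"
```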
Options that are valid with --create (-C) are:
--bitmap= : Create a bitmap for the array with the given filename, or an internal bitmap if ‘internal’ is given
--chunk= -c : chunk size in kibibytes
--rounding= : rounding factor for linear array (==chunk size)
--level= -l : raid level: 0,1,4,5,6,10,linear,multipath and synonyms
--parity= -p : raid5/6 parity algorithm: {left,right}-{,a}symmetric
--layout= : same as --parity, for RAID10: [fno]NN
--raid-devices= -n : number of active devices in array
--spare-devices= -x : number of spare (eXtra) devices in initial array
--size= -z : Size (in K) of each drive in RAID1/4/5/6/10 - optional
--data-offset= : Space to leave between start of device and start of array data.
--force -f : Honour devices as listed on command line. Don’t insert a missing drive for RAID5.
--run -R : insist on running the array even if not all devices are present or some look odd.
--readonly -o : start the array readonly - not supported yet.
--name= -N : Textual name for array - max 32 characters
--bitmap-chunk= : bitmap chunksize in Kilobytes.
--delay= -d : bitmap update delay in seconds.
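Putting several of these options together, a fuller create invocation might look like the following. The command is echoed rather than executed here, since actually running it requires root and real block devices; the device names are hypothetical:

```shell
# Build a create command with an explicit chunk size and one spare.
# Device names are hypothetical; adjust to your system before running.
cmd="mdadm --create /dev/md0 --level=5 --chunk=512 --raid-devices=3 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1"
echo "$cmd"
```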
RAID 5:
In this example we will create a RAID 5 array. RAID 5 requires a minimum of 3 disks.
Let’s assume we have three partitions, sdb1, sdc1, and sdd1, each assigned the partition type Linux raid autodetect (fd).
Device Boot      Start   End     Blocks    Id  System
/dev/sdb1            1    14     112423+   fd  Linux raid autodetect
/dev/sdc1           15    28     112455    fd  Linux raid autodetect
/dev/sdd1           29    42     112455    fd  Linux raid autodetect
Execute the command below to create the RAID 5 md device:
mdadm --create /dev/md0 --level=raid5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
How to check the status of a RAID array:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      222208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
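The `[3/3] [UUU]` fields are the quickest health indicators: `[3/3]` means 3 of 3 devices are present, and each `U` marks an in-sync member (a failed or missing slot shows as `_`). A small sketch that checks this from a captured status line (here a sample string rather than the live /proc/mdstat):

```shell
# Sample status line; on a real system, read it from /proc/mdstat.
status='222208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]'

state="healthy"
case "$status" in
  *_*) state="degraded" ;;  # a '_' inside [UUU] marks a failed/missing member
esac
echo "array is $state"
```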
How to check the details of a RAID array:
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jan 13 05:02:04 2016
     Raid Level : raid5
     Array Size : 222208 (217.04 MiB 227.54 MB)
  Used Dev Size : 111104 (108.52 MiB 113.77 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Jan 18 21:47:04 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : example:0  (local to host graphite)
           UUID : a3d21f60:b1ec289b:60607a48:75a77776
         Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdc1
       3       8       19        2      active sync   /dev/sdd1
How to stop a RAID array:
mdadm --manage --stop /dev/md0
How to add spare disks to a RAID array:
mdadm --manage /dev/md0 --add /dev/sdb6
How to find spare disks in a RAID array:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb6[4](S) sdb3[3] sdb2[1] sdb1[0]
      222208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
How to find faulty disks in a RAID array:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb6[4] sdb3[3](F) sdb2[1] sdb1[0]
      222208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
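The `(S)` and `(F)` suffixes in the device list mark spare and faulty members respectively. A small sketch that extracts the faulty members from a captured line (here a sample string; on a live system you would feed it /proc/mdstat, and `(S)` spares can be extracted the same way):

```shell
# Sample device list line as it appears in /proc/mdstat.
line='md0 : active raid5 sdb6[4] sdb3[3](F) sdb2[1] sdb1[0]'

# Keep fields tagged (F), then strip the [n](F) suffix to get device names.
faulty=$(echo "$line" | tr ' ' '\n' | grep '(F)' | sed 's/\[.*//')
echo "faulty: $faulty"
```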
How to replace a faulty disk with a newly added disk in a RAID array:
# mdadm --manage /dev/md0 --replace /dev/sdb3 --with /dev/sdb6
mdadm: Marked /dev/sdb3 (device 2 in /dev/md0) for replacement
mdadm: Marked /dev/sdb6 in /dev/md0 as replacement for device 2
How to remove a disk from a RAID array:
# mdadm --manage /dev/md0 --remove /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md0
Errors:
mdadm: /dev/sdb5 not large enough to join array
Check the size of the disk you are adding to the existing array; it must be at least as large as the array’s per-device size. Also check the partition type.
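A quick way to see whether a candidate partition can join is to compare its size against the array’s per-device size (Used Dev Size in mdadm --detail, 111104K in the example above). A sketch with hypothetical sizes:

```shell
# Sizes in KiB. used_dev_size comes from 'mdadm --detail' ("Used Dev Size");
# the candidate's size could come from /proc/partitions or blockdev.
used_dev_size=111104
candidate_size=104388   # hypothetical size of the too-small partition

result="large enough"
if [ "$candidate_size" -lt "$used_dev_size" ]; then
  result="too small to join the array"
fi
echo "$result"
```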