RAID array for Linux

RAID

Redundant Array of Independent Disks (RAID) is a storage technology that combines multiple disk drive components (typically disk drives or partitions thereof) into a logical unit. Depending on the RAID implementation, this logical unit can be a file system or an additional transparent layer that can hold several partitions. Data is distributed across the drives in one of several ways called RAID levels, depending on the level of redundancy and performance required. The RAID level chosen can thus prevent data loss in the event of a hard disk failure, increase performance, or both.

This article explains how to create/manage a software RAID array using mdadm.

RAID levels

Despite the redundancy implied by most RAID levels, RAID does not guarantee that data is safe. A RAID will not protect data if there is a fire, if the computer is stolen, or if multiple hard drives fail at once. Furthermore, installing a system with RAID is a complex process that may destroy data.

Standard RAID levels

There are many different levels of RAID; listed below are the most common.

RAID 0
Uses striping to combine disks. Even though it does not provide redundancy, it is still considered RAID. It does, however, provide a large speed benefit. If the speed increase is worth the possibility of data loss (for a swap partition, for example), choose this RAID level. On a server, RAID 1 and RAID 5 arrays are more appropriate. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.

RAID 1
The most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. The example will be using RAID 1 for everything except swap and temporary data. Note that with a software implementation, RAID 1 is the only option for the boot partition, because bootloaders reading the boot partition do not understand RAID, whereas a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.

RAID 5
Requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.


Note: RAID 5 is a common choice due to its combination of speed and data redundancy. The caveat is that if one drive fails and another drive fails before the first has been replaced, all data will be lost. Furthermore, with modern disk sizes and expected unrecoverable read error (URE) rates on consumer disks, the rebuild of a 4 TiB array is expected (i.e. has a higher than 50% chance) to encounter at least one URE. Because of this, RAID 5 is no longer advised by the storage industry.

RAID 6
Requires 4 or more physical drives, and provides the benefits of RAID 5 but with protection against two drive failures. RAID 6 also uses striping, like RAID 5, but stores two distinct parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 6 can withstand the loss of two member disks. Its robustness against unrecoverable read errors is somewhat better, because the array still has parity blocks when rebuilding from a single failed drive. However, given the overhead, RAID 6 is costly, and in most settings RAID 10 in far2 layout (see below) provides better speed and robustness, and is therefore preferred.
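As an illustration, arrays at these standard levels are created with mdadm --create. What follows is a minimal sketch with placeholder device names (/dev/md0 and /dev/sdX1 are examples only), not a full installation procedure. A two-disk RAID 1 mirror:

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

or a three-disk RAID 5:

sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1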

Nested RAID levels

RAID 1+0
RAID 1+0 is a nested RAID that combines two of the standard RAID levels to gain performance and additional redundancy. It is commonly referred to as RAID 10; however, Linux MD RAID10 is slightly different from simple RAID layering, see below.

RAID 10
RAID10 under Linux is built on the concepts of RAID 1+0; however, it implements this as a single layer, with multiple possible layouts.

The near X layout on Y disks repeats each chunk X times on Y/2 stripes, but does not need X to divide Y evenly. The chunks are placed at almost the same location on each disk they are mirrored on, hence the name. It can work with any number of disks, starting at 2. Near 2 on 2 disks is equivalent to RAID 1; near 2 on 4 disks is equivalent to RAID 1+0.

The far X layout on Y disks is designed to offer striped read performance on a mirrored array. It accomplishes this by dividing each disk into two sections, say front and back: what is written to the front of disk 1 is mirrored on the back of disk 2, and vice versa. This makes it possible to stripe sequential reads, which is where RAID 0 and RAID 5 get their performance from. The drawback is a slight sequential-write penalty, because of the distance the heads need to seek to the other section of the disk to store the mirror. RAID10 in far 2 layout is nevertheless preferable to layered RAID 1+0 and to RAID 5 whenever read speed is a concern and availability/redundancy is crucial. It is still not a substitute for backups. See the Wikipedia page for more information.
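As a sketch (again with placeholder device names), the layout is selected with mdadm's --layout option when the array is created; near 2 on 4 disks (equivalent to RAID 1+0):

sudo mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

or far 2 on 2 disks:

sudo mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1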


Warning: mdadm cannot reshape arrays in far X layouts, which means that once the array is created, you will not be able to mdadm --grow it. For example, if you have a 4x1TB RAID10 array and you want to switch to 2TB disks, your usable capacity will remain 2TB. For such use cases, stick to near X layouts.
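With a near layout, by contrast, capacity can be increased after every member has been replaced, one at a time, by a larger disk and resynced. A minimal sketch of the final step (assuming all members have already been swapped; the array device name is a placeholder):

sudo mdadm --grow /dev/md0 --size=max

The filesystem on top of the array then still needs to be resized separately.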

RAID level comparison

RAID level | Data redundancy | Physical drive utilization | Read performance | Write performance | Min drives
0          | No              | 100%                       | nX (best)        | nX (best)         | 2
1          | Yes             | 50%                        | Up to nX         | 1X                | 2
5          | Yes             | (n-1)/n                    | (n-1)X           | Reduced by parity calculation | 3
6          | Yes             | (n-2)/n                    | (n-2)X           | Reduced by dual parity calculation | 4
10 (far2)  | Yes             | 50%                        | nX; best, on par with RAID 0 but redundant | (n/2)X | 2


Installing via the GUI

ubuntu_raid_00.png

Warning: the /boot filesystem cannot use any software RAID level other than 1 with the stock Ubuntu bootloader. If you want to use some other RAID level for most things, you will need to create separate partitions and make a RAID 1 device for /boot.

Warning: this will remove all data on hard drives.

1. Select "Manual" as your partition method

ubuntu_raid_01.png

2. Select your hard drive, and agree to "Create a new empty partition table on this device?"

ubuntu_raid_02.png ubuntu_raid_03.png

3. Select the "FREE SPACE" on the 1st drive, then select "automatically partition the free space"

ubuntu_raid_04.png ubuntu_raid_05.png

4. Ubuntu will create 2 partitions: / and swap, as shown below:

ubuntu_raid_06.png

5. On the / partition, select "bootable flag" and set it to "on"

ubuntu_raid_06.png

6. Repeat steps 2 to 5 for the other hard drive

Configuring the RAID

  1. Once you have completed your partitioning, in the main "Partition Disks" page select "Configure Software RAID".
  2. Select "Yes".
  3. Select "Create new MD drive".
  4. Select the RAID type: RAID 0, RAID 1, RAID 5 or RAID 6.
  5. Enter the number of active devices: RAID 0 and RAID 1 need 2 drives, RAID 5 needs 3, and RAID 6 needs 4.
  6. Enter the number of spare devices. Enter 0 if you have no spare drive.
  7. Select which partitions to use.
  8. Repeat steps 3 to 7 for each pair of partitions you have created.
  9. A filesystem and mount point will need to be specified for each RAID device. By default they are set to "do not use".
  10. Once done, select Finish.

Boot Loader

If the second drive will not boot on its own (for example, after the first drive has failed), simply install GRUB onto the other drive as well:
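A minimal sketch, assuming the second RAID member is /dev/sdb (adjust the device name to match your system):

sudo grub-install /dev/sdb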

Boot from Degraded Disk

  • Whether the system is allowed to boot from a degraded array can be specified on the kernel boot line with the bootdegraded=[true|false] parameter.
  • You can also use sudo dpkg-reconfigure mdadm rather than editing the configuration by hand (see the sketch after this list).
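A minimal sketch of both approaches; the configuration file path below is the conventional one on Ubuntu and should be verified on your release. To allow degraded boots permanently:

echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm
sudo update-initramfs -u

Or, for a single boot, add bootdegraded=true to the kernel command line from the GRUB menu.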

Verify the RAID

  1. Shut down your server.
  2. Remove the power and data cables from your first drive.
  3. Start your server and check whether it can boot from the degraded array (see the check after this list).
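Once booted, you can confirm that the array is running degraded:

cat /proc/mdstat

For a two-disk RAID 1 this will show [2/1] and [U_] (or [_U]) instead of [2/2] [UU] while one member is missing.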

Troubleshooting

Swap space doesn’t come up, error message in dmesg

Provided the RAID itself is working fine, this can be fixed with:

sudo update-initramfs -k all -u
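After rebooting, you can confirm that the swap area comes up with either of:

cat /proc/swaps
swapon -s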

Using the mdadm CLI

For those who want full control over the RAID configuration, the mdadm command-line interface provides it.

Checking the status of your RAID

Two useful commands to check the status are cat /proc/mdstat and mdadm --query --detail (shown further below). On an example machine, cat /proc/mdstat might report:

Personalities : [raid1] [raid6] [raid5] [raid4]
md5 : active raid1 sda7[0] sdb7[1]
      62685504 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      256896 blocks [2/2] [UU]

md6 : active raid5 sdc1[0] sde1[2] sdd1[1]
      976767872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

From this output you can see that the available personalities on this machine are raid1, raid6, raid5, and raid4, which means this machine can run arrays configured at any of those RAID levels.

You can also see that, of the three example md devices, two are RAID 1 mirrors: md0 and md5. md5 is a raid1 array made up of /dev/sda partition 7 (sda7) and /dev/sdb partition 7 (sdb7), containing 62685504 blocks, with 2 out of 2 disks available and both in sync.

The same can be said of md0, only it is smaller (as you can see from the blocks value) and is made up of /dev/sda1 and /dev/sdb1.

md6 is different: it is a raid5 array, striped across three disks, /dev/sdc1, /dev/sde1 and /dev/sdd1, with a 64k "chunk" (write) size. "Algorithm 2" identifies the parity layout used when writing across the array (left-symmetric, i.e. "left disk to right disk"). You can see that all three disks are present and in sync.

sudo mdadm --query --detail /dev/md*

Replace * with the relevant md device number, for example /dev/md0.

Disk Array Operation

Note: You can add or remove disks, or mark them as faulty, without stopping an array.

1. To mark a disk as faulty/failed:

sudo mdadm --fail /dev/md0 /dev/sda1

Where /dev/md0 is the array device and /dev/sda1 is the disk to mark as faulty.

2. To remove a disk from an array:

sudo mdadm --remove /dev/md0 /dev/sda1

Where /dev/md0 is the array device and /dev/sda1 is the faulty disk.

3. To add a new disk to an array:

sudo mdadm --add /dev/md0 /dev/sda1

Where /dev/md0 is the array device and /dev/sda1 is the new disk. Note: This is not the same as "growing" the array (see the sketch after these steps)!

4. To start (reassemble) an array that was previously created:

sudo mdadm --assemble --scan

mdadm will scan for defined arrays and start assembling them.

5. To track the status of the array while it is being assembled or rebuilt:

watch cat /proc/mdstat
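As noted in step 3, adding a disk is not the same as growing the array. To actually grow an array onto an added disk, you would also increase the number of active devices; a minimal sketch (three-device example; a reshape like this can take many hours, so back up first):

sudo mdadm --grow /dev/md0 --raid-devices=3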

Known bugs

Ubuntu releases starting with 12.04 do not support nested RAID levels such as 1+0 or 5+0, due to an unresolved issue: https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1171945

Resources

Installation/SoftwareRAID (Ubuntu community wiki)


