Linux: How to verify hardware or software RAID?
Is it possible to tell from a Linux command whether my RAID is hardware or software RAID? For example, on my machine (a blade manufactured by Dell), /proc/mdstat seems to show that my RAID is software RAID:
cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdr2[1] sdq2[0]
      390054912 blocks super 1.2 [2/2] [UU]
      bitmap: 1/3 pages [4KB], 65536KB chunk
md0 : active raid1 sdr1[1] sdq1[0]
      524224 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
lsblk
vg00-lv_root 253:0  0    50G 0 lvm   /
└─md1        9:1    0   372G 0 raid1
  ├─sdq2     65:2   0 372.1G 0 part
  │ └─sdq    65:0   0 372.6G 0 disk
  └─sdr2     65:18  0 372.1G 0 part
    └─sdr    65:16  0 372.6G 0 disk
vg00-lv_swap 253:1  0    16G 0 lvm   [SWAP]
└─md1        9:1    0   372G 0 raid1
  ├─sdq2     65:2   0 372.1G 0 part
  │ └─sdq    65:0   0 372.6G 0 disk
  └─sdr2     65:18  0 372.1G 0 part
    └─sdr    65:16  0 372.6G 0 disk
vg00-lv_var  253:2  0   100G 0 lvm   /var
└─md1        9:1    0   372G 0 raid1
  ├─sdq2     65:2   0 372.1G 0 part
  │ └─sdq    65:0   0 372.6G 0 disk
  └─sdr2     65:18  0 372.1G 0 part
    └─sdr    65:16  0 372.6G 0 disk

mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Mon Jun 26 13:14:03 2017
     Raid Level : raid1
     Array Size : 390054912 (371.99 GiB 399.42 GB)
  Used Dev Size : 390054912 (371.99 GiB 399.42 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Sun Jul 9 12:45:29 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           Name : localhost:1
           UUID : b13eee32:f5894d0c:23aaf608:a67290c9
         Events : 605

    Number   Major   Minor   RaidDevice State
       0      65       2        0       active sync   /dev/sdq2
       1      65      18        1       active sync   /dev/sdr2
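An answer in short: arrays listed in /proc/mdstat are Linux md (software) RAID, while a hardware controller shows up as a device on the PCI bus. A minimal sketch of that check, written as a function that classifies pasted command output so it can be tried without root; the sample md line is taken from the question above:

```shell
# Classify RAID from command output passed in as text.
# Prints "software" if md arrays are present, "hardware" if a RAID
# controller is visible on the PCI bus, "none" otherwise.
classify_raid() {
    mdstat_text="$1"
    lspci_text="$2"
    if printf '%s\n' "$mdstat_text" | grep -q '^md'; then
        echo "software"
    elif printf '%s\n' "$lspci_text" | grep -qi 'raid'; then
        echo "hardware"
    else
        echo "none"
    fi
}

# On a live system you would call:
#   classify_raid "$(cat /proc/mdstat)" "$(lspci)"
classify_raid "md1 : active raid1 sdr2[1] sdq2[0]" ""
```

Note that both can be true at once (md arrays built on top of a hardware controller's logical drives), so on a real system it is worth running both checks independently.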
RAID
Redundant Array of Independent Disks (RAID) is a storage technology that combines multiple disk drive components (typically disk drives or partitions thereof) into a logical unit. Depending on the RAID implementation, this logical unit can be a file system or an additional transparent layer that can hold several partitions. Data is distributed across the drives in one of several ways called RAID levels, depending on the level of redundancy and performance required. The RAID level chosen can thus prevent data loss in the event of a hard disk failure, increase performance, or be a combination of both.
This article explains how to create/manage a software RAID array using mdadm.
RAID levels
Despite redundancy implied by most RAID levels, RAID does not guarantee that data is safe. A RAID will not protect data if there is a fire, the computer is stolen or multiple hard drives fail at once. Furthermore, installing a system with RAID is a complex process that may destroy data.
Standard RAID levels
There are many different levels of RAID; listed below are the most common.
RAID 0
Uses striping to combine disks. Even though it does not provide redundancy, it is still considered RAID. It does, however, provide a big speed benefit. If the speed increase is worth the possibility of data loss (for a swap partition, for example), choose this RAID level. On a server, RAID 1 and RAID 5 arrays are more appropriate. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.

RAID 1
The most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. The example will be using RAID 1 for everything except swap and temporary data. Please note that with a software implementation, the RAID 1 level is the only option for the boot partition, because bootloaders reading the boot partition do not understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.

RAID 5
Requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.
Note: RAID 5 is a common choice due to its combination of speed and data redundancy. The caveat is that if one drive fails and another drive fails before the first is replaced, all data will be lost. Furthermore, with modern disk sizes and expected unrecoverable read error (URE) rates on consumer disks, the rebuild of a 4TiB array is expected (i.e. has a higher than 50% chance) to hit at least one URE. Because of this, RAID 5 is no longer advised by the storage industry.
RAID 6
Requires 4 or more physical drives, and provides the benefits of RAID 5 but with security against two drive failures. RAID 6 also uses striping, like RAID 5, but stores two distinct parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 6 can withstand the loss of two member disks. The robustness against unrecoverable read errors is somewhat better, because the array still has parity blocks when rebuilding from a single failed drive. However, given the overhead, RAID 6 is costly and in most settings RAID 10 in far2 layout (see below) provides better speed benefits and robustness, and is therefore preferred.
Nested RAID levels
RAID 1+0
RAID 1+0 is a nested RAID level that combines two of the standard RAID levels to gain performance and additional redundancy. It is commonly referred to as RAID10; however, Linux MD RAID10 is slightly different from simple RAID layering, see below.

RAID 10
RAID10 under Linux is built on the concepts of RAID 1+0; however, it implements this as a single layer, with multiple possible layouts.

The near X layout on Y disks repeats each chunk X times on Y/2 stripes, but does not need X to divide Y evenly. The chunks are placed on almost the same location on each disk they are mirrored on, hence the name. It can work with any number of disks, starting at 2. Near 2 on 2 disks is equivalent to RAID1, near 2 on 4 disks to RAID 1+0.

The far X layout on Y disks is designed to offer striped read performance on a mirrored array. It accomplishes this by dividing each disk in two sections, say front and back, and what is written to disk 1 front is mirrored in disk 2 back, and vice versa. This has the effect of being able to stripe sequential reads, which is where RAID0 and RAID5 get their performance from. The drawback is that sequential writing has a very slight performance penalty because of the distance the disk needs to seek to the other section of the disk to store the mirror. RAID10 in far 2 layout is, however, preferable to layered RAID 1+0 and RAID5 whenever read speeds are of concern and availability/redundancy is crucial. However, it is still not a substitute for backups. See the Wikipedia page for more information.
Warning: mdadm cannot reshape arrays in far X layouts, which means that once the array is created, you will not be able to mdadm --grow it. For example, if you have a 4x1TB RAID10 array and you want to switch to 2TB disks, your usable capacity will remain 2TB. For such use cases, stick to near X layouts.
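The layout names map directly onto mdadm's --layout flag (n2 for near 2, f2 for far 2). A minimal sketch that only assembles the command line rather than running it, so it is safe to execute anywhere; the device names and /dev/md0 are assumptions for illustration:

```shell
# Build (but do not run) an mdadm create command for a RAID10 layout.
# First argument is the layout (n2 = near 2, f2 = far 2), the rest
# are the member devices.
mk_raid10_cmd() {
    layout="$1"
    shift
    echo "mdadm --create /dev/md0 --level=10 --layout=$layout --raid-devices=$# $*"
}

# near 2 on two disks behaves like RAID1:
mk_raid10_cmd n2 /dev/sda1 /dev/sdb1
# far 2 adds striped read performance on the same mirror:
mk_raid10_cmd f2 /dev/sda1 /dev/sdb1
```

Remember from the warning above that a far-layout array cannot be reshaped later, so choose the layout before creating the array.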
RAID level comparison
RAID level | Data redundancy | Physical drive utilization | Read performance | Write performance | Min drives
---|---|---|---|---|---
0 | No | 100% | nX (best) | nX (best) | 2
1 | Yes | 50% | Up to nX if multiple processes are reading, otherwise 1X | 1X | 2
5 | Yes | (n-1)/n | (n-1)X | Varies | 3
6 | Yes | (n-2)/n | (n-2)X | Varies | 4
10 (far2) | Yes | 50% | nX; best; on par with RAID0 but redundant | (n/2)X | 2
10 (near2) | Yes | 50% | Up to nX like RAID1 | (n/2)X | 2
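The utilization column follows from simple arithmetic. A sketch computing usable capacity per level from the number of member disks n and the smallest member size (the sizes in the example call are made up):

```shell
# Usable capacity of an n-disk array whose smallest member has $size
# capacity (any unit, as long as it is consistent).
usable_capacity() {
    level="$1"; n="$2"; size="$3"
    case "$level" in
        0)  echo $(( n * size )) ;;         # striping only, no redundancy
        1)  echo "$size" ;;                 # every disk holds a full copy
        5)  echo $(( (n - 1) * size )) ;;   # one disk's worth of parity
        6)  echo $(( (n - 2) * size )) ;;   # two disks' worth of parity
        10) echo $(( n * size / 2 )) ;;     # two copies of every chunk
        *)  echo "unknown level" >&2; return 1 ;;
    esac
}

usable_capacity 5 4 1000    # four 1000 GiB members in RAID5
```

This also makes the earlier warning about far-layout RAID10 concrete: four 1TB disks give usable_capacity 10 4 1000 = 2000, and that figure does not change just because larger disks are swapped in, since the array cannot be reshaped.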
How to Find Hardware RAID Information on Linux
If you don't know how to find hardware RAID information on Linux, you have come to the right place. This post will help you identify whether your server has a physical RAID controller; if it does not, it may be using software RAID. Let's check for a physical RAID controller.
To get hardware information or specifications for the server, you can use the lspci command. On its own it produces a very long, low-level listing of all hardware, so it is better to pipe it through grep to filter out the specific information you need. Since we only want RAID information, grep for the word "raid":
[root@selva ~]# lspci -vv | grep -i raid
01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 01)
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas
The example output above shows the exact RAID controller specification we are looking for and confirms that a RAID controller is present in the server. It does not, however, tell us whether RAID has actually been configured. Let's find out whether any RAID has been configured.
On every Linux system, the file /proc/scsi/scsi lists all the SCSI devices and disks the kernel knows about. Use cat to display it:
[root@selva ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: MegaRAID Model: LD 0 RAID5 699G Rev: 513O
Type: Direct-Access ANSI SCSI revision: 02
If your server is configured with hardware RAID, the details appear here; in the example above the logical drive is RAID 5. Nowadays all RAID controller manufacturers also provide command-line and web-based utilities, which make it easy to obtain much more information.
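A small sketch for pulling just the RAID level out of that file; the sample string is the model line shown above, and on a live system you would pass in "$(cat /proc/scsi/scsi)" instead:

```shell
# Extract the first RAID level token (e.g. RAID5) from /proc/scsi/scsi text.
# Note that "MegaRAID" does not match, because the pattern requires a digit
# after "RAID".
parse_raid_level() {
    printf '%s\n' "$1" | grep -o 'RAID[0-9]\{1,\}' | head -n 1
}

parse_raid_level "Vendor: MegaRAID Model: LD 0 RAID5 699G Rev: 513O"
```

If the function prints nothing, the controller is either not exposing a RAID logical drive or is passing the disks through individually.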
For HP systems, this helps narrow down to a particular model or part number. HP setups are fairly easy: run cat /proc/driver/cciss/cciss* and you will receive output like:
cciss1: HP Smart Array P800 Controller
Board ID: 0x3223103c
Firmware Version: 4.12
IRQ: 122
Logical drives: 2
Current Q depth: 0
Current # commands on controller: 0
Max Q depth since init: 217
Max # commands on controller since init: 386
Max SG entries since init: 31
Sequential access devices: 0
cciss/c1d0: 587.12GB RAID 1(1+0)
cciss/c1d1: 1000.17GB RAID 1(1+0)
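The cciss listing can be summarized the same way. A sketch counting the logical drives in output like the above, again written against pasted text so it can be tried on any machine; on an HP system you would pass in "$(cat /proc/driver/cciss/cciss*)":

```shell
# Count cciss logical drives (lines like "cciss/c1d0: ...") in
# controller output passed in as text.
count_logical_drives() {
    printf '%s\n' "$1" | grep -c '^cciss/'
}

sample="cciss1: HP Smart Array P800 Controller
cciss/c1d0: 587.12GB RAID 1(1+0)
cciss/c1d1: 1000.17GB RAID 1(1+0)"

count_logical_drives "$sample"
```

The result should agree with the "Logical drives:" field the driver reports, which gives a quick sanity check that every configured array is visible.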