Detecting, querying and testing
This section is about life with a software RAID system: communicating with the arrays and tinkering with them.
Note that when manipulating md devices, you should always remember that you are working with entire filesystems. So, although there may be some redundancy keeping your files alive, you must proceed with caution.
Detecting a drive failure
Firstly: mdadm has an excellent ‘monitor’ mode which will send an email when a problem is detected in any array (more about that later).
Of course the standard log and stat files will record more details about a drive failure.
Whatever has happened, /var/log/messages will happily fill screens with error messages. But when a disk crashes, huge numbers of kernel errors are reported. Some nasty examples, for the masochists:
    kernel: scsi0 channel 0 : resetting for second half of retries.
    kernel: SCSI bus is being reset for host 0 channel 0.
    kernel: scsi0: Sending Bus Device Reset CCB #2666 to Target 0
    kernel: scsi0: Bus Device Reset CCB #2666 to Target 0 Completed
    kernel: scsi : aborting command due to timeout : pid 2649, scsi0, channel 0, id 0, lun 0 Write (6) 18 33 11 24 00
    kernel: scsi0: Aborting CCB #2669 to Target 0
    kernel: SCSI host 0 channel 0 reset (pid 2644) timed out - trying harder
    kernel: SCSI bus is being reset for host 0 channel 0.
    kernel: scsi0: CCB #2669 to Target 0 Aborted
    kernel: scsi0: Resetting BusLogic BT-958 due to Target 0
    kernel: scsi0: *** BusLogic BT-958 Initialized Successfully ***
Most often, disk failures look like these,
    kernel: scsidisk I/O error: dev 08:01, sector 1590410
    kernel: SCSI disk error : host 0 channel 0 id 0 lun 0 return code = 28000002
    kernel: hde: read_intr: error=0x10 { SectorIdNotFound }, CHS=31563/14/35, sector=0
    kernel: hde: read_intr: status=0x59
And, as expected, the classic look at /proc/mdstat will also reveal problems:
    Personalities : [linear] [raid0] [raid1] [translucent]
    read_ahead not set
    md7 : active raid1 sdc9[0] sdd5[8]
          32000 blocks [2/1] [U_]
Later in this section we will learn how to monitor RAID with mdadm so we can receive alert reports about disk failures. Now it's time to learn more about interpreting /proc/mdstat.
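As a quick illustration of what to look for, here is a small shell sketch (my own, not part of the standard tools) that flags degraded arrays by searching the status field of an mdstat-format file for an underscore. On a live system you would simply pass /proc/mdstat:

```shell
# check_mdstat: print "<array>: degraded" for every array whose status
# field (e.g. [U_]) shows a missing or failed member. The function name
# is my own invention; pass /proc/mdstat on a real system.
check_mdstat() {
    awk '
        /^md/       { array = $1 }                # line naming the array
        /\[[U_]+\]/ { if (index($0, "_")) print array ": degraded" }
    ' "$1"
}

# Demonstration on a hypothetical mdstat snapshot:
sample=$(mktemp)
cat > "$sample" <<'EOF'
Personalities : [linear] [raid0] [raid1]
md7 : active raid1 sdc9[0] sdd5[8]
      32000 blocks [2/1] [U_]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]
EOF
check_mdstat "$sample"    # prints: md7: degraded
rm -f "$sample"
```

A healthy two-disk mirror shows [2/2] [UU]; the [2/1] [U_] above is exactly what a one-disk failure looks like.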
Querying the array status
You can always take a look at the array status by doing cat /proc/mdstat. It won't hurt. Take a look at the /proc/mdstat page to learn how to read the file.
Finally, remember that you can also use mdadm to check the arrays out, for example with mdadm --detail /dev/md1. These commands will show spare and failed disks loud and clear.
Simulating a drive failure
If you plan to use RAID to get fault-tolerance, you may also want to test your setup, to see if it really works. Now, how does one simulate a disk failure?
The short story is that you can't, except perhaps for putting a fire axe through the drive you want to "simulate" the fault on. You can never know what will happen if a drive dies. It may electrically take the bus it is attached to with it, rendering all drives on that bus inaccessible. The drive may also just report a read/write fault to the SCSI/IDE/SATA layer, which, if done properly, in turn makes the RAID layer handle this situation gracefully. This is fortunately the way things often go.
Remember that you must be running a redundant RAID level (e.g. RAID-1, RAID-5 or RAID-6) for your array to survive a disk failure. Linear mode or RAID-0 will fail completely when a device is missing.
Force-fail by hardware
If you want to simulate a drive failure, you can just unplug the drive. If your hardware does not support disk hot-unplugging, do this with the power off. If you are only interested in testing whether your data can survive with one disk fewer than usual, there is no point in being a hot-plug cowboy here: take the system down, unplug the disk, and boot it up again.
Look in the syslog, and look at /proc/mdstat to see how the RAID is doing. Did it work? Did you get an email from the mdadm monitor?
Faulty disks should appear marked with an (F) if you look at /proc/mdstat. Also, users of mdadm should see the device state as faulty.
When you've re-connected the disk (with the power off, of course, remember), you can add the "new" device to the RAID again with the 'mdadm --add' command.
Force-fail by software
You can also simulate a drive failure without unplugging anything. Running the command
mdadm --manage --set-faulty /dev/md1 /dev/sdc2
should be enough to fail the disk /dev/sdc2 of the array /dev/md1.
Now the fun begins. First, you should see something like the first line below in your system's log. Something like the second line will appear if you have spare disks configured.
    kernel: raid1: Disk failure on sdc2, disabling device.
    kernel: md1: resyncing spare disk sdb7 to replace failed disk
Checking /proc/mdstat out will show the degraded array. If there was a spare disk available, reconstruction should have started.
Another useful command at this point is mdadm --detail /dev/md1, which shows the state of every member device.
Now you’ve seen how it goes when a device fails. Let’s fix things up.
First, we will remove the failed disk from the array with a command like mdadm /dev/md1 --remove /dev/sdc2.
Note that mdadm cannot pull a disk out of a running array. For obvious reasons, only faulty disks can be hot-removed from an array (even stopping and unmounting the device won’t help — if you ever want to remove a ‘good’ disk, you have to tell the array to put it into the ‘failed’ state as above).
Now we have a /dev/md1 which has just lost a device. This could be a degraded RAID or perhaps a system in the middle of a reconstruction process. We wait until recovery ends before setting things back to normal.
So the trip ends when we send /dev/sdc2 back home with mdadm /dev/md1 --add /dev/sdc2.
As the prodigal son returns to the array, we'll see it become an active member of /dev/md1 if the array needs it. If not, it will be marked as a spare disk. That's management made easy.
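The whole fail/remove/re-add cycle can be wrapped in a small shell function. This is only a sketch under my own naming; on a real system the commands require root and a live array:

```shell
# replace_failed: mark a member disk faulty, hot-remove it, then re-add
# it, mirroring the walkthrough above. The function name is my own;
# the mdadm options are the standard --manage mode ones.
replace_failed() {
    md=$1
    disk=$2
    mdadm --manage "$md" --set-faulty "$disk" || return 1   # simulate the failure
    mdadm --manage "$md" --remove "$disk" || return 1       # hot-remove the faulty member
    mdadm --manage "$md" --add "$disk"                      # re-add: becomes active or spare
}
```

Running replace_failed /dev/md1 /dev/sdc2 as root reproduces the whole exercise in one go; the final --add either re-activates the disk or leaves it as a spare, exactly as described above.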
Simulating data corruption
RAID (be it hardware or software) assumes that if a write to a disk doesn't return an error, then the write was successful. Therefore, if your disk corrupts data without returning an error, your data will become corrupted. This is of course very unlikely to happen, but it is possible, and it would result in a corrupt filesystem.
RAID cannot, and is not supposed to, guard against data corruption on the media. Therefore, it doesn’t make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted.
This is the way things are supposed to work. RAID is not a guarantee of data integrity; it just allows you to keep your data if a disk dies (that is, with RAID levels 1 and above, of course).
Monitoring RAID arrays
You can run mdadm as a daemon by using the follow/monitor mode. If needed, it will send email alerts to the system administrator when arrays encounter errors or fail. Follow mode can also be used to trigger contingency commands when a disk fails, such as giving a second chance to a failed disk by removing and re-inserting it, so a non-fatal failure can be resolved automatically.
Let’s see a basic example. Running
mdadm --monitor --daemonise --mail=root@localhost --delay=1800 /dev/md2
should start an mdadm daemon to monitor /dev/md2. The --daemonise switch tells mdadm to run as a daemon. The delay parameter means that polling will be done in intervals of 1800 seconds. Finally, critical events and fatal errors will be e-mailed to the system manager. That's RAID monitoring made easy.
Finally, the --program or --alert parameters specify a program to be run whenever an event is detected.
Note that, when supplying the -f (--daemonise) switch, the mdadm daemon will never exit once it decides that there are arrays to monitor, so it should normally be run in the background. Remember that you are running a daemon, not a shell command. If mdadm is run to monitor without the -f switch, it will behave as a normal foreground command and wait for you to stop it.
Using mdadm to monitor a RAID array is simple and effective. However, there are fundamental problems with that kind of monitoring — what happens, for example, if the mdadm daemon stops? In order to overcome this problem, one should look towards "real" monitoring solutions. There are a number of free software, open source, and even commercial solutions available which can be used for Software RAID monitoring on Linux. A search on FreshMeat should return a good number of matches.
How to check ‘mdadm’ RAIDs while running?
I'm starting to get a collection of computers at home, and to support them I have my "server" Linux box running a RAID array. It's currently mdadm RAID-1, going to RAID-5 once I have more drives (and then RAID-6, I'm hoping).

However, I've heard various stories about data getting corrupted on one drive and you never noticing due to the other drive being used, up until the point when the first drive fails, and you find your second drive is also screwed (and 3rd, 4th, 5th drives). Obviously backups are important and I'm taking care of that too. However, I know I've previously seen scripts which claim to help against this problem and allow you to check your RAID while it's running. Looking for those scripts again now, I'm finding it hard to find anything similar to what I ran before, and I feel I'm out of date and not understanding whatever has changed.

How would you check a running RAID to make sure all disks are still performing normally? I monitor SMART on all the drives and also have mdadm set to email me in case of failure, but I'd like to know my drives occasionally "check" themselves too.
Sounds like you're already on the right path; you just need to set up a cron job to send you the results of smartctl for your drives.
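A minimal sketch of that idea, assuming smartctl from the smartmontools package; the function name and the drive names are my own examples, so adjust them to your system:

```shell
# smart_report: print a SMART health verdict for each listed drive.
# Save this in a script and run it from cron; cron mails the output
# to the crontab's owner automatically.
smart_report() {
    for d in "$@"; do
        echo "== $d =="
        smartctl -H "$d" 2>/dev/null || echo "smartctl could not query $d"
    done
}
```

A weekly crontab entry such as `0 6 * * 1 /usr/local/bin/smart-report.sh` (a hypothetical path for the saved script) would then mail you a regular health summary.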
The point of RAID with redundancy is that it will keep going as long as it can, but obviously it will detect errors that put it into a degraded mode, such as a failing disk. You can show the current status of an array with mdadm --detail (abbreviated as mdadm -D):
    # mdadm -D /dev/md0
        Number   Major   Minor   RaidDevice State
           0       8        5        0      active sync   /dev/sda5
           1       8       23        1      active sync   /dev/sdb7
Furthermore, when given the --test option, the return status of mdadm -D is nonzero if there is any problem such as a failed component (1 indicates an error that the RAID mode compensates for, and 2 indicates a complete failure).
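That exit-status behaviour makes a cron-friendly health check easy to sketch. The function name below is my own invention; the exit codes are those of mdadm --detail --test:

```shell
# raid_cron_check: report an array that mdadm considers unhealthy.
# `mdadm --detail --test` exit codes: 0 = clean, 1 = degraded,
# 2 = dead, 4 = array not found.
raid_cron_check() {
    if ! mdadm --detail --test "$1" >/dev/null 2>&1; then
        echo "RAID array $1 needs attention"
        return 1
    fi
}
```

Run from cron, any output is mailed to you automatically, so a crontab line invoking it per array is all that's needed.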
You can also get a quick summary of all RAID device status by looking at /proc/mdstat. Information about a RAID device is available in /sys/class/block/md*/md/* as well; see Documentation/md.txt in the kernel documentation. Some /sys entries are writable too; for example, you can trigger a full check of md0 with echo check >/sys/class/block/md0/md/sync_action .
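A tiny wrapper over that sysfs interface can make periodic checks scriptable. The function name and the SYSFS override (there for testing against a fake tree) are my own; the sync_action file itself is the kernel's documented md interface:

```shell
# start_check: ask the kernel to start a full consistency check of an
# md array by writing "check" to its sync_action file, then show the
# file's contents. SYSFS defaults to /sys; override it only for testing.
start_check() {
    dev=$1                                      # array name, e.g. md0
    sysdir="${SYSFS:-/sys}/class/block/$dev/md"
    echo check > "$sysdir/sync_action"          # request the check
    cat "$sysdir/sync_action"                   # show the requested action
}
```

On a real system this needs root; progress then appears in /proc/mdstat, and writing idle to sync_action aborts a running check.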
In addition to these spot checks, mdadm can notify you as soon as something bad happens. Make sure that you have MAILADDR root in /etc/mdadm.conf (some distributions, e.g. Debian, set this up automatically). Then you will receive an email notification as soon as an error (such as a degraded array) occurs.
Make sure that you do receive mail sent to root on the local machine (some modern distributions omit this, because they consider that all email goes through external providers — but receiving local mail is necessary for any serious system administrator). Test this by sending root a mail: echo hello | mail -s test root@localhost . Usually, a proper email setup requires two things:
- Run an MTA on your local machine. The MTA must be set up at least to allow local mail delivery. All distributions come with suitable MTAs, pick anything (but not nullmailer if you want the email to be delivered locally).
- Redirect mail going to system accounts (at least root ) to an address that you read regularly. This can be your account on the local machine, or an external email address. With most MTAs, the address can be configured in /etc/aliases ; you should have a line like
root: djsmiley2k@mail-provider.example.com