Linux mdadm vs lvm

RAIDing with LVM vs MDRAID — pros and cons?

  1. There is a detailed discussion at http://www.olearycomputers.com/ll/linux_mirrors.html, but I could not determine when it was written. There is a similar question on Serverfault: linux LVM mirror vs. MD mirror. However, that question was asked in 2010, and the answers may be out of date.
  2. The changelog entry for version 2.02.87 (12th August 2011) mentions the new RAID support.

Also, there’s the usual advantage that you’re working with logical volumes instead of md volumes: you have lvextend and pvmove available for moving between devices, whereas with md the process is a lot more manual, without any clear benefit.
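For example, migrating data off a retiring disk is a single online operation with LVM. A minimal sketch, assuming a volume group vg0 with an old PV /dev/sdb1 and a new disk /dev/sdd1 (hypothetical names):

    # Add the new disk to the volume group, then move all extents off the old one.
    pvcreate /dev/sdd1
    vgextend vg0 /dev/sdd1
    pvmove /dev/sdb1 /dev/sdd1    # runs online; can be resumed if interrupted
    vgreduce vg0 /dev/sdb1        # finally drop the old PV from the VG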

LVM has always supported raid1 and raid0. More recently, it dropped its own implementation and now uses md’s RAID personality code internally, which opened up the other RAID levels.
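Concretely, the md-backed segment types are exposed through lvcreate --type. A hedged sketch (VG, LV names and sizes are made up):

    # RAID1 mirror with one extra copy, and a RAID5 LV striped across 4 data disks
    lvcreate --type raid1 -m 1 -L 20G -n mirrored vg0
    lvcreate --type raid5 -i 4 -I 64 -L 100G -n bulk vg0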

2 Answers

How mature and featureful is LVM RAID?

LVM-RAID is actually mdraid under the covers. It basically works by creating two logical volumes per RAID device (one for data, called "rimage"; one for metadata, called "rmeta"). It then passes those off to the existing mdraid drivers. So things like handling disk read errors, I/O load balancing, etc. should be fairly mature.
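You can see those hidden sub-LVs with lvs -a. A sketch of what the output looks like for a hypothetical RAID1 LV named root in vg0 (columns trimmed):

    # lvs -a -o name,segtype,devices vg0
      LV              Type   Devices
      root            raid1  root_rimage_0(0),root_rimage_1(0)
      [root_rimage_0] linear /dev/vda1(1)
      [root_rimage_1] linear /dev/vdb1(1)
      [root_rmeta_0]  linear /dev/vda1(0)
      [root_rmeta_1]  linear /dev/vdb1(0)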

Tools

You can’t use mdadm on it (at least, not in any easy way¹), and the LVM RAID tools are nowhere near as mature. For example, in Debian Wheezy, lvs can’t tell you RAID5 sync status. I very much doubt repair and recovery (especially from "that should never happen!" situations) is anywhere near as good as with mdadm. I accidentally ran into one of those situations in my testing and finally just gave up on recovering the array; recovery with mdadm would have been easy.

It gets worse if you’re not using the newest versions of all the tools.
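On newer LVM releases the reporting is better: the sync state of a RAID LV can be queried through extra lvs fields, and a scrub can be triggered much like md’s "check". A sketch, assuming the hypothetical vg0/root LV from above:

    lvs -a -o name,sync_percent,raid_sync_action,raid_mismatch_count vg0/root
    lvchange --syncaction check vg0/root    # start a scrub of the RAID LV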

Missing Features

Current versions of LVM-RAID do not support shrinking (lvreduce) a RAID logical volume, nor do they support changing the number of disks or the RAID level (lvconvert gives an error saying it is not supported yet). lvextend does work, and can even grow RAID levels that mdraid only recently gained the ability to grow, such as RAID10. In my experience, extending LVs is much more common than reducing them, so that’s actually a reasonable trade-off.
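Growing a RAID LV and the filesystem on it is the usual two-step operation (or one step with -r). A minimal sketch with hypothetical names, assuming ext4:

    lvextend -L +10G vg0/root    # grow the RAID LV by 10 GiB
    resize2fs /dev/vg0/root      # then grow the filesystem to match
    # or both at once:
    lvextend -r -L +10G vg0/root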

Some other mdraid features are missing as well, and in particular you can’t tweak all the options you can with mdadm.
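As an illustration of the kind of knobs mdadm exposes at creation time (not an exhaustive list, and not all of them have LVM equivalents; device names are made up):

    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          --chunk=512 --bitmap=internal --layout=left-symmetric \
          /dev/sd[bcde]1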

On older versions (as found in, for example, Debian Wheezy), LVM RAID does not support growing, either. For example, on Wheezy:

    root@LVM-RAID:~# lvextend -L+1g vg0/root
      Extending logical volume root to 11.00 GiB
      Internal error: _alloc_init called for non-virtual segment with no disk space.

In general, you don’t want to run the Wheezy versions.

And all of the above is after you’ve managed to get it installed, which is not a trivial process either.

Tool problems

Playing with my Jessie VM, I disconnected (virtually) one disk. That worked; the machine stayed running. lvs, though, gave no indication the arrays were degraded. I re-attached the disk and removed a second one. It stayed running (this is RAID6). I re-attached that one too; still no indication from lvs. I ran lvconvert --repair on the volume, and it told me it was OK. Then I pulled a third disk, and the machine died. I re-inserted it, rebooted, and am now unsure how to fix it. mdadm --force --assemble would fix this; neither vgchange nor lvchange appears to have that option (lvchange accepts --force, but it doesn’t seem to do anything). Even trying dmsetup to feed the mapping table directly to the kernel, I could not figure out how to recover it.
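For comparison, this is roughly what the recovery looks like on the mdadm side, next to the closest LVM commands tried here (device, VG and LV names are hypothetical):

    # mdadm: force-assemble the array from whatever members are left
    mdadm --assemble --force /dev/md0 /dev/vd[abcd]1

    # LVM-RAID: the nearest equivalents
    lvchange --refresh vg0/root     # re-scan and reload the LV's devices
    lvconvert --repair vg0/root     # replace failed images from free PVs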

Also, mdadm is a dedicated tool just for managing RAID. LVM does a lot more, but it feels (and I admit this is pretty subjective) like the RAID functionality has sort of been shoved in there; it doesn’t quite fit.

How do you actually install a system with LVM RAID?

Here is a brief outline of getting it installed on Debian Jessie or Wheezy. Jessie is far easier; if you’re going to try this on Wheezy, read the whole thing first…

  1. Use a full CD image to install, not a netinst image.
  2. Proceed as normal, get to disk partitioning, set up your LVM physical volumes. You can put /boot on LVM-RAID (on Jessie, and on Wheezy with some work detailed below).
  3. Create your volume group(s), then leave the installer sitting in the LVM menu.
  4. First bit of fun: the installer doesn’t have the dm-raid.ko module loaded, or even available! So you get to grab it from the linux-image package that will be installed. Switch to a console (e.g., Alt+F2) and:
     cd /tmp
     dpkg-deb --fsys-tarfile /cdrom/pool/main/l/linux/linux-image-*.deb | tar x
     depmod -a -b /tmp
     modprobe -d /tmp dm-raid
  5. Create your logical volumes. For example, a RAID5 root LV:
     lvcreate --type raid5 -i 4 -I 256 -L 10G -n root vg0
  6. Swap can be mirrored too; for example, two RAID1 swap LVs, each pinned to a specific pair of disks:
     lvcreate --type raid1 -m1 -L 1G -n swap0 vg0 /dev/vda1 /dev/vdb1
     lvcreate --type raid1 -m1 -L 1G -n swap1 vg0 /dev/vdc1 /dev/vdd1
  7. Finish the installation, then (before rebooting) chroot into the target system to install GRUB and add dm_raid to the initramfs:
     chroot /target /bin/bash
     mount /sys
     dpkg -i grub-pc_*.deb grub-pc-bin_*.deb grub-common_*.deb grub2-common_*.deb
     grub-install /dev/vda
     …
     grub-install /dev/vdd    # for each disk
     echo 'dm_raid' >> /etc/initramfs-tools/modules
     update-initramfs -kall -u
     update-grub    # should work, technically not quite tested²
     umount /sys
     exit

Community Knowledge

There are a fair number of people who know mdadm and have plenty of deployment experience with it. Google is likely to answer most questions you have about it, and you can generally expect a question here to get answers, probably within a day.

The same can’t be said for LVM RAID. It’s hard to find guides; most Google searches I’ve run instead turn up material on using mdadm arrays as PVs. To be honest, this is probably largely because it’s newer and less commonly used, and it feels somewhat unfair to hold that against it. But if something goes wrong, the much larger existing community around mdadm makes recovering my data more likely.

Conclusion

LVM-RAID is advancing fairly rapidly. On Wheezy, it isn’t really usable (at least, not without backporting LVM and the kernel). Earlier in 2014, on Debian testing, it felt like an interesting but unfinished idea. Current testing, basically what will become Jessie, feels like something you might actually use if you frequently need to create small slices with different RAID configurations (something that is an administrative nightmare with mdadm).

If your needs are adequately served by a few large mdadm RAID arrays, sliced into partitions using LVM, I’d suggest continuing to use that. If you instead wind up having to create many arrays (or even arrays of logical volumes), consider switching to LVM-RAID. But keep good backups.
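For reference, the conventional setup the first sentence describes looks roughly like this (a sketch; device names, levels, and sizes are made up):

    # One big mdadm RAID, used as a single LVM physical volume
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 50G -n home vg0    # carve out plain (non-RAID) LVs on top
    lvcreate -L 20G -n srv  vg0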

A lot of the use cases for LVM RAID (and even mdadm RAID) are being taken over by things like clustered storage/object systems, ZFS, and btrfs. I recommend also investigating those; they may better meet your needs.

Thank yous

I’d like to thank psusi for getting me to revisit the state of LVM-RAID and update this post.

Footnotes

  1. I suspect you could use device mapper to glue the metadata and data together in such a way that mdadm --assemble will take it. Of course, you could just run mdadm on top of logical volumes, and that would be saner.
  2. When doing the Wheezy install, I failed to do this the first time and wound up with no GRUB config; I had to boot the system by entering all the info at the GRUB prompt. Once booted, that worked, so I think it will work just fine from the installer. If you wind up at the GRUB prompt, here are the magic lines to type:
    linux /boot/vmlinuz-3.2.0-4-amd64 root=/dev/mapper/vg0-root
    initrd /boot/initrd.img-3.2.0-4-amd64
    boot

PS: It’s been a while since I actually did the original experiments. I have made my original notes available. Note that I have now done more recent ones, covered in this answer, and not in those notes.


mdadm vs lvm

I currently run LVM on top of an mdadm mirror everywhere, and I’m thinking about migrating to RAID1 done with LVM itself. For those who have used both: how do they compare in terms of reliability?

    $ sudo lvs vg1/
      LV       VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      home     vg1  rwi-aor--- 20.00g                                  100.00
      owncloud vg1  rwi-aor--- 15.00g                                  100.00

I’d like to hear some success stories. How does it behave when a disk dies, on kernel panics, or when the server loses power unexpectedly?

No disk has died on me yet. I use RAID1 in LVM; there have been freezes and other problems, but the volume wrapped in LVM (a raw NTFS disk for a virtual machine) is alive and well.

If the current setup works for you, what’s the point of switching? Why risk regretting it later?

1. One less layer in the stack.
2. It saves space (not every volume needs to be mirrored).

Do the volumes resynchronize automatically when problems occur?

Spin up a VM and play with failing disks and other disasters. Do the same with mdraid, then tell us how it went.
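A sketch of that kind of experiment, assuming a test VM with throwaway disks and the hypothetical names /dev/md0, /dev/vdb1, and vg1/home:

    # mdraid side: fail a member, watch the array, then re-add it
    mdadm /dev/md0 --fail /dev/vdb1 --remove /dev/vdb1
    cat /proc/mdstat
    mdadm /dev/md0 --add /dev/vdb1

    # LVM-RAID side: check sync state and repair after swapping the disk back in
    lvs -a -o name,attr,sync_percent vg1
    lvconvert --repair vg1/home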

If a Linux software RAID falls apart, pulling the data out of the LVM on top of it will be practically impossible. I haven’t run into it myself, but a comrade of mine from tcs! russia got badly burned by it. That doesn’t apply to mirrors, obviously. I think you have to look at the specific case; the answers will differ. If you give the Linux RAID an entire block device rather than partitions on it, that layout is easier to recover from when one of the mirror legs dies; it is very similar to using a hardware RAID controller. On the other hand, if the disks carry no partitions at all, only LVM, then you can mirror only the volumes you actually need, which saves space. In short, consider your specific case.
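The "mirror only what you need" point from the last paragraph looks roughly like this with LVM-RAID (a sketch with made-up device names, LV names, and sizes):

    # Whole disks as PVs, no partition table or md layer underneath
    pvcreate /dev/vdb /dev/vdc
    vgcreate vg1 /dev/vdb /dev/vdc

    lvcreate --type raid1 -m 1 -L 20G -n home vg1    # mirrored: worth protecting
    lvcreate -L 100G -n scratch vg1                  # plain linear: expendable data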
