HOWTO: Using Intel’s RST with Linux
I recently updated my main system to a new Intel Sandy Bridge based setup which included an Asus P8Z68-V motherboard, as you may have seen. One of the features of the Intel Z68 chipset on which this motherboard is based is Rapid Storage Technology (RST), which aims to use a solid state drive to cache the files your system uses frequently, to improve system performance. While many people choose to install their operating system on an SSD and use a cheaper mechanical drive for applications and data, RST seems a better system, as the most frequently used files, whatever they are, are available for fast access. Working on some code? Visual Studio and my project files are cached. Shifting to some 3D work? Cool, when it’s used, Blender and its files are put in the cache, all the while the parts of Windows that aren’t used remain available without taking up expensive, fast SSD space (has anyone actually looked at the help file for MS Paint?). The system’s not perfect: things are only cached the second (or later, depending on how the driver assesses the “frequently used” nature of files) time they are accessed. Also, as this is done (at least partially) in the software of the driver, there’s obviously an overhead on the running of the system. Overall, though, it struck me as a good idea and pushed me into purchasing an SSD (a 128GB Crucial M4) to see if they really did transform the performance of my system as much as I’d been led to believe.
My initial plan was to have two 1TB mechanical drives, with a Windows 7 installation on one of them, accelerated by RST; a Linux installation on the remaining space of the SSD; and a software RAID array across the remaining mechanical-drive space holding my /home partition for data and local applications.
Installing Windows and activating the RST was a reasonably straightforward process, with a couple of observations:
- Both the SSD and the accelerated drive must be on the SATA 3 ports of the motherboard (specifically the SATA 3 ports of the chipset; any additional ports added by a third-party chip on your motherboard won’t do).
- Despite Intel saying that you can accelerate “a drive or volume” with RST, I was unable to apply it to a RAID volume set up in the BIOS. This may be because the RAID setup required the use of a SATA 2 port, but it may simply mean that RST cannot be used on RAID volumes.
- RST allows you to use either 18GB or the maximum space (either 64GB or the full capacity of the drive, whichever is less) for caching; there are no fine-tuning options.
I now had a working Windows system, so in went the Linux disc, which, despite Windows seeing the spare space on the SSD as a blank drive, didn’t recognise any of the partitions. There then followed about a week of fighting various incompatibilities and problems. The following is a brief outline of the steps I went through to get a dual-boot system up and running. If you want one word of advice: “don’t!” Were I to do this again, I’d get two SSDs: one for caching with SRT and a separate drive to install Linux onto. However, if you’re already half way down this path, or keen to do it for some other reason, then feel free to use my steps as a starting point. If you’ve any improvements, please let me know – I don’t claim that this is the best solution, but it’s what I ended up doing to get up and running with a dual-boot system.
- Put Windows on a separate drive. I had a spare 250GB drive, which should be plenty for my requirements for the moment. I found that the conflicts between the mdadm style of RAID and the dmraid system were just too big to deal with.
- I started with a copy of Debian Stable with a backported 2.6.39 kernel from the Debian-Installer backports archive (this was necessary to use many features of the Sandy Bridge chipset; even the Ethernet adapter wouldn’t work with the stock kernel). (Note that this is a very useful archive for anyone keen to install Debian on more recent hardware than standard stable supports.)
- At this point I had a system that would detect the spare space on the SSD and install to it, but that failed to install GRUB 2. This caused me much frustration, and I was unable to progress for some time.
- I then managed to boot a live disc of the latest *buntu release (11.10) with a 3.0.x kernel; once booted, I apt-get installed dmraid and was then able to see my new Debian installation on the SSD. (When I had tried this trick to install using a *buntu disc, though, I was unable to make the SSD partition appear in the installer.)
- I then managed to get into my Debian installation by opening a terminal and running the following:
- dmraid -ay
- mkdir /myraid
- mount /dev/md0 /myraid
- mount --bind /dev /myraid/dev
- mount -t proc proc /myraid/proc
- mount -t sysfs sysfs /myraid/sys
- chroot /myraid
- apt-get install mdadm
- exit
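For reference, the terminal session above amounts to something like the following (a sketch of what I ran, not a polished script; the /dev/md0 device name and /myraid mount point are just what my system used, and yours may well differ):

```shell
#!/bin/sh
# Activate any fakeraid sets so the kernel can see the volumes
dmraid -ay

# Mount the Debian root filesystem (adjust /dev/md0 to your device)
mkdir -p /myraid
mount /dev/md0 /myraid

# Make the live system's device tree and virtual filesystems
# available inside the installed system
mount --bind /dev /myraid/dev
mount -t proc proc /myraid/proc
mount -t sysfs sysfs /myraid/sys

# Enter the installed system and pull in mdadm so it can assemble
# the array itself at boot
chroot /myraid apt-get install -y mdadm
```

All of this needs to be run as root from the live environment, of course.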
I was then left with a pair of bootable OSs on my system, a lost week of work and a lot of catching up to do. I’m sure I’m not the first person to go through this, I’m probably not running the neatest system as a result, but I have got up and running on something that appears to be undocumented. I repeat my advice that if you have a choice, find some other solution rather than going down this route, but if you have to, I hope that my experiences prove useful. Feel free to comment if you have improvements on my method or suggestions on how to improve this post.
I must however point out that my system is now lightning fast to boot. Both operating systems get to a login screen in about 10 seconds and are ready to work about 5 seconds after entering my password. Responsiveness is greatly improved when clicking the mouse to open a program or document. Not cheap, but one of the best ways to speed up your system for £100, I would say.
I should also say that my father has had a considerably slower experience. Despite having a faster processor (an i7, no less), the same SSD and a Gigabyte GA-Z68X-UD3H motherboard, his system not only pauses after the BIOS with a “Loading Operating System…” message for 20 seconds, but then takes around 45 seconds for Windows 7 to be loaded and ready to use.
11 Comments to “HOWTO: Using Intel’s RST with Linux”
Thanks! This is very helpful and informative.
Do you know if a setup of Windows and Linux each on their own HDD, with _both_ using RST with a single SSD, would work? Would you need two SSDs?

As far as I know, RST isn’t directly supported under Linux. As I believe that some of the processing is done in the Windows driver (much like many of the “fakeraid” systems), I doubt it will be easy to support under Linux (though no doubt possible if it becomes popular enough). I looked but didn’t find any similar file system caching built into Linux. Feel free to enlighten me if you know of something that does the job! That’s why I’m running with my Linux install simply installed onto the SSD.
I’m looking at a Gigabyte Z68 motherboard as well… I don’t think I’ll be making use of this nice caching just yet, since it seems way too complex to get setup in a non-Windows environment. I wanted to ask though – are the pause issues still there?
Last time I saw my father’s setup the pause was a lot less pronounced. I suspect it’s simply the BIOS looking for more boot devices; it may be speedier if you disable unused boot media in the BIOS.
Hi,
I was frustrated with this setup too, until I discovered another way (like you, just a way that works).
- First, install Windows and activate SRT.
- After installation and activation, create, in Windows, the partitions needed to install Linux on the SSD.
- Now boot your Linux installation device and do not run dmraid. You will see the created partitions on /dev/sda and /dev/sdb; select mount points, format them, and install.
- Installing grub on the SSD disables the cache, but after booting Windows and re-activating it, everything works fine.
That’s all. I use a 500GB SATA 2 disk and a SATA 3 SSD. Windows is installed on a partition on the HD; that’s the nice feature of SRT: you can use a mechanical HD almost as if it were an SSD. I installed W7 with GPT on EFI; I have not tested with MBR. In order to boot W7, new disks added to the system must not contain an MBR (I don’t know why).

If you try to install Linux (without dmraid) and create partitions on the SSD, the cache is destroyed. If you use dmraid to recognise the fake RAID volumes, you cannot access the HD partitions, because they are protected from being used outside the RAID volume. You can only use the uncached volume on the SSD.

When you create partitions in Windows and go to Linux, you will be warned about GPT errors. Do not worry, it works.

I discovered this because I wanted to do a mixed installation of Linux on both disks and needed a common data partition, NTFS formatted, on the SSD. I think the RAID cache works on the full hard disk (all Windows partitions on the cached disk).
I hope it could be helpful.
Best regards.
PS: Sorry if my English is not good enough. Feel free to correct it for better understanding.

Antonio, I would like to make sure I understood what you achieved. You have one HD and one SSD; you installed W7 on the HD and use SRT on the SSD, and on the remainder of the SSD (its capacity minus 64GB or 18GB) you installed Linux. Is this correct? Do you know if the HD can be partitioned so that /home can be placed on the HD next to W7? Thank you,
Sorry for the delay.
Yes, you can partition your disk as you want.
If you do not activate fake RAID (on Linux) you can access any HD partition: the Linux partitions are safe as they are not cached (not accessed from Windows).

Sounds good, Antonio! Glad you got there; it sounds like you had a lot less trouble than me! Can I ask what version of Linux you’re using and what kernel that has, please? It sounds like the drivers have come on a bit.
Really. I had many problems (at first) trying to use RAID, with several distros: my final solution was to disable SRT.
In any case, this solution has worked with Mageia 2: kernel 3.3.6.

I have a Dell Inspiron 15SE with Intel Smart Response Technology… dmraid -ay fails and says that the RAID was not activated. Bad luck…
I can add some observations regarding Ubuntu (13.10) install on SRT enabled SSD.
1) The Ubuntu live CD (or the alternate one) does not see the free part of such a solid state drive as a possible place to install to. To overcome that, one has to run “sudo modprobe dm_mod” and “sudo dmraid -ay”, and then, miraculously, the drive and partitions become detected and accessible. If you get “drive was not activated”, reboot and try once again.
2) It is strongly recommended to use gparted to partition the drive before installing Linux. If one allows the installer to do so, the most probable result is a corrupted partition table and a mysterious error during install (a window containing only “. ”).
3) Do not install grub during the Ubuntu install. If running from the live CD, use “ubiquity --no-bootloader” to start the install process. In particular, do not try to install grub to a partition’s boot loader area (rather than the disk’s MBR): it will be installed to the MBR instead (without warning!) and destroy whatever was placed there before (the bootloader of another OS, for instance).
4) To boot Ubuntu for the first time, use a Super Grub2 Disk. Even RAID support should not be necessary; SuperGrub will find the system nevertheless.
5) Install grub from within the booted Linux. Do not try to install it from outside (a chroot from the live CD). During the procedure, grub will offer places to install the loader to; pick your system drive (most probably sda, or one of its partitions, sda1 etc.). Yes, grub should see the SSD directly as sdX, not through the device mapper. This is the key point: that way one avoids problems with dmraid not detecting the array in the pre-boot phase, and there is no need to edit the initramfs to add “dmraid -ay”.
6) And then install dmraid to see the other RAID volumes, if any.
That works on Ubuntu. Even installing grub to a partition’s boot area (not the MBR) works. Leave some free room before the first partition on the SSD if grub reports problems which require the --force argument to solve.
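Putting steps 5 and 6 together, the post-boot fix-up might look like this (a sketch only; /dev/sda is an assumption here, so substitute whichever sdX device grub reports for your SSD):

```shell
# Run from within the freshly booted Ubuntu system, not a chroot.
# Reinstall GRUB to the SSD's MBR, addressing the disk directly as
# /dev/sda rather than through the device mapper.
sudo grub-install /dev/sda
sudo update-grub

# Only now pull in dmraid, so any remaining fakeraid volumes
# become visible.
sudo apt-get install dmraid
sudo dmraid -ay
```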