Software RAID under Linux (Ubuntu 6.06 Dapper Drake)



First, I should disclaim that this is going to be a brain dump of the resources and experience from a quick setup of Ubuntu 6.06 on a software RAID based storage system. This may be less than ideal, but it is doable and seems relatively solid as a system. First off, what is RAID? Redundant Array of Independent (or Inexpensive) Disks: several cheap hard drives put together in an "interesting" way. Now, increasing storage size isn't something I'm too interested in; after all, there are myriad other ways of expanding storage in a Linux system (not to mention huge drives getting cheaper by the day.) My goal here is redundancy. I want to be able to lose a drive and still have the data, so RAID1 (mirroring) is what I'm setting up. We won't get sidetracked into the other types.


But isn't software RAID a lousy way to do it? Typically people sneer at the idea of software RAID because it makes the CPU do SOMETHING ELSE. Yes, there are hardware RAID controllers (and there are pseudo-hardware controllers too that still require *gasp* software...) But for this purpose, I think software RAID should do fine. This system won't be a desktop doing any kind of heavy lifting where we would hope for responsiveness (video editing, 3D work, etc.) It will be sitting in a corner collecting dust while serving as a repository for backups and perhaps for some virtual machine images for "in a pinch" scenarios.

OK, so I put a 250GB drive as master on the primary IDE chain and another 250GB drive as master on the secondary IDE chain. Why not SATA? (The system doesn't have SATA; it's a recycled system. Yes, I could have ordered it, but I COULD have ordered a card with hardware RAID too and dual SCSI drives.) Anyway, I had to put the CD-ROM on the secondary chain as a slave (to get the install done.) For this you'll need the Ubuntu 6.06 alternate install CD.

There are a couple of good how-tos that are a bit outdated but still relevant. One is text only, dealing with Ubuntu 5.10 and Debian 3.1, but a combination of that page and this one that they refer to should help (if for no other reason than the screenshots on the second page linked.) I settled on 100MB partitions to house the /boot directory, 6GB for what would be swap, ~40GB for what would become /, and 100GB each for /home and /var... probably too generous in some areas, and I couldn't decide whether /home or /var should have more (and I could have shuffled things around differently.) The MAIN point of this, though, is that BOTH drives need to be partitioned the same before going through the "software RAID" setup, which lets you create RAID arrays from the partitions you've made. So, I finished the process with md0, md1, md2, md3 and md4. (Again, this may not be ideal... but it seems to work well.)
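For reference, if you were doing this by hand instead of through the installer, matching up the two drives and checking the resulting arrays looks roughly like this. This is only a sketch; the device names (/dev/hda and /dev/hdc for the two IDE masters) are assumptions based on my layout, so double-check yours before running anything:

    # Duplicate the partition table from the first drive onto the second
    # so both drives have identical partitions for the RAID pairs
    sfdisk -d /dev/hda | sfdisk /dev/hdc

    # Once the arrays exist, the kernel's view of md0 through md4 lives here
    cat /proc/mdstat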

In fact, after the system got up and running it was more responsive than I expected (though the install seemed longer than I had thought it would take.) Before long I was installing the FreeNX server so I could administer it from the desktop. One issue I did run into was continuous errors in /var/log/messages regarding /dev/hdd (our good old CD-ROM drive.) I removed it from the system and things behaved wonderfully. (The errors were accompanied by sluggishness, which is one problem with having a device share an IDE chain with one of the RAID drives.) So, at this point, I'm wondering about 1) an add-on IDE card or 2) an external (USB) optical drive for moving certain things off to disc.
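If you run into the same thing, a quick (purely illustrative) way to see whether the optical drive is the one throwing the errors before you go pulling hardware:

    # Look for recent kernel complaints about /dev/hdd in the logs
    grep hdd /var/log/messages | tail -20
    dmesg | grep -i hdd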

LinuxDevCenter has a good article as an introduction to mdadm, which is the tool for administering software RAID arrays. There's another article along the same lines on configuring RAID from the command line (post-install.)
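A taste of the basics those articles cover, sketched with device names from my setup (your md numbers and partitions will differ):

    # Show the state, members and sync status of an array
    mdadm --detail /dev/md2

    # Create a RAID1 array from two matching partitions (the post-install route)
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3

    # Record the arrays so they get assembled on boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf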

And there is a good writeup on SATA RAID here.

One thing to keep in mind is that RAID1 will NOT save you from a catastrophic failure (a meteor deciding to land on your system/fire/stampede of wild elephants/etc.), but it will protect against a single drive failure. (mdadm also has support for keeping a spare drive in the array to sync to in case one drive has a problem.)
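Adding that spare is a one-liner, something like this, assuming a third drive or partition is available (/dev/hde3 here is just an example name):

    # Attach a hot spare to an existing RAID1 array; mdadm will sync onto it
    # automatically if one of the active members fails
    mdadm /dev/md2 --add /dev/hde3

    # Or include the spare when creating the array in the first place
    mdadm --create /dev/md2 --level=1 --raid-devices=2 --spare-devices=1 \
        /dev/hda3 /dev/hdc3 /dev/hde3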

The next thing I hope to do is test how the system responds to various changes (disk pull/swap/etc.), though I think that may take place inside a VM. Ubuntu's (relatively easy) implementation of software RAID seems like it might be a good way to get a bit better drive reliability even for home users. Yes, probably power users for now, but it's still more approachable than it's ever been for the home user.
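When I get around to that testing, the software half of a "disk pull" can be simulated with mdadm itself, roughly like this (device names are again just examples from my setup):

    # Mark one member failed, remove it, then add it (or a replacement) back
    mdadm /dev/md2 --fail /dev/hdc3
    mdadm /dev/md2 --remove /dev/hdc3
    mdadm /dev/md2 --add /dev/hdc3

    # Watch the mirror rebuild
    cat /proc/mdstat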
