Diary of a geek


Andrew Pollock


Saturday, 15 October 2005

Fast clean chroot creation with LVM snapshots

Now that I've got a bit more disk space, I decided to fully script chroot creation with LVM snapshots.

This requires dchroot, LVM, and as many logical volumes as you want chroots for, with a logical volume naming scheme like this:

apollock@caesar:~$ sudo lvs | grep pristine
  stable-pristine   base -wi-a- 320.00M
  testing-pristine  base -wi-a- 320.00M
  unstable-pristine base -wi-a- 320.00M
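If you're starting from scratch, creating those volumes might look something like this (a sketch only; the 320M size and the "base" volume group name are taken from the lvs output above, and the DRYRUN flag is just for illustration):

```shell
#!/bin/sh
# Create the three "pristine" logical volumes in the "base" volume group.
# Set DRYRUN=1 to print the commands instead of running them.
run() { [ -n "$DRYRUN" ] && echo "$@" || "$@"; }
make_pristine_lvs() {
    for dist in stable testing unstable; do
        run lvcreate -L 320M -n "$dist"-pristine base
    done
}
```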

sudo makes life a bit easier as well.

Next, you need a directory structure like this:

apollock@caesar:~$ tree /chroots
/chroots
|-- pristine
|   |-- stable
|   |-- testing
|   `-- unstable
|-- stable
|-- testing
`-- unstable
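Creating that tree is a one-liner per distribution; a sketch (the root argument is just an illustrative convenience for staging somewhere other than /):

```shell
#!/bin/sh
# Create the /chroots directory tree shown above, under the given root.
make_chroot_dirs() {
    for dist in stable testing unstable; do
        mkdir -p "$1"/chroots/pristine/"$dist" "$1"/chroots/"$dist"
    done
}
```

Run as root with `make_chroot_dirs /` to create the tree under /chroots.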

Finally, you need some /etc/fstab entries to mount the chroots (and a /proc):

/dev/base/stable-pristine       /chroots/pristine/stable        ext3    defaults,noauto 0 0
stable-proc     /chroots/pristine/stable/proc   proc    defaults,noauto 0 0
/dev/base/testing-pristine      /chroots/pristine/testing       ext3    defaults,noauto 0 0
testing-proc    /chroots/pristine/testing/proc  proc    defaults,noauto 0 0
/dev/base/unstable-pristine     /chroots/pristine/unstable      ext3    defaults,noauto 0 0
unstable-proc   /chroots/pristine/unstable/proc proc    defaults,noauto 0 0

/dev/base/stable        /chroots/stable ext3    defaults,noauto 0 0
/dev/base/home          /chroots/stable/home    jfs     defaults,noauto 0 0
/dev/base/tmp           /chroots/stable/tmp     jfs     defaults,noauto 0 0
stable-proc     /chroots/stable/proc    proc    defaults,noauto 0 0
/dev/base/testing       /chroots/testing        ext3    defaults,noauto 0 0
/dev/base/home          /chroots/testing/home   ext3    defaults,noauto 0 0
/dev/base/tmp           /chroots/testing/tmp    ext3    defaults,noauto 0 0
testing-proc    /chroots/testing/proc   proc    defaults,noauto 0 0
/dev/base/unstable      /chroots/unstable       ext3    defaults,noauto 0 0
/dev/base/home          /chroots/unstable/home  ext3    defaults,noauto 0 0
/dev/base/tmp           /chroots/unstable/tmp   ext3    defaults,noauto 0 0
unstable-proc   /chroots/unstable/proc  proc    defaults,noauto 0 0

Note that you don't need to bother double-mounting /home and /tmp in the "pristine" chroots, because generally speaking, only root will be logging into them, for the purposes of installing packages or upgrading what's already installed.

So firstly, create the logical volumes that are going to hold the "pristine" chroots. Put your favourite filesystem on them, and mount them. Then use debootstrap to install a base installation. I found I had more success doing an installation of sarge into the stable chroot's logical volume, and then dd'ing that across to the testing and unstable logical volumes, and doing a dist-upgrade afterwards.
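As a sketch of that debootstrap-then-clone approach (the mirror URL is an assumption, the dd steps assume the pristine filesystems are unmounted, and DRYRUN=1 prints the commands instead of running them):

```shell
#!/bin/sh
run() { [ -n "$DRYRUN" ] && echo "$@" || "$@"; }
pristine_base() {
    # Install a sarge base system into the mounted stable pristine LV.
    run debootstrap sarge /chroots/pristine/stable http://ftp.debian.org/debian
    # Clone the populated LV across to the other pristine LVs; each one
    # then gets its sources.list pointed at testing/unstable and a
    # dist-upgrade run inside it.
    for dist in testing unstable; do
        run dd if=/dev/base/stable-pristine of=/dev/base/"$dist"-pristine bs=1M
    done
}
```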

Once you've got your base chroots installed, add entries to /etc/dchroot.conf for them, as well as the subsequent snapshot ones:

unstable /chroots/unstable
testing /chroots/testing
stable /chroots/stable

unstable-pristine /chroots/pristine/unstable
testing-pristine /chroots/pristine/testing
stable-pristine /chroots/pristine/stable

Then use dchroot (as root) to log into each "pristine" chroot in turn and install build-essential, fakeroot, and whatever else you want to have consistently installed in each instance of the chroot.

Once you're done with this, you can use the couple of scripts I've knocked up for easily creating an instance of one of these pristine chroots. You can then install whatever packages you like into these instances, build your packages, and then when you're finished, just throw away the logical volume. You can rinse and repeat this process as much as you like, and it's as quick as creating a snapshot logical volume, giving you a clean chroot to start with every time.
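A sketch of what such a pair of scripts might boil down to (the 100M snapshot size is a guess, the mountpoints assume the fstab entries above, and DRYRUN=1 prints the commands instead of running them):

```shell
#!/bin/sh
run() { [ -n "$DRYRUN" ] && echo "$@" || "$@"; }
new_chroot() {
    dist="$1"
    # Copy-on-write snapshot of the pristine volume; changes made in the
    # chroot consume snapshot space, the pristine LV stays untouched.
    run lvcreate -s -L 100M -n "$dist" /dev/base/"$dist"-pristine
    # Mount the snapshot and its extra filesystems via /etc/fstab.
    run mount /chroots/"$dist"
    run mount /chroots/"$dist"/home
    run mount /chroots/"$dist"/tmp
    run mount "$dist"-proc
}
destroy_chroot() {
    dist="$1"
    run umount "$dist"-proc /chroots/"$dist"/tmp /chroots/"$dist"/home /chroots/"$dist"
    # Throwing away the snapshot is all the cleanup required.
    run lvremove -f /dev/base/"$dist"
}
```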

[07:37] [debian] [permalink]

I love LVM.

There is sliced bread, and then there is LVM.

Today I had a pleasantly easy time migrating one of my servers from a 40Gb disk to a 120Gb disk, thanks to LVM.

Background

A few months ago, caesar, my general purpose box, blew up. Its motherboard was of the vintage that couldn't cope with a disk larger than 32Gb, so it had a 40Gb hard drive in it, jumpered to look like a 32Gb disk. Apparently I could probably have fudged the geometry of a larger disk in the BIOS so that it would boot, but I survived with a small disk, and wasn't keen on reinstalling at the time.

When I replaced caesar, it was capable of using a larger disk, but I just did a direct disk swap from the old caesar to the new one, and got on with my life. I subsequently replaced daedalus, my web server in Brisbane, which made two 120Gb disks available.

Recently, I started hitting the limits of the 32Gb disk, and the old daedalus (recycled into minotaur) was just sitting around with a 120Gb disk in it, not doing very much, so I decided to try and migrate from the 40Gb disk to the 120Gb disk.

Partition layout

The way I generally partition a disk is I have a 512Mb partition for my root filesystem, a swap partition as big as the swap allocation recipe I'm subscribing to at the time, and use the rest as a physical volume for LVM.

Copying the root filesystem

So I took the 40Gb disk out of caesar, put it into minotaur as the primary disk with the 120Gb disk as the slave, and booted into single-user mode. Next, I created a partition for the root filesystem on the 120Gb disk the same size as it was on the 40Gb disk. I ensured the root filesystem was mounted read-only, so everything would be consistent, and used dd to copy /dev/hda1 to /dev/hdb1. Next, I shut down and reconnected /dev/hdb (the 120Gb disk) as /dev/hda to make sure I could boot from it okay. I think because this disk had previously had minotaur's Linux installation on it, with GRUB, this worked fine. I'd probably have had to dick around with installing GRUB in the MBR otherwise.
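The copy itself boils down to two commands (a sketch; device names as in the text, DRYRUN=1 prints the commands instead of running them):

```shell
#!/bin/sh
run() { [ -n "$DRYRUN" ] && echo "$@" || "$@"; }
copy_root() {
    # Remount root read-only so the copy is consistent.
    run mount -o remount,ro /
    # Raw block copy of the root partition onto the new disk's first
    # partition, created beforehand with the same size.
    run dd if=/dev/hda1 of=/dev/hdb1 bs=1M
}
```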

Moving the logical volumes

Once I was satisfied that I could boot from the 120Gb disk, I swapped back again so I was booting from the 40Gb disk as /dev/hda, and again booted into single-user mode. I did a pvcreate on /dev/hdb3, added it to the volume group with vgextend, and then ran pvmove /dev/hda3 /dev/hdb3 and sat back and twiddled my thumbs for a while.

At about 72 percent, I got a kernel oops and the pvmove bailed out. I started to worry a bit at this point, and retried the pvmove without any arguments. According to the manpage, it's supposed to restart from the last checkpoint. In hindsight I should have rebooted straight away, as the kernel obviously now had its knickers in a knot. The pvmove didn't seem to progress, and I couldn't interrupt it, so I had to do a hard reset. As Debian's single-user mode tends to do a hell of a lot (including mounting all the filesystems), the mounting of one of my ReiserFS filesystems seemed to cause the kernel to oops as well. So I rebooted with the "emergency" argument instead, and manually ran just enough of the rcS.d scripts to get the logical volumes available, and reran the pvmove again. This time it completed successfully.

I then used vgreduce to remove /dev/hda3, which no longer had any extents allocated to it, from my volume group, and then did a pvremove on it for good measure. I disconnected the 40Gb disk, and booted with the 120Gb disk as the master, and all was good.
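Put together, the whole extent migration is only a handful of commands (a sketch; it assumes the LVM partition is partition 3 on both disks and the volume group is called "base", as elsewhere in this diary, and DRYRUN=1 prints the commands instead of running them):

```shell
#!/bin/sh
run() { [ -n "$DRYRUN" ] && echo "$@" || "$@"; }
migrate_pv() {
    # Turn the new disk's LVM partition into a physical volume and add
    # it to the existing volume group.
    run pvcreate /dev/hdb3
    run vgextend base /dev/hdb3
    # Move every allocated extent off the old disk; if interrupted, a
    # bare "pvmove" restarts from the last checkpoint.
    run pvmove /dev/hda3 /dev/hdb3
    # Drop the now-empty physical volume from the volume group.
    run vgreduce base /dev/hda3
    run pvremove /dev/hda3
}
```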

I put the 120Gb disk back into caesar, and if I hadn't had to pull it out of the "rack" to stick a head on it to discover that it wanted me to press F1 because the 40Gb disk had turned into a 120Gb disk, that part would have been interventionless.

So I was very pleased with how easy the whole process was. If I hadn't had those couple of kernel oopses, it would have been a piece of cake (and the oopses didn't really give me that much grief anyway, thanks to the checkpointing). So LVM would be great in an environment where SMART was accurately predicting the demise of a disk. You could ideally migrate all the data off a failing disk, probably without rebooting, and if the disks were hot pluggable, just remove it from the system without any downtime. Of course, it's no substitute for a good bit of RAID, but pretty cool nonetheless.

[03:08] [tech] [permalink]