Tuesday, December 14, 2010

Ubuntu Natty Narwhal Cluster Compute Instances

Some time ago, Amazon announced two new instance types aimed at high performance computing. The new types differ from Amazon's previous offerings in that

  • They use Xen "hvm" (Hardware Virtualization Mode) rather than paravirtualization.
  • Only privileged accounts can create images with the "hvm" virtualization type.

The result is that there are very few public images for cluster compute nodes, and up until today, there were no Ubuntu images.

I'm happy to announce that you can now run Official Ubuntu images on cluster compute instance types. From today forward we will be publishing daily builds of Natty Narwhal cluster compute images.

These images are identical to the other Ubuntu images. For AMI ids, you can browse the list at http://uec-images.ubuntu.com/server/natty/current/, or use the more machine-friendly data at http://uec-images.ubuntu.com/query.

There is one known bug (bug 690286) that prevents you from using ephemeral storage on the CC nodes.

If you've got a couple dollars burning a hole in your pocket, you can try one out with:

qurl="http://uec-images.ubuntu.com/query/"
ami_id=$(curl --silent "${qurl}/natty/server/daily.current.txt" |
    awk '-F\t' '$11 == "hvm" && $7 == "us-east-1" { print $8 }')
ec2-run-instances --key mykey --instance-type cc1.4xlarge "${ami_id}"
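
Cluster compute time adds up quickly, so terminate the instance when you're done playing. The instance id below is just a placeholder; use the id that ec2-run-instances printed:

# Terminate the instance so it stops accruing charges (i-1234abcd is a placeholder)
ec2-terminate-instances i-1234abcd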

Tuesday, December 7, 2010

lvm resizing is easy

I have a local mirror of the Ubuntu archive, using some scripts based on the Ubuntu Wiki. When I set up "/archive" on my local mirror, I used lvm, primarily so that I could use sbuild with lvm.

Since then, two things have happened:
  • sbuild has gained the ability to use aufs rather than LVM snapshots. That solution is much lighter weight, and doesn't require LVM space sitting around waiting to be used (see the sketch just after this list).
  • The Ubuntu archive has grown from ~250G to ~400G.
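
For reference, an aufs-backed sbuild chroot is just a plain directory chroot with a union overlay declared in its schroot configuration. A minimal sketch of setting one up, assuming a natty amd64 chroot already unpacked under /srv/chroots (the names and paths here are illustrative, not my actual setup):

# Write an schroot entry that overlays aufs on top of a directory chroot
# (illustrative paths; adjust to wherever your chroot actually lives)
cat <<'EOF' | sudo tee /etc/schroot/chroot.d/natty-amd64-sbuild
[natty-amd64-sbuild]
description=Natty amd64 sbuild chroot
type=directory
directory=/srv/chroots/natty-amd64
union-type=aufs
groups=sbuild
root-groups=sbuild
EOF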

So, it was time for me to grow the filesystem holding my archive to accommodate the growth. For my own record, and possibly for others, I thought I'd share what I did.


$ sudo pvscan
PV /dev/sdb VG smoser-vol1 lvm2 [931.51 GiB / 315.57 GiB free]
PV /dev/sda1 VG nelson lvm2 [148.77 GiB / 44.00 MiB free]
$ sudo lvscan
ACTIVE '/dev/smoser-vol1/smlv0' [585.94 GiB] inherit
ACTIVE '/dev/smoser-vol1/hardy_chroot-i386' [5.00 GiB] inherit
ACTIVE '/dev/smoser-vol1/lucid_chroot-i386' [5.00 GiB] inherit
ACTIVE '/dev/smoser-vol1/karmic_chroot-i386' [5.00 GiB] inherit
ACTIVE '/dev/smoser-vol1/karmic_chroot-amd64' [5.00 GiB] inherit
ACTIVE '/dev/smoser-vol1/lucid_chroot-amd64' [5.00 GiB] inherit
ACTIVE '/dev/smoser-vol1/hardy_chroot-amd64' [5.00 GiB] inherit
ACTIVE '/dev/nelson/root' [142.65 GiB] inherit
ACTIVE '/dev/nelson/swap_1' [6.07 GiB] inherit


I had two physical volumes, sdb and sda1. 'sdb' had my old sbuild snapshots on it, and also some free space. So, I deleted the sbuild snapshots with:


$ sudo lvremove /dev/smoser-vol1/hardy_chroot-i386 \
/dev/smoser-vol1/lucid_chroot-i386 /dev/smoser-vol1/karmic_chroot-i386 \
/dev/smoser-vol1/karmic_chroot-amd64 /dev/smoser-vol1/lucid_chroot-amd64 \
/dev/smoser-vol1/hardy_chroot-amd64
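
The lvremove returns those extents to the volume group's free pool. You can check how much free space is now available before resizing with something like:

# Show volume group totals, including free space
sudo vgs smoser-vol1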


Then I resized the 'smlv0' volume that has '/archive' on it to the largest size I could fit on that physical volume:


$ sudo vgdisplay smoser-vol1
VG Name smoser-vol1
System ID
Format lvm2
<snip>
VG Size 931.51 GiB
...
$ sudo lvresize /dev/smoser-vol1/smlv0 --size 931.51G
Rounding up size to full physical extent 931.51 GiB
Extending logical volume smlv0 to 931.51 GiB
Logical volume smlv0 successfully resized
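
In hindsight, rather than reading the size out of vgdisplay and typing it back in, lvresize can be told to consume all of the volume group's remaining free space directly. Not what I ran above, but it should end up in the same place:

# Grow the LV by all remaining free extents in its volume group
sudo lvresize -l +100%FREE /dev/smoser-vol1/smlv0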


Then, just resize the ext4 filesystem on that volume:

$ grep archive /proc/mounts
/dev/mapper/smoser--vol1-smlv0 /archive ext4 rw,relatime,barrier=1,data=ordered 0 0
$ sudo resize2fs /dev/mapper/smoser--vol1-smlv0
resize2fs 1.41.11 (14-Mar-2010)
Filesystem at /dev/mapper/smoser--vol1-smlv0 is mounted on /archive; on-line resizing required
old desc_blocks = 37, new_desc_blocks = 59
Performing an on-line resize of /dev/mapper/smoser--vol1-smlv0 to 244190208 (4k) blocks.

The filesystem on /dev/mapper/smoser--vol1-smlv0 is now 244190208 blocks long.


That last operation took probably 30 minutes, but in the end, I now have:


$ df -h /archive/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/smoser--vol1-smlv0
917G 544G 327G 63% /archive