Wednesday, November 3, 2010

create image with XFS root filesystem from UEC Images

A post was made to the ec2ubuntu google group asking if Ubuntu had any plans to create images with XFS root filesystems.

The Official Ubuntu Images for 10.04 LTS (lucid) and prior have an ext3 root filesystem. For Ubuntu 10.10 (maverick) and the development builds of 11.04, the filesystem is ext4.

Those filesystems were chosen because they are the defaults on an Ubuntu install from CD or DVD, and that selection is carried over to our Ubuntu images for UEC and EC2. The 10.04 images really should have been ext4, but the change didn't make it in for that release.

Ubuntu fully supports the XFS filesystem; it simply wasn't chosen as the default. The -virtual kernel has XFS support available as a module, and the xfsprogs package is in the main archive.

So, just as you can get full support for the Ubuntu images using ext4, you can get full support from Ubuntu (and paid support from Canonical) by using xfs as your root filesystem. You will simply have to create your own images.

Luckily, because the Ubuntu images are freely downloadable, the process for creating an XFS-based EBS image is trivial.

In the code (shell prompt) snippets below, a '$' prompt indicates a command run on my laptop, and a '%' prompt indicates a command run on the EC2 instance. Lines beginning with a '#' are comments.

Launch an instance to work with:

# us-east-1 ami-688c7801 canonical ubuntu-maverick-10.10-amd64-server-20101007.1
$ ec2-run-instances --region us-east-1 --instance-type m1.large \
--key mykey ami-688c7801
$ iid=i-bcc679d1
$ zone=$(ec2-describe-instances $iid |
awk '-F\t' '$2 == iid { print $12 }' iid=${iid} )
$ echo ${zone}
$ host=$(ec2-describe-instances $iid |
awk '-F\t' '$2 == iid { print $4 }' iid=${iid} )
$ echo ${host}

Create a volume of the desired size in the correct zone and attach it to the instance.

$ ec2-create-volume --size 10 --availability-zone ${zone}
$ vol=vol-c64d55af
$ ec2-attach-volume --instance ${iid} --device /dev/sdh ${vol}
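The device can take a few seconds to appear inside the instance after ec2-attach-volume returns. A small helper like the following (a sketch of my own; the function name and default timeout are not from the original post) can poll for it on the instance before proceeding:

```shell
# Hypothetical helper: poll until a block device node appears, with a timeout.
# Usage: wait_for_device /dev/sdh [max_tries]
wait_for_device() {
  local dev=$1 tries=${2:-60}
  while [ ! -e "$dev" ] && [ "$tries" -gt 0 ]; do
    sleep 1
    tries=$((tries - 1))
  done
  # exit status 0 if the device showed up, non-zero otherwise
  [ -e "$dev" ]
}
```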

Then, ssh to ubuntu@${host}, download the UEC reference image, extract it, and install the necessary packages:

% sudo chown ubuntu:ubuntu /mnt
% cd /mnt
% url=
% tarball=${url##*/}
% wget ${url} -O ${tarball}
% tar -Sxvzf ${tarball}
% img=maverick-server-uec-i386.img
% mkdir src target
% sudo apt-get install xfsprogs
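As an aside, the ${url##*/} expansion used above strips everything up to and including the last '/' to get the tarball's basename. A quick illustration with a made-up URL (the real URL is elided above):

```shell
# Hypothetical URL, just to demonstrate the shell parameter expansion.
url="http://example.com/maverick/maverick-server-uec-i386.tar.gz"
tarball=${url##*/}   # removes the longest prefix matching '*/'
echo "${tarball}"    # maverick-server-uec-i386.tar.gz
```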

Create the target filesystem, mount the attached volume, and copy the source filesystem contents to the target filesystem using rsync.

% sudo mount -o loop,ro ${img} /mnt/src
% sudo mkfs.xfs -L uec-rootfs /dev/sdh
% sudo mount /dev/sdh /mnt/target
% sudo rsync -aXHAS /mnt/src/ /mnt/target
% sudo umount /mnt/target
% sudo umount /mnt/src

Above, you could have mounted /proc and /sys into /mnt/target, chrooted into it and done a dist-upgrade. I left that out for simplicity.
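For completeness, here is a sketch of what that chroot step might look like (the function name is my own, and it assumes the target is still mounted at /mnt/target; nothing runs until the function is called):

```shell
# Hedged sketch of the chroot-and-upgrade step left out above.
chroot_upgrade() {
  local target=${1:-/mnt/target}
  sudo mount -o bind /proc "${target}/proc"
  sudo mount -o bind /sys "${target}/sys"
  sudo chroot "${target}" apt-get update
  sudo chroot "${target}" apt-get -y dist-upgrade
  sudo umount "${target}/sys" "${target}/proc"
}
```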

Now, back on the laptop, snapshot the volume.

$ ec2-create-snapshot ${vol}
$ snap=snap-b97dfdd3
# now you have to wait for snapshot to be 'completed'
$ ec2-describe-snapshots ${snap}
SNAPSHOT snap-b97dfdd3 vol-c64d55af completed 2010-11-03T17:31:52+0000 100% 950047163771 10
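If you want to script that wait, the status is the fourth tab-separated field of the SNAPSHOT line. The snippet below (my own sketch) parses the sample output shown above, and a commented loop shows how you might poll:

```shell
# Build the sample describe-snapshots output line shown above (tab-separated).
line=$(printf 'SNAPSHOT\tsnap-b97dfdd3\tvol-c64d55af\tcompleted\t2010-11-03T17:31:52+0000\t100%%\t950047163771\t10')
status=$(printf '%s\n' "$line" | awk '-F\t' '{ print $4 }')
echo "${status}"   # completed
# A polling loop would then look something like:
#   while [ "$(ec2-describe-snapshots ${snap} | awk '-F\t' '{print $4}')" != "completed" ]; do
#     sleep 10
#   done
```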

Turn the contents of that volume into an AMI. Note, you must set 'arch', 'rel', and 'region' correctly. Then, we use that information to get the aki associated with the most recent released Ubuntu image.

$ rel=maverick; region=us-east-1; arch=i386; # arch=amd64
$ [ $arch = amd64 ] && xarch=x86_64 || xarch=${arch}
$ [ $arch = amd64 ] && blkdev=/dev/sdb || blkdev=/dev/sda2
$ qurl=${rel}/server/released.current.txt
$ aki=$(curl --silent "${qurl}" |
awk '-F\t' '$5 == "ebs" && $6 == arch && $7 == region { print $9 }' \
arch=$arch region=$region )
$ echo ${aki}
$ ec2-register --snapshot ${snap} \
--architecture=${xarch} --kernel=${aki} \
--block-device-mapping ${blkdev}=ephemeral0 \
--name "my-${rel}-xfs-root" --description "my-${rel}-xfs-description"
IMAGE ami-4483742d
$ ami=ami-4483742d

Clean up your instance and volume

$ ec2-detach-volume ${vol}
$ ec2-terminate-instances ${iid}
$ ec2-delete-volume ${vol}

And now run your instance

$ ec2-run-instances --instance-type t1.micro ${ami}

Then ssh to your instance and verify that the root filesystem is in fact XFS:

% grep uec-rootfs /proc/mounts
/dev/disk/by-label/uec-rootfs / xfs rw,relatime,attr2,nobarrier,noquota 0 0

Now, your newly created image has filesystem contents that are identical to those of the official Ubuntu images.

Some notes on the above:
  • Many people believe that a transition to btrfs as the default filesystem is inevitable, possibly even for the 12.04 LTS release. Doing this on EC2 would require that Amazon release btrfs support in a pv-grub kernel.
  • Outside of creating an 'xfs' filesystem, the steps above are very generic "create a custom EBS root image" instructions. In fact, the process outlined above is used for the actual publishing of EBS images via the ec2-publishing-scripts (see ec2-image2ebs).
  • The process above will work with the maverick-based images. Lucid images are not likely to work out of the box because they do not boot with a ramdisk. Where maverick images use pv-grub to load the kernel and ramdisk from inside the image, lucid kernels are loaded by Xen directly, and Canonical did not publish ramdisks for the lucid release.


  1. Scott, please accept a huge thanks for this. It worked great for me. - Shiraz

  2. I'm trying to follow this procedure using Oneiric on a micro instance. Everything goes well until I try to launch the new XFS instance. I get this error:

    /dev/disk/by-label/cloudimg-rootfs does not exist. Dropping to a shell!

    So I tried adding a label "cloudimg-rootfs" to my XFS volume but it complained that it was too many characters (12 max).

    Is there any way to change the label that is used on boot by the AMI? Maybe a user data option or something?

  3. I should specify that I'm using ami-a7f539ce as my initial Oneiric AMI.

  4. @Peter: try

    @Scott: Thanks a lot!

  5. Hmmm... from your link, it would seem
    "Ubuntu's Maverick and Natty AMIs for EC2 have already been fixed, the label there is uec-rootfs."

    but for some reason, oneiric has gone back to cloudimg-rootfs. Was there a reason for this?

  6. Hi,
    I'm sorry that the change to 'cloudimg-rootfs' as the root filesystem label caused problems. It was part of a removal of the "uec" brand. I wish it had gone more smoothly.
    As rforge mentioned above in his blog entry, you'll have to shorten the label for xfs, and update the places inside the filesystem where that string exists.
    The places I'm aware of are:
    * /boot/grub/menu.lst
    * /boot/grub/grub.cfg
    * /etc/fstab
    For EC2, I think that only menu.lst is required, but it makes sense to correct it in the other places also.
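    To make the label issue concrete: XFS labels are limited to 12 characters, and 'cloudimg-rootfs' is 15. Below is a sketch of truncating the label and updating a copy of an fstab line (the truncated label and temp file are just examples, not what the images actually use):

    ```shell
    label="cloudimg-rootfs"
    echo ${#label}                     # 15: too long for xfs (12 max)
    short=$(printf '%.12s' "$label")   # truncate to 12 chars
    echo "${short}"                    # cloudimg-roo
    # On the real filesystem you would set the label with:
    #   sudo xfs_admin -L "${short}" /dev/sdh
    # and then update every place the old label is referenced, e.g.:
    fstab=$(mktemp)
    printf 'LABEL=cloudimg-rootfs\t/\txfs\tdefaults\t0 0\n' > "$fstab"
    sed -i "s/cloudimg-rootfs/${short}/" "$fstab"
    grep "LABEL=${short}" "$fstab"
    rm -f "$fstab"
    ```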