Wednesday, August 14, 2013

lxc with fast cloning via overlayfs and userdata via clone hooks

Serge Hallyn and Stéphane Graber made some really nice improvements to LXC in the last few months.  These include:
  • user namespaces, which will bring us secure containers in 14.04 and the ability to safely run containers without root. 
  • a library with bindings for C, go and python3.
  • cloning with overlayfs
  • hooks executed at clone time.
I had previously worked with Ben Howard on the 'ubuntu cloud' template, and I just finished some updates to it that take advantage of overlayfs and clone hooks to provide a great environment to use or test cloud-init.

Previously the ubuntu cloud template (which downloads a cloud image to create a container) allowed the user to specify userdata or public keys at container creation time.  The change was really just to move the container customization code to a clone hook.
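For the curious, the wiring is simple: the container's config points at a hook script, which lxc runs at clone time, and anything passed after '--' on the lxc-clone command line is handed to that hook.  A minimal sketch of what that entry might look like (the key name and exact wiring here are my assumption, so check your own container's config):

## sketch only: a clone hook entry in the container's config
## (key name assumed; the hook path is the ubuntu-cloud-prep script used below)
lxc.hook.clone = /usr/share/lxc/hooks/ubuntu-cloud-prep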

Thanks to the daily build ppa, you can do this on any release from 12.04 to 13.10.

Hopefully the example below explains this better.  The times reported are from my Thinkpad X120e, which has a netbook-class CPU and a slow disk.  Times will clearly vary, and these are not meant to be scientific results.
If you do not see the embedded file below, please read the remainder of this post in the gist on github.
###
### Fast cloning with overlayfs, and specifying user-data on clone
### blog post:
### http://ubuntu-smoser.blogspot.com/2013/08/lxc-with-fast-cloning-via-overlayfs-and.html
###
### Eventually, this should make it into 13.10 and the stable lxc ppa
### https://launchpad.net/~ubuntu-lxc/+archive/stable
### But right now you'll have to use the daily ppa.
$ sudo apt-add-repository -y ppa:ubuntu-lxc/daily
$ sudo apt-get update --quiet
$ sudo apt-get install --assume-yes --quiet lxc
### Now, create a pristine 'source' container which we'll clone repeatedly
### The create will download a tarball of the root filesystem from
### http://cloud-images.ubuntu.com and extract it, so this will take some
### time. Subsequent creates will use the cached download.
$ sudo lxc-create -n source-precise-amd64 -t ubuntu-cloud -- \
--release=precise --arch=amd64
### Compare cloning and deleting a container with and without overlayfs
###
$ TIMEFORMAT=$'real: %3R user: %3U system: %3S'
### First the old method.
$ time sudo lxc-clone -o source-precise-amd64 -n test1
Created container test1 as copy of source-precise-amd64
real: 29.842 user: 17.392 system: 21.616
$ time sudo lxc-destroy -n test1
real: 2.766 user: 0.184 system: 2.528
### Now using an overlayfs snapshot
$ time sudo lxc-clone --snapshot -B overlayfs -o source-precise-amd64 -n test1
Created container test1 as snapshot of source-precise-amd64
real: 0.143 user: 0.024 system: 0.044
$ time sudo lxc-destroy -n test1
real: 0.044 user: 0.008 system: 0.028
###
### It's clear that the clone and destroy were more than a little bit faster.
### 29 seconds to 0.14 seconds, and 2.8 to 0.04 seconds respectively.
###
### Now, let's look at the performance of booting a system, and demonstrate
### passing user-data.
###
### You can see the options you can pass (after '--') when cloning an
### ubuntu-cloud container with '/usr/share/lxc/hooks/ubuntu-cloud-prep --help'.
### The most useful are probably '--userdata' and '--auth-key'
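### For example (a sketch, not run here), to drop an ssh public key into
### the clone instead of user-data, something like this should work:
###   sudo lxc-clone -o source-precise-amd64 -n key1 -- --auth-key ~/.ssh/id_rsa.pub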
### Use a user-data script that just powers off, to time boot-to-shutdown.
$ printf "%s\n%s\n" '#!/bin/sh' '/sbin/poweroff' > my-userdata
### clone then start without overlayfs
$ sudo lxc-clone -o source-precise-amd64 -n p1 \
-- --userdata=my-userdata
$ time sudo lxc-start -n p1
<4>init: hwclock main process (6) terminated with status 77
real: 14.137 user: 10.804 system: 1.468
### clone then start with overlayfs
$ sudo lxc-clone -o source-precise-amd64 --snapshot -B overlayfs -n p2 \
-- --userdata=my-userdata
$ time sudo lxc-start -n p2
<4>init: hwclock main process (6) terminated with status 77
...
* Will now halt
real: 12.489 user: 10.944 system: 1.672
### So, we see above that overlayfs start/stop was a bit faster.
### I think those differences are within the realm of noise, but
### they do demonstrate that, at least for this test, there was not
### a significant cost to using overlayfs.
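###
### (not part of the timed runs above) When finished, the clones and the
### source container can be removed the same way:
###   sudo lxc-destroy -n p1
###   sudo lxc-destroy -n p2
###   sudo lxc-destroy -n source-precise-amd64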

Tuesday, July 23, 2013

Using Ubuntu cloud images on VirtualBox

A few months ago, I wrote an article on "Using Ubuntu cloud-images without a cloud".  That article showed how to do this with kvm.  VirtualBox is another virtualization technology that is especially popular on Mac and Windows.  So, I figured I'd give the same basic article a try, but using VirtualBox rather than kvm.

I used 'vboxmanage' and 'vboxheadless' here, but I'm sure the same can be accomplished via the VirtualBox GUI, and probably with similar commands on Mac OS X or Windows.

So, below is roughly the same thing as the kvm post, but with VirtualBox.  To verify that it works, I did this on an Ubuntu 12.04 host with an Ubuntu 12.04 guest, but later versions should also work.

## Install necessary packages
$ sudo apt-get install virtualbox-ose qemu-utils genisoimage cloud-utils
## get kvm unloaded so virtualbox can load
$ sudo modprobe -r kvm_amd kvm_intel
$ sudo service virtualbox stop
$ sudo service virtualbox start
## URL to most recent cloud image of 12.04
$ img_url="http://cloud-images.ubuntu.com/server/releases/12.04/release"
$ img_url="${img_url}/ubuntu-12.04-server-cloudimg-amd64-disk1.img"
## on precise, cloud-localds is not in the archive, so just download it.
$ localds_url="http://bazaar.launchpad.net/~cloud-utils-dev/cloud-utils/trunk/download/head:/cloudlocalds-20120823015036-zkgo0cswqhhvener-1/cloud-localds"
$ which cloud-localds ||
{ sudo wget "$localds_url" -O /usr/local/bin/cloud-localds &&
sudo chmod 755 /usr/local/bin/cloud-localds; }
## download a cloud image to run, and convert it to virtualbox 'vdi' format
$ img_dist="${img_url##*/}"
$ img_raw="${img_dist%.img}.raw"
$ my_disk1="my-disk1.vdi"
$ wget $img_url -O "$img_dist"
$ qemu-img convert -O raw "${img_dist}" "${img_raw}"
$ vboxmanage convertfromraw "$img_raw" "$my_disk1"
## create a user-data file and an iso file with that user-data on it.
$ seed_iso="my-seed.iso"
$ cat > my-user-data <<EOF
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
$ cloud-localds "$seed_iso" my-user-data
##
## create a virtual machine using vboxmanage
##
$ vmname="precise-nocloud-1"
$ vboxmanage createvm --name "$vmname" --register
$ vboxmanage modifyvm "$vmname" \
--memory 512 --boot1 disk --acpi on \
--nic1 nat --natpf1 "guestssh,tcp,,2222,,22"
## Another option for networking would be:
## --nic1 bridged --bridgeadapter1 eth0
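## which, as a full command, would be something like (untested sketch,
## assuming 'eth0' is the host interface to bridge to):
##   vboxmanage modifyvm "$vmname" --nic1 bridged --bridgeadapter1 eth0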
$ vboxmanage storagectl "$vmname" --name "IDE_0" --add ide
$ vboxmanage storageattach "$vmname" \
--storagectl "IDE_0" --port 0 --device 0 \
--type hdd --medium "$my_disk1"
$ vboxmanage storageattach "$vmname" \
--storagectl "IDE_0" --port 1 --device 0 \
--type dvddrive --medium "$seed_iso"
$ vboxmanage modifyvm "$vmname" \
--uart1 0x3F8 4 --uartmode1 server my.ttyS0
## start up the VM
$ vboxheadless --vnc --startvm "$vmname"
## You should be able to connect to the vnc port that vboxheadless
## showed was used. The default would be '5900', so 'xvncviewer :5900'
## to connect.
##
## Also, after the system boots, you can ssh in with 'ubuntu:passw0rd'
## via 'ssh -p 2222 ubuntu@localhost'
##
## To see the serial console, where kernel output goes, you
## can use 'socat', like this:
## socat UNIX:my.ttyS0 -
## when you're finished, power off the VM
$ vboxmanage controlvm "$vmname" poweroff
## clean up after ourselves
$ vboxmanage storageattach "$vmname" \
--storagectl "IDE_0" --port 0 --device 0 --medium none
$ vboxmanage storageattach "$vmname" \
--storagectl "IDE_0" --port 1 --device 0 --medium none
$ vboxmanage closemedium dvd "${seed_iso}"
$ vboxmanage closemedium disk "${my_disk1}"
$ vboxmanage unregistervm "$vmname" --delete

Monday, February 11, 2013

Using Ubuntu cloud-images without a cloud

Since sometime in early 2009, we've put effort into building the Ubuntu cloud images and making them useful as "cloud images". From the beginning, they supported use as an instance on a cloud platform. Initially that was limited to EC2 and Eucalyptus, but over time, we've extended the "Data Sources" that the images support.

A "Data Source" to cloud-init provides 2 essential bits of information that turn a generic cloud-image into a cloud instance that is actually usable to its creator. Those are:
  • public ssh key
  • user-data
Without these, the cloud image cannot even be logged into.

Very early on it felt like we should have a way to use these images outside of a cloud. They were essentially ready-to-use installations of Ubuntu Server that allow you to bypass installation. In 11.04 we added the OVF data source and a tool in cloud-init's source tree for creating an OVF ISO Transport that cloud-init would read data from. It wasn't until 12.04 that we improved the "NoCloud" data source to make this even easier.

Available in cloud-utils, and packaged in Ubuntu 12.10, is a utility named 'cloud-localds'. This makes it trivial to create a "local datasource" that the cloud-images will then use to get the ssh key and/or user-data described above.

## Install the necessary packages
$ sudo apt-get install kvm cloud-utils genisoimage
## URL to most recent cloud image of 12.04
$ img_url="http://cloud-images.ubuntu.com/server/releases/12.04/release"
$ img_url="${img_url}/ubuntu-12.04-server-cloudimg-amd64-disk1.img"
## download the image
$ wget $img_url -O disk.img.dist
## Create a file with some user-data in it
$ cat > my-user-data <<EOF
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
## Convert the compressed qcow2 image we downloaded to an uncompressed qcow2
$ qemu-img convert -O qcow2 disk.img.dist disk.img.orig
## create the disk with NoCloud data on it.
$ cloud-localds my-seed.img my-user-data
## Create a delta disk to keep our .orig file pristine
$ qemu-img create -f qcow2 -b disk.img.orig disk.img
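## (optional check, not in the original commands) qemu-img can confirm the
## delta is backed by the pristine image:
##   qemu-img info disk.img    # should list disk.img.orig as the 'backing file'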
## Boot a kvm
$ kvm -net nic -net user -hda disk.img -hdb my-seed.img -m 512
After boot, you should see a login prompt that you can log into with 'ubuntu' and 'passw0rd' as specified by the user-data provided.

Some notes about the above:
  •  None of the commands other than 'apt-get install' require root.
  •  The 2 qemu-img commands are not strictly necessary. 
    • The 'convert' converts the compressed qcow2 disk image as downloaded to an uncompressed version.  If you don't do this, the image will still boot, but reads will go through decompression.
    • The 'create' creates a new qcow2 delta image backed by 'disk.img.orig'. It is not necessary, but is useful to keep the '.orig' file pristine. All writes in the kvm instance will go to the disk.img file.
  • libvirt, different kvm networking, or a different disk setup could have been used. The kvm command above is just the simplest for demonstration. (I'm a big fan of the '-curses' option to kvm; see the sketch just after this list.)
  • In the kvm command above, you'll need to hit 'ctrl-alt-3' to see kernel boot messages and boot progress. That is because the cloud images by default send console output to the first serial device, which a cloud provider is likely to log.
  • There is no default password in the Ubuntu images. The password was set by the user-data provided.
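As a small variation (my own sketch, not a command from the original post), the '-curses' option mentioned above renders the VM display in your terminal instead of a graphical window:

## same invocation as above, plus '-curses' to keep everything in the terminal
$ kvm -curses -net nic -net user -hda disk.img -hdb my-seed.img -m 512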
The content of 'my-user-data' can actually be anything that cloud-init supports as user-data.  So any custom user-data you have can be used (or developed) in this way.
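For example (my own sketch, not from the post), user-data that installs a package and authorizes an ssh key would look something like this; the key below is just a placeholder:

$ cat > my-user-data <<EOF
#cloud-config
packages:
  - git
ssh_authorized_keys:
  - ssh-rsa AAAA...your-public-key... you@example.com
EOF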