Monday, September 11, 2017
Cloud-init is the subject of the most recent episode of Podcast.__init__.
Go and have a listen to Episode 126.
I really enjoyed talking to Tobias about cloud-init and some of the difficulties of the project and goals for the future.
Enjoy!
Thursday, December 11, 2014
Snappy Ubuntu Core and cloud-init
Snappy Ubuntu Core was announced this week. In yesterday's blog post (Snappy Ubuntu Core and uvtool) I showed how you can use uvtool to create and manage snappy instances.
Now that we've got that covered, let’s look deeper into a very cool feature - the ability to customize the instance and automate its startup and configuration. For example, at instance creation time you can specify a snappy application to be installed. cloud-init is what allows you to do this, and it is installed inside the Snappy image. cloud-init receives this information from the user in the form of 'user-data'.
One of the formats that can be fed to cloud-init is called ‘cloud-config’. cloud-config is YAML-formatted data that is interpreted and acted on by cloud-init. For Snappy, we’ve added a couple of snappy-specific configuration values, included under the top-level 'snappy' key.
- ssh_enabled: determines if 'ssh' service is started or not. By default ssh is not enabled.
- packages: A list of snappy packages to install on first boot. Items in this list are snappy package names.
- runcmd: A list of commands run after boot has been completed. Commands are run as root. Each entry in the list can be a string or a list. If the entry is a string, it is interpreted by 'sh'. If it is a list, it is executed as a command and arguments without shell interpretation.
- ssh_authorized_keys: This is a list of strings. Each key present will be put into the default user's ssh authorized keys file. Note that ssh authorized keys are also accepted via the cloud’s metadata service.
- write_files: this allows you to write content to the filesystem. The module is still expected to work, but the user will have to be aware that much of the filesystem is read-only. Specifically, writing to file system locations that are not writable is expected to fail.
Example Cloud Config
It's always easiest to start from a working example. Below is one that demonstrates the usage of the config options listed above. Please note that user data intended to be consumed as cloud-config must begin with the line '#cloud-config'.
#cloud-config
snappy:
  ssh_enabled: True
  packages:
    - xkcd-webserver
write_files:
  - content: |
      #!/bin/sh
      echo "==== Hello Snappy! It is now $(date -R) ===="
    permissions: '0755'
    path: /writable/greet
runcmd:
  - /writable/greet | tee /run/hello.log
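One option from the list above that this example does not show is ssh_authorized_keys. A minimal fragment might look like the following; this is only a sketch, assuming the usual top-level cloud-config placement, and the key string is just a placeholder:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2E...your-key-here user@host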
Launching with uvtool
Follow yesterday's blog post to get a functional uvtool. Then save the example config above to a file, and launch your instance with it.
$ uvt-kvm create --wait --add-user-data=my-config.yaml snappy1 release=devel
Our user-data instructed cloud-init to do a number of different things. First, it wrote a file via 'write_files' to a writable space on disk, and then executed that file with 'runcmd'. Let's verify that was done:
$ uvt-kvm ssh snappy1 cat /run/hello.log
==== Hello Snappy! It is now Thu, 11 Dec 2014 18:16:34 +0000 ====
It also instructed cloud-init to install the Snappy 'xkcd-webserver' application.
$ uvt-kvm ssh snappy1 snappy versions
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *
There we can see that xkcd-webserver was installed; let's check that it is running:
$ uvt-kvm ip snappy1
192.168.122.80
$ wget -O - --quiet http://192.168.122.80/ | grep "<title>"
<title>XKCD rocks!</title>
Launching on Azure
The same user-data listed above also works on Microsoft Azure. Follow the instructions for setting up the azure command line tools, and then launch the instance, providing the '--custom-data' flag. A full command line might look like:
$ imgid=b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-core-devel-amd64-20141209-90-en-us-30GB
$ azure vm create snappy-test $imgid ubuntu \
--location "North Europe" --no-ssh-password \
--ssh-cert ~/.ssh/azure_pub.pem --ssh \
--custom-data my-config.yaml
Have fun playing with cloud-init!
Wednesday, December 10, 2014
Snappy Ubuntu Core and uvtool
Earlier this week, Ubuntu announced Snappy Ubuntu Core. As part of the announcement, a set of qemu-based instructions was included for checking out a snappy image on your local system. In addition to that method, we’ve been working on updates to bring support for the transactional images to uvtool. Have you used uvtool before? I like it, and tend to use it for day-to-day kvm images as it’s pretty simple. So let’s get to it.
Setting up a local Snappy Ubuntu Core environment with uvtool
As I’ve already mentioned, Ubuntu has a very simple set of tools for creating virtual machines using cloud images, called 'uvtool'. Uvtool offers an easy way to bring up images on your system in a kvm environment. Before we use uvtool to get snappy on your local environment, you’ll need to install the special version that has snappy support added to it:
$ sudo apt-add-repository ppa:snappy-dev/tools
$ sudo apt-get update
$ sudo apt-get install uvtool
$ newgrp libvirtd
You only need to do 'newgrp libvirtd' during the initial setup, and only if you were not already in the libvirtd group, which you can check by running the 'groups' command. A reboot or logout would have the same effect.
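If you're not sure whether you already have the group, a quick check along these lines works; this is just a sketch, assuming the group is named 'libvirtd' as above:
$ id -nG | grep -qw libvirtd && echo "already in libvirtd" || newgrp libvirtd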
uvtool uses ssh key authorization so that you can connect to your instances without being prompted for a password. If you do not have a ssh key in '~/.ssh/id_rsa.pub', you can create one now with:
$ ssh-keygen
We’re ready to roll. Let’s download the images:
$ uvt-simplestreams-libvirt sync --snappy flavor=core release=devel
This will download a pre-made cloud image of the latest Snappy Ubuntu Core build from http://cloud-images.ubuntu.com/snappy/. It will download about 110M, so be prepared to wait a little bit.
Now let’s start up an instance called 'snappy-test':
$ uvt-kvm create --wait snappy-test flavor=core
This will do the magic of setting up a libvirt domain, starting it and waiting for it to boot (via the --wait flag). Time to ssh into it:
$ uvt-kvm ssh snappy-test
You now have a running Snappy image that you’re ssh'd into.
If you want to manually ssh, or test that your snappy install of xkcd-webserver worked, you can get the IP address of the system with:
$ uvt-kvm ip snappy-test
192.168.122.136
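From there, a manual connection is just plain ssh to that address. This is a sketch; it assumes the image's default 'ubuntu' user, and the wget check only makes sense if you installed xkcd-webserver via user-data as in the companion post:
$ ssh ubuntu@192.168.122.136
$ wget -O - --quiet http://192.168.122.136/ | grep "<title>"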
When you're done playing, just destroy the instance with:
$ uvt-kvm destroy snappy-test
Have fun!
Thursday, August 14, 2014
mount-image-callback: easily modify the content of a disk image file
When dealing with virtualization or cloud, you often find yourself working with "images". Operating on these images can be difficult. As an example, you may have a disk image and need to change a single file inside it.
There is a tool called 'mount-image-callback' in cloud-utils that takes care of mounting and unmounting a disk image. It allows you to focus on exactly what you need to do. It supports mounting partitioned or unpartitioned images in any format that qemu can read (thanks to qemu-nbd).
Here's how you can use it interactively:
$ mount-image-callback disk1.img -- chroot _MOUNTPOINT_
% echo "I'm chrooted inside the image here"
% echo "192.168.1.1 www.example.com" >> /etc/hosts
% exit 0
or non-interactively:
mount-image-callback disk1.img -- \
sh -c 'rm -Rf $MOUNTPOINT/var/cache/apt'
or, for one of my typical use cases, to add a package to an image:
mount-image-callback --system-mounts --resolv-conf disk1.img -- \
  chroot _MOUNTPOINT_ apt-get install --assume-yes pastebinit
Above, mount-image-callback handles setting up the loopback or qemu-nbd devices required to mount the image and then mounts it at a temporary directory. It then runs the command you provide, unmounts the image, and exits with the return code of that command.
If the command you provide has the literal argument '_MOUNTPOINT_', then the path to the mount is substituted for it. That path is also made available in the environment variable MOUNTPOINT. Adding '--system-mounts' and '--resolv-conf' addresses the common need to mount proc, dev, or sys, and to temporarily replace /etc/resolv.conf in the filesystem so that networking will work in a chroot.
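Since the exit code is passed through, mount-image-callback also works for simple yes/no checks against an image. A small sketch, reusing disk1.img from the examples above and the MOUNTPOINT environment variable:
mount-image-callback disk1.img -- sh -c 'test -e "$MOUNTPOINT/etc/fstab"'
echo $?   # prints 0 if /etc/fstab exists in the image, non-zero otherwise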
mount-image-callback supports mounting either an unpartitioned image (i.e., dd if=/dev/sda1 of=my.img) or the first partition of a partitioned image (dd if=/dev/sda of=my.img). Two improvements I'd like to make are to let the user say which partition to mount (rather than always taking the first), and to find an /etc/fstab in the image and mount other relevant filesystems automatically as well.
Why not libguestfs?
libguestfs is a great tool for doing this. It operates essentially by launching a qemu (or kvm) guest, attaching the disk images to that guest, and letting the guest's linux kernel and qemu do the heavy lifting. Doing this provides security benefits, as mounting untrusted filesystems could cause a kernel crash. However, it also has performance costs and limitations, and doesn't provide the "direct" access you'd get by just mounting a filesystem.
Much of my work is done inside a cloud instance, and done by automation. As a result, the security benefits of using a layer of virtualization to access disk images are less important. Also, I'm likely operating on an official Ubuntu cloud image or other vendor provided image where trust is assumed.
In short, mounting an image and changing files or chrooting into it is acceptable in many cases and offers a more "direct" path to doing so.
Wednesday, August 14, 2013
lxc with fast cloning via overlayfs and userdata via clone hooks
Serge Hallyn and Stéphane Graber made some really nice improvements to LXC in the last few months. These include:
- user namespaces, which will bring us secure containers in 14.04 and the ability to safely run containers without root.
- a library with bindings for C, go and python3.
- cloning with overlayfs
- hooks executed at clone time.
Previously the ubuntu cloud template (which downloads a cloud image to create a container) allowed the user to specify userdata or public keys at container creation time. The change was really just to move the container customization code to a clone hook.
Thanks to the daily build ppa, you can do this on any release from 12.04 to 13.10.
Hopefully the example below explains this better. The times reported are from my Thinkpad X120e, which has a netbook-class cpu and a slow disk. Times clearly will vary, and these are not meant to be scientific results. If you do not see the embedded file below, please read the remainder of this post in the gist on github.
###
### Fast cloning with overlayfs, and specifying user-data on clone
### blog post:
### http://ubuntu-smoser.blogspot.com/2013/08/lxc-with-fast-cloning-via-overlayfs-and.html
###
### Eventually, this should make it into 13.10 and the stable lxc ppa
### https://launchpad.net/~ubuntu-lxc/+archive/stable
### But right now you'll have to use the daily ppa.
$ sudo apt-add-repository -y ppa:ubuntu-lxc/daily
$ sudo apt-get update --quiet
$ sudo apt-get install --assume-yes --quiet lxc
### Now, create a pristine 'source' container which we'll clone repeatedly.
### The create will download a tarball of the root filesystem from
### http://cloud-images.ubuntu.com and extract it, so this will take some
### time. Subsequent creates will use the cached download.
$ sudo lxc-create -n source-precise-amd64 -t ubuntu-cloud -- \
    --release=precise --arch=amd64
### Compare cloning and deleting a container with and without overlayfs.
###
$ TIMEFORMAT=$'real: %3R user: %3U system: %3S'
### First the old method.
$ time sudo lxc-clone -o source-precise-amd64 -n test1
Created container test1 as copy of source-precise-amd64
real: 29.842 user: 17.392 system: 21.616
$ time sudo lxc-destroy -n test1
real: 2.766 user: 0.184 system: 2.528
## Now using overlayfs snapshot
$ time sudo lxc-clone --snapshot -B overlayfs -o source-precise-amd64 -n test1
Created container test1 as snapshot of source-precise-amd64
real: 0.143 user: 0.024 system: 0.044
$ time sudo lxc-destroy -n test1
real: 0.044 user: 0.008 system: 0.028
###
### It's clear that the clone and destroy were more than a little bit faster:
### 29 seconds to 0.14 seconds, and 2.8 to 0.04 seconds respectively.
###
### Now, let's see about performance of booting a system, and demonstrate
### passing user-data.
###
### You can see options you can pass to 'clone' for the ubuntu-cloud
### clone with '/usr/share/lxc/hooks/ubuntu-cloud-prep --help'.
### The most useful are probably '--userdata' and '--auth-key'.
### Use a user-data script that just powers off, to time boot to shutdown.
$ printf "%s\n%s\n" '#!/bin/sh' '/sbin/poweroff' > my-userdata
### clone then start without overlayfs
$ sudo lxc-clone -o source-precise-amd64 -n p1 \
    -- --userdata=my-userdata
$ time sudo lxc-start -n p1
<4>init: hwclock main process (6) terminated with status 77
real: 14.137 user: 10.804 system: 1.468
### clone then start with overlayfs
$ sudo lxc-clone -o source-precise-amd64 --snapshot -B overlayfs -n p2 \
    -- --userdata=my-userdata
$ time sudo lxc-start -n p2
<4>init: hwclock main process (6) terminated with status 77
...
 * Will now halt
real: 12.489 user: 10.944 system: 1.672
### So, we see above that overlayfs start/stop was a bit faster.
### I think those differences are inside the realm of noise, but
### they do demonstrate that at least for this test there was not
### a huge cost for the benefit of overlayfs.
Tuesday, July 23, 2013
Using Ubuntu cloud images on VirtualBox
A few months ago, I wrote an article on "Using Ubuntu cloud-images without a cloud". That article showed how to do this with kvm. VirtualBox is another virtualization technology that is especially popular on Mac and Windows. So I figured I'd give the same basic article a try, using virtualbox rather than kvm.
I used 'vboxmanage' and 'vboxheadless' here, but I'm sure the same can be accomplished via the virtualbox gui, and with similar commands on Mac/OSX or Windows.
So, below is roughly the same thing as the kvm post, but with virtualbox. To verify function, I did this on an Ubuntu 12.04 host with an Ubuntu 12.04 guest, but later versions should also work.
## Install necessary packages
$ sudo apt-get install virtualbox-ose qemu-utils genisoimage cloud-utils
## get kvm unloaded so virtualbox can load
$ sudo modprobe -r kvm_amd kvm_intel
$ sudo service virtualbox stop
$ sudo service virtualbox start
## URL to most recent cloud image of 12.04
$ img_url="http://cloud-images.ubuntu.com/server/releases/12.04/release"
$ img_url="${img_url}/ubuntu-12.04-server-cloudimg-amd64-disk1.img"
## on precise, cloud-localds is not in the archive. just download it.
$ localds_url="http://bazaar.launchpad.net/~cloud-utils-dev/cloud-utils/trunk/download/head:/cloudlocalds-20120823015036-zkgo0cswqhhvener-1/cloud-localds"
$ which cloud-localds ||
    { sudo wget "$localds_url" -O /usr/local/bin/cloud-localds &&
      sudo chmod 755 /usr/local/bin/cloud-localds; }
## download a cloud image to run, and convert it to virtualbox 'vdi' format
$ img_dist="${img_url##*/}"
$ img_raw="${img_dist%.img}.raw"
$ my_disk1="my-disk1.vdi"
$ wget $img_url -O "$img_dist"
$ qemu-img convert -O raw "${img_dist}" "${img_raw}"
$ vboxmanage convertfromraw "$img_raw" "$my_disk1"
## create a user-data file and an iso file with that user-data on it.
$ seed_iso="my-seed.iso"
$ cat > my-user-data <<EOF
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
$ cloud-localds "$seed_iso" my-user-data
##
## create a virtual machine using vboxmanage
##
$ vmname="precise-nocloud-1"
$ vboxmanage createvm --name "$vmname" --register
$ vboxmanage modifyvm "$vmname" \
    --memory 512 --boot1 disk --acpi on \
    --nic1 nat --natpf1 "guestssh,tcp,,2222,,22"
## Another option for networking would be:
##   --nic1 bridged --bridgeadapter1 eth0
$ vboxmanage storagectl "$vmname" --name "IDE_0" --add ide
$ vboxmanage storageattach "$vmname" \
    --storagectl "IDE_0" --port 0 --device 0 \
    --type hdd --medium "$my_disk1"
$ vboxmanage storageattach "$vmname" \
    --storagectl "IDE_0" --port 1 --device 0 \
    --type dvddrive --medium "$seed_iso"
$ vboxmanage modifyvm "$vmname" \
    --uart1 0x3F8 4 --uartmode1 server my.ttyS0
## start up the VM
$ vboxheadless --vnc --startvm "$vmname"
## You should be able to connect to the vnc port that vboxheadless
## showed was used. The default would be '5900', so 'xvcviewer :5900'
## to connect.
##
## Also, after the system boots, you can ssh in with 'ubuntu:passw0rd'
## via 'ssh -p 2222 ubuntu@localhost'
##
## To see the serial console, where kernel output goes, you
## can use 'socat' against the socket created above (my.ttyS0), like this:
##   socat UNIX:my.ttyS0 -
## When you are done, power the VM off:
$ vboxmanage controlvm "$vmname" poweroff
## clean up after ourselves
$ vboxmanage storageattach "$vmname" \
    --storagectl "IDE_0" --port 0 --device 0 --medium none
$ vboxmanage storageattach "$vmname" \
    --storagectl "IDE_0" --port 1 --device 0 --medium none
$ vboxmanage closemedium dvd "${seed_iso}"
$ vboxmanage closemedium disk "${my_disk1}"
$ vboxmanage unregistervm $vmname --delete
Monday, February 11, 2013
Using Ubuntu cloud-images without a cloud
Since sometime in early 2009, we've put effort into building the Ubuntu cloud images and making them useful as "cloud images". From the beginning, they supported use as an instance on a cloud platform. Initially that was limited to EC2 and Eucalyptus, but over time, we've extended the "Data Sources" that the images support.
A "Data Source" to cloud-init provides 2 essential bits of information that turn a generic cloud-image into a cloud instance that is actually usable to its creator. Those are:
- public ssh key
- user-data
Very early on it felt like we should have a way to use these images outside of a cloud. They were essentially ready-to-use installations of Ubuntu Server that let you bypass installation. In 11.04 we added OVF as a data source, along with a tool in cloud-init's source tree for creating an OVF ISO Transport that cloud-init would read data from. It wasn't until 12.04 that we improved the "NoCloud" data source to make this even easier.
Available in cloud-utils, and packaged in Ubuntu 12.10 is a utility named 'cloud-localds'. This makes it trivial to create a "local datasource" that the cloud-images will then use to get the ssh key and/or user-data described above.
## Install necessary packages
$ sudo apt-get install kvm cloud-utils genisoimage
## URL to most recent cloud image of 12.04
$ img_url="http://cloud-images.ubuntu.com/server/releases/12.04/release"
$ img_url="${img_url}/ubuntu-12.04-server-cloudimg-amd64-disk1.img"
## download the image
$ wget $img_url -O disk.img.dist
## Create a file with some user-data in it
$ cat > my-user-data <<EOF
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
## Convert the compressed qcow file downloaded to an uncompressed qcow2
$ qemu-img convert -O qcow2 disk.img.dist disk.img.orig
## create the disk with NoCloud data on it.
$ cloud-localds my-seed.img my-user-data
## Create a delta disk to keep our .orig file pristine
$ qemu-img create -f qcow2 -b disk.img.orig disk.img
## Boot a kvm
$ kvm -net nic -net user -hda disk.img -hdb my-seed.img -m 512
After boot, you should see a login prompt that you can log into with 'ubuntu' and 'passw0rd', as specified by the user-data provided.
Some notes about the above:
- None of the commands other than 'apt-get install' require root.
- The 2 qemu-img commands are not strictly necessary.
- The 'convert' converts the compressed qcow2 disk image as downloaded to an uncompressed version. If you don't do this the image will still boot, but reads will go through decompression.
- The 'create' creates a new qcow2 delta image backed by 'disk.img.orig'. It is not necessary, but useful to keep the '.orig' file pristine. All writes in the kvm instance will go to the disk.img file.
- libvirt, or different kvm networking or disk options, could have been used. The kvm command above is just the simplest for demonstration. (I'm a big fan of the '-curses' option to kvm.)
- In the kvm command above, you'll need to hit 'ctrl-alt-3' to see kernel boot messages and boot progress. That is because the cloud images by default send console output to the first serial device, which a cloud provider is likely to log (see the example after these notes).
- There is no default password in the Ubuntu images. The password was set by the user-data provided.
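If you'd rather see that serial console directly in your terminal instead of hitting ctrl-alt-3, one option is qemu/kvm's standard -nographic flag, which puts the first serial port on stdio. A sketch using the same disks as above:
$ kvm -net nic -net user -hda disk.img -hdb my-seed.img -m 512 -nographic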