tag:blogger.com,1999:blog-86129001554888510212024-03-15T18:09:55.803-07:00smoser's thoughtsScott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.comBlogger37125tag:blogger.com,1999:blog-8612900155488851021.post-80718160300391680882017-09-11T06:42:00.000-07:002017-09-11T06:42:06.806-07:00Podcast.__init__ episode on cloud-initCloud-init is the subject for the most recent episode of <a href="https://www.podcastinit.com/" target="_blank">Podcast.__init__.</a><br />
Go and have a listen to <a href="http://bit.ly/2eZUsY1" target="_blank">Episode 126</a>.<br />
<br />
I really enjoyed talking to Tobias about cloud-init and some of the difficulties of the project and goals for the future.<br />
<br />
Enjoy!Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-30081840521325999402014-12-11T11:14:00.001-08:002014-12-11T11:14:56.804-08:00Snappy Ubuntu Core and cloud-init<a href="http://www.ubuntu.com/cloud/tools/snappy" target="_blank">Snappy Ubuntu Core</a> was announced this week. In yesterday's blog post (<a href="http://ubuntu-smoser.blogspot.com/2014/12/snappy-ubuntu-core-and-uvtool.html" target="_blank">Snappy Ubuntu Core and uvtool</a>) I showed how you can use uvtool to create and manage snappy instances.<br />
<br />
Now that we've got that covered, let’s look deeper into a very cool feature - the ability to customize the instance and automate its startup and configuration. For example, at instance creation time you can specify a snappy application to be installed. cloud-init is what allows you to do this, and it is installed inside the Snappy image. cloud-init receives this information from the user in the form of 'user-data'.<br />
<br />
One of the formats that can be fed to cloud-init is called ‘cloud-config’. cloud-config is yaml formatted data that is interpreted and acted on by cloud-init. For Snappy, we’ve added a couple of specific configuration values. These live under the top-level 'snappy' key.<br />
<ul>
<li><span style="font-family: "Courier New",Courier,monospace;"><b>ssh_enabled</b></span>: determines whether the 'ssh' service is started. By default, ssh is not enabled.</li>
<li><b><span style="font-family: "Courier New",Courier,monospace;">packages</span></b>: A list of snappy packages to install on first boot. Items in this list are snappy package names.</li>
</ul>
When running inside snappy, cloud-init still provides many of the features it offers on traditional instances. Some useful configuration entries:<br />
<br />
<ul>
<li><span style="font-family: "Courier New",Courier,monospace;"><b>runcmd</b></span>: A list of commands run after boot has been completed. Commands are run as root. Each entry in the list can be a string or a list. If the entry is a string, it is interpreted by 'sh'. If it is a list, it is executed as a command and arguments without shell interpretation.</li>
<li><span style="font-family: "Courier New",Courier,monospace;"><b>ssh_authorized_keys</b><span style="font-family: inherit;">:</span></span><span style="font-family: inherit;"> This is a list of strings. Each key present will be put into the default user's ssh authorized keys file. Note that ssh authorized keys are also accepted via the cloud’s metadata service.</span></li>
<li><span style="font-family: inherit;"><span style="font-family: "Courier New",Courier,monospace;"><b>write_files</b></span>: this allows you to write content to the filesystem. The module is still expected to work, but the user will have to be aware that much of the filesystem is read-only. Specifically, writing to file system locations that are not writable is expected to fail.</span></li>
</ul>
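To make the 'runcmd' string-vs-list distinction concrete, here is a hypothetical user-data file written from the shell (the file name and commands are illustrative, not from this post):

```shell
# Write a user-data file showing both 'runcmd' entry forms.
cat > my-user-data <<'EOF'
#cloud-config
runcmd:
  # string form: interpreted by 'sh', so redirection and $(...) work
  - echo "booted at $(date -R)" > /run/boot-stamp
  # list form: executed directly, with no shell interpretation
  - [touch, /run/cloud-init-ran]
EOF
grep -c '^#cloud-config' my-user-data
```

The single-quoted heredoc delimiter keeps `$(date -R)` literal in the file so that the shell running on the instance, not the one writing the file, expands it.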
<span style="font-family: inherit;">Some cloud-init config modules are simply not going to work. For example, traditional packages will not be installed by 'apt' as the root filesystem is read-only.</span><br />
<br />
<h3>
Example Cloud Config</h3>
It's always easiest to start from a working example. Below is one that demonstrates the config options listed above. Please note that user data intended to be consumed as cloud-config must begin with the line '<span style="font-family: "Courier New",Courier,monospace;">#cloud-config</span>'.<br />
<pre style="background-color: #eeeeee; font-family: 'Courier New',Courier,monospace;">#cloud-config
snappy:
  ssh_enabled: True
  packages:
    - xkcd-webserver

write_files:
  - content: |
      #!/bin/sh
      echo "==== Hello Snappy! It is now $(date -R) ===="
    permissions: '0755'
    path: /writable/greet

runcmd:
  - /writable/greet | tee /run/hello.log
</pre>
<h3>
Launching with uvtool</h3>
Follow yesterday's blog post to get a functional uvtool. Then, save the example config above to a file, and launch your instance with it.<br />
<br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New", Courier, monospace;">$ uvt-kvm create --wait --add-user-data=my-config.yaml snappy1 release=devel</span></span><br />
<br />
Our user-data instructed cloud-init to do a number of different things. First, it wrote a file via 'write_files' to a writable space on disk, and then executed that file with 'runcmd'. Let's verify that was done:<br />
<br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New", Courier, monospace;">$ uvt-kvm ssh snappy1 cat /run/hello.log</span></span><br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New", Courier, monospace;">==== Hello Snappy! It is now Thu, 11 Dec 2014 18:16:34 +0000 ====</span></span><br />
<br />
<span style="font-family: inherit;">It also instructed cloud-init to install the Snappy 'xkcd-webserver' application.</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><span style="background-color: #eeeeee;">$ uvt-kvm ssh snappy1 snappy versions<br />Part Tag Installed Available Fingerprint Active <br />ubuntu-core edge 141 - 7f068cb4fa876c * <br />xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *</span></span><br />
<br />
<span style="font-family: inherit;"><span style="font-family: inherit;">There we can see that xkcd-webserver was installed, <span style="font-family: inherit;">lets check that it is running</span>:</span></span><br />
<br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">$ uvt-kvm ip snappy1</span></span><br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">192.168.122.80</span></span><br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">$ wget -O - --quiet http://192.168.122.80/ | grep <title> </span></span><br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;"><title>XKCD rocks!</title></span></span><br />
<br />
<h3>
Launching on Azure</h3>
The same user-data listed above also works on Microsoft Azure. Follow the <a href="http://www.ubuntu.com/cloud/tools/snappy#snappy-azure" target="_blank">instructions</a> for setting up the azure command line tools, and then launch the instance, providing the '<span style="font-family: "Courier New",Courier,monospace;">--custom-data</span>' flag. A full command line might look like:<br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">$ imgid=</span><span style="font-family: "Courier New",Courier,monospace;">b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-core-devel-amd64-20141209-90-en-us-30GB</span></span><br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">$ azure vm create snappy-test $imgid ubuntu \<br /> --location "North Europe" --no-ssh-password \<br /> --ssh-cert ~/.ssh/azure_pub.pem --ssh \</span></span><br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;"> --custom-data my-config.yaml </span></span><br />
<br />
<br />
Have fun playing with cloud-init!Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com3tag:blogger.com,1999:blog-8612900155488851021.post-81717513318234411782014-12-10T09:19:00.000-08:002014-12-10T10:28:09.283-08:00Snappy Ubuntu Core and uvtoolEarlier this week, Ubuntu <a href="http://www.markshuttleworth.com/archives/1434" target="_blank">announced</a> the <a href="http://www.ubuntu.com/cloud/tools/snappy" target="_blank">Snappy Ubuntu Core</a>. As part of the announcement, a set of qemu based instructions were included for checking out a snappy image on your local system. In addition to that method, we’ve been working on updates to bring support for the transactional images to uvtool. Have you used uvtool before? I like it, and tend to use it for day-to-day kvm images, as it’s pretty simple. So let’s get to it.<br />
<br />
<h3>
Setting up a local Snappy Ubuntu Core environment with uvtool</h3>
<br />
As I’ve already mentioned, Ubuntu has a very simple set of tools for creating virtual machines using cloud images, called '<a href="https://help.ubuntu.com/lts/serverguide/cloud-images-and-uvtool.html" target="_blank">uvtool</a>'. Uvtool offers an easy way to bring up images on your system in a kvm environment. Before we use uvtool to get snappy on your local environment, you’ll need to install the special version that has snappy support added to it:<br />
<br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">$ sudo apt-add-repository ppa:snappy-dev/tools<br />$ sudo apt-get update<span style="background-color: #cccccc;"></span><br />$ sudo apt-get install uvtool<br />$ newgrp libvirtd</span></span><br />
<br />
You only need to do '<span style="font-family: "Courier New",Courier,monospace;">newgrp libvirtd</span>' during the initial setup, and only if you were not already in the libvirtd group, which you can check by running the 'groups' command. A reboot or logging out and back in would have the same effect.<br />
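If you're not sure whether the 'newgrp' step applies to you, here is a quick check (a sketch, using the 'libvirtd' group name from above):

```shell
# Print whether the current user already belongs to the libvirtd group.
if id -nG | grep -qw libvirtd; then
    echo "already in libvirtd; no newgrp needed"
else
    echo "not in libvirtd; run 'newgrp libvirtd' or log out and back in"
fi
```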
<br />
uvtool uses ssh key authentication so that you can connect to your instances without being prompted for a password. If you do not have an ssh key in '<span style="font-family: "Courier New",Courier,monospace;">~/.ssh/id_rsa.pub</span>', you can create one now with:<br />
<span style="background-color: #eeeeee;"><br /><span style="font-family: "Courier New",Courier,monospace;">$ ssh-keygen</span></span><br />
<br />
We’re ready to roll. Let’s download the images:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="background-color: #eeeeee;">$ uvt-simplestreams-libvirt sync --snappy flavor=core release=devel</span></span><br />
<br />
This will download a pre-made cloud image of the latest Snappy Core image from <a href="http://cloud-images.ubuntu.com/snappy/">http://cloud-images.ubuntu.com/snappy/</a>. It will download about 110M, so be prepared to wait a little bit.<br />
<br />
Now let’s start up an instance called '<span style="font-family: "Courier New",Courier,monospace;">snappy-test</span>':<br />
<br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">$ uvt-kvm create --wait snappy-test flavor=core</span></span><br />
<br />
This will do the magic of setting up a libvirt domain, starting it and waiting for it to boot (via the --wait flag). Time to ssh into it:<br />
<br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">$ uvt-kvm ssh snappy-test</span></span><br />
<br />
You now have a running Snappy instance that you're ssh'd into.<br />
<br />
If you want to manually ssh, or test that your snappy install of xkcd-webserver worked, you can get the IP address of the system with:<br />
<br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">$ uvt-kvm ip snappy-test</span></span><br />
<span style="background-color: #eeeeee;"><span style="font-family: "Courier New",Courier,monospace;">192.168.122.136</span></span><br />
<br />
When you're done playing, just destroy the instance with:<br />
<span style="font-family: "Courier New",Courier,monospace;"><span style="background-color: #eeeeee;">$ uvt-kvm destroy snappy-test</span></span><br />
<br />
Have fun!Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-13273500804387857632014-08-14T08:01:00.000-07:002014-08-14T08:01:00.253-07:00mount-image-callback: easily modify the content of a disk image fileLots of times when dealing with virtualization or cloud, you find yourself working with "images". Operating on these images can be difficult. For example, you may have a disk image and need to change a single file inside it.<br /><br />There is a tool called 'mount-image-callback' in cloud-utils that takes care of mounting and unmounting a disk image. It allows you to focus on exactly what you need to do. It supports mounting partitioned or unpartitioned images in any format that qemu can read (thanks to qemu-nbd).<br /><br />Here's how you can use it interactively:<br /><br /><span style="font-family: "Courier New",Courier,monospace;"> $ mount-image-callback disk1.img -- chroot _MOUNTPOINT_ <br /> % echo "I'm chrooted inside the image here"<br /> % echo "192.168.1.1 www.example.com" >> /etc/hosts<br /> % exit 0</span><br /><br />or non-interactively:<br /><br /> <span style="font-family: "Courier New",Courier,monospace;">mount-image-callback disk1.img -- \</span><br />
<span style="font-family: "Courier New",Courier,monospace;"> sh -c 'rm -Rf $MOUNTPOINT/var/cache/apt'</span><br /><br />or one of my typical use cases, to add a package to an image.<br />
<br /><span style="font-family: "Courier New",Courier,monospace;"> mount-image-callback --system-mounts --resolv-conf -- \<br /> chroot _MOUNTPOINT_ apt-get install --assume-yes pastebinit</span><br /><br />Above, mount-image-callback handles setting up the loopback or qemu-nbd devices required to mount the image, and mounts it to a temporary directory. It then runs the command you provide, unmounts the image, and exits with the return code of that command.<br />
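That lifecycle (mount, expose the path, run the command, unmount, propagate the exit code) can be modeled with a toy shell function. This is only an illustration of the contract, not the real implementation; it uses a temporary directory in place of an actual image mount:

```shell
# Toy model of mount-image-callback's contract: "mount" the image,
# expose the path as $MOUNTPOINT, run the command, "unmount", and
# return the command's exit code.
mount_image_callback_sketch() {
    img=$1; shift              # image path (unused in this toy model)
    mnt=$(mktemp -d)           # stands in for the real mount
    MOUNTPOINT=$mnt; export MOUNTPOINT
    "$@"                       # run the user's command
    rc=$?
    rm -rf "$mnt"              # stands in for unmount and cleanup
    return $rc
}
```

For example, `mount_image_callback_sketch disk1.img sh -c 'ls "$MOUNTPOINT"'` runs the command with MOUNTPOINT set, just as the real tool does for its callback.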
<br />
If the command you provide has the literal argument '<span style="font-family: "Courier New",Courier,monospace;">_MOUNTPOINT_</span>', then it will substitute the path to the mount. It also makes that path available in the environment variable <span style="font-family: "Courier New",Courier,monospace;">MOUNTPOINT</span>. Adding '--system-mounts' and '--resolv-conf' addresses the common need to mount proc, dev or sys, and to modify and replace /etc/resolv.conf in the filesystem so that networking will work in a chroot.<br /><br />mount-image-callback supports mounting either an unpartitioned image (i.e., dd if=/dev/sda1 of=my.img) or the first partition of a partitioned image (dd if=/dev/sda of=my.img). Two improvements I'd like to make are to allow the user to tell it which partition to mount (rather than assuming the first), and to do so automatically by finding an /etc/fstab and mounting other relevant filesystems as well.<br /><br /><b>Why not libguestfs?</b><br /><br />libguestfs is a great tool for doing this. It operates essentially by launching a qemu (or kvm) guest, attaching disk images to the guest, and then letting the guest's linux kernel and qemu do the heavy lifting. Doing this provides security benefits, as mounting untrusted filesystems could cause a kernel crash. However, it also has performance costs and limitations, and doesn't provide "direct" access as you'd get via just mounting a filesystem.<br /><br />Much of my work is done inside a cloud instance, and done by automation. As a result, the security benefits of using a layer of virtualization to access disk images are less important. 
Also, I'm likely operating on an official Ubuntu cloud image or other vendor provided image where trust is assumed.<br /><br />In short, mounting an image and changing files or chrooting is acceptable in many cases and offers more "direct" path to doing so.Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com2tag:blogger.com,1999:blog-8612900155488851021.post-54419997443548495222013-08-14T10:50:00.000-07:002013-08-14T12:28:12.006-07:00lxc with fast cloning via overlayfs and userdata via clone hooks<span class="file-name js-selectable-text"><a href="http://s3hh.wordpress.com/" target="_blank">Serge Hallyn</a> and <a href="http://www.stgraber.org/" target="_blank">Stéphane Graber</a> made some really nice improvements to LXC in the last few months. These include:</span><br />
<ul>
<li><span class="file-name js-selectable-text"><a href="http://s3hh.wordpress.com/2012/05/10/user-namespaces-available-to-play/" target="_blank">user namespaces</a>, which will bring us secure containers in 14.04 and the ability to safely run containers without root.</span> </li>
<li>a library with bindings for C, go and python3.</li>
<li>cloning with overlayfs</li>
<li>hooks executed at clone time.</li>
</ul>
<span class="file-name js-selectable-text">I had previously worked with <a href="http://blog.utlemming.org/" target="_blank">Ben Howard</a> on the 'ubuntu cloud' template, and I just finished some updates to it that take advantage of overlayfs and clone hooks to provide a great environment to use or test cloud-init.<br /><br />Previously the ubuntu cloud template (which downloads a cloud image to create a container) allowed the user to specify userdata or public keys at container creation time. The change was really just to move the container customization code to a clone hook.<br /><br />Thanks to the daily build ppa, you can do this on any release from 12.04 to 13.10.<br /><br />Hopefully the example below explains this better. The times reported are from my Thinkpad X120e, which is a netbook class cpu and slow disk. Times clearly will vary, and these are not meant to be scientific results.</span>
If you do not see the embedded file below, please read the remainder of this post in the <A HREF="https://gist.github.com/smoser/6199772#file-lxc-clone-readme-sh">gist on github</A>.
<script src="https://gist.github.com/smoser/6199772.js?file=lxc-clone-readme.sh"></script>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-44158561303075515282013-07-23T14:13:00.000-07:002013-08-10T03:10:21.271-07:00Using Ubuntu cloud images on VirtualBoxA few months ago, I wrote an article on "<a href="http://ubuntu-smoser.blogspot.com/2013/02/using-ubuntu-cloud-images-without-cloud.html">Using Ubuntu cloud-images without a cloud</a>". That article showed how to do this with kvm. Virtualbox is another virtualiztion technology that is especially popular on Mac and Windows. So, I figured I'd give the same basic article a try, but using virtualbox rather than kvm.<br />
<br />
I used <span style="font-family: "Courier New",Courier,monospace;">'vboxmanage'</span> and <span style="font-family: "Courier New",Courier,monospace;">'virtualbox</span>' here, but I'm sure the same can be accomplished via the VirtualBox GUI, and similar commands should work on Mac OS X or Windows.<br />
<br />
So, below is roughly the same thing as the kvm post, but with VirtualBox. To verify function, I did this on an Ubuntu 12.04 host with an Ubuntu 12.04 guest, but later versions should also work.<br />
<br />
<script src="https://gist.github.com/smoser/6066204.js"></script>
Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com0tag:blogger.com,1999:blog-8612900155488851021.post-31545916500874613672013-02-11T10:57:00.001-08:002013-07-23T14:14:45.199-07:00Using Ubuntu cloud-images without a cloudSince sometime in early 2009, we've put effort into building the <a href="http://cloud-images.ubuntu.com/">Ubuntu cloud images</a> and making them useful as "cloud images". From the beginning, they supported use as an instance on a cloud platform. Initially that was limited to EC2 and Eucalyptus, but over time, we've extended the "Data Sources" that the images support.<br />
<br />
A "Data Source" to cloud-init provides 2 essential bits of information that turn a generic cloud-image into a cloud instance that is actually usable to its creator. Those are:
<br />
<ul>
<li>public ssh key</li>
<li>user-data</li>
</ul>
Without these, the cloud image cannot even be logged into.<br />
<br />
Very early on it felt like we should have a way to use these images outside of a cloud. They were essentially ready-to-use installations of Ubuntu Server that allow you to bypass installation. In 11.04 we added the OVF as a data source and a tool in cloud-init's source tree for creating an OVF ISO Transport that cloud-init would read data from. It wasn't until 12.04 that we improved the "NoCloud" data source to make this even easier.<br />
<br />
Available in <a href="http://launchpad.net/cloud-utils">cloud-utils</a>, and packaged in Ubuntu 12.10 is a utility named 'cloud-localds'. This makes it trivial to create a "local datasource" that the cloud-images will then use to get the ssh key and/or user-data described above.<br />
<br />
<script src="https://gist.github.com/smoser/4756561.js"></script>
After boot, you should see a login prompt that you can log into with '<span style="font-family: "Courier New",Courier,monospace;">ubuntu'</span> and '<span style="font-family: "Courier New",Courier,monospace;">passw0rd</span>' as specified by the user-data provided.<br />
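The 'passw0rd' login above is set via cloud-config. A minimal sketch of such a user-data file and the cloud-localds call (file names are illustrative; the cloud-localds step is guarded here in case cloud-utils is not installed):

```shell
# Minimal user-data setting a password for the default 'ubuntu' user.
cat > my-user-data <<'EOF'
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
# Build a NoCloud seed image that cloud-init will read at boot.
if command -v cloud-localds >/dev/null 2>&1; then
    cloud-localds my-seed.img my-user-data
else
    echo "cloud-localds not found; install the cloud-utils package"
fi
```

The seed image is then attached to the VM as a second disk, alongside the cloud image itself.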
<br />
Some notes about the above: <br />
<ul>
<li> None of the commands other than 'apt-get install' require root.</li>
<li> The 2 qemu-img commands are not strictly necessary. </li>
<ul>
<li>The 'convert' converts the compressed qcow2 disk image as downloaded to an uncompressed version. If you don't do this, the image will still boot, but every read will go through decompression.</li>
<li>The 'create' creates a new qcow2 delta image backed by 'disk1.img.orig'. It is not necessary, but useful to keep the '.orig' file pristine. All writes in the kvm instance will go to the disk.img file.</li>
</ul>
<li>libvirt, or different kvm networking or disk setups, could have been used. The kvm command above is just the simplest for demonstration. (I'm a big fan of the '-curses' option to kvm.)</li>
<li>In the kvm command above, you'll need to hit 'ctrl-alt-3' to see kernel boot messages and boot progress. That is because the cloud images by default send console output to the first serial device, which a cloud provider is likely to log.</li>
<li>There is <b>no default password</b> in the Ubuntu images. The password was set by the user-data provided.</li>
</ul>
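The two qemu-img steps from the notes can be sketched as follows (file names are illustrative; '-F qcow2' makes the backing format explicit, which newer qemu-img versions require):

```shell
command -v qemu-img >/dev/null 2>&1 || { echo "qemu-img not installed"; exit 0; }
# Stand-in for the compressed image as downloaded.
qemu-img create -f qcow2 downloaded.img 10M
# 'convert': make an uncompressed working copy, kept pristine.
qemu-img convert -O qcow2 downloaded.img disk1.img.orig
# 'create': a delta image backed by the pristine copy; all writes
# from the kvm instance land in disk1.img, leaving the .orig untouched.
qemu-img create -f qcow2 -F qcow2 -b disk1.img.orig disk1.img
```

If you ever want a fresh start, delete disk1.img and recreate the delta; the backing file is never modified.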
The content of 'my-user-data' can actually be anything that cloud-init supports as user-data. So any custom user-data you have can be used (or developed) in this way.<br />
<br />
Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com22tag:blogger.com,1999:blog-8612900155488851021.post-9730322784082512752012-01-04T17:29:00.000-08:002012-01-04T17:36:13.231-08:00$40 mp3 player with wifi, speaker, linux support (palm pixi plus)<p>For almost 4 years I have used an mp3 player from Creative Labs named <a href="http://support.creative.com/Products/ProductDetails.aspx?catID=213&CatName=MP3+Players&subCatID=213&subCatName=ZEN&prodID=17437&prodName=ZEN+Stone+Plus+with+built-in+speaker">"ZEN Stone Plus with built-in speaker"</a>. We used it to play music for our toddler all night long, nearly every night.</P><p>It was an absolutely wonderful product. Battery life via rechargeable battery was excellent, it was small enough to forget about in your pocket, and the speaker was sufficient for what we needed.</P><p>The only deficiency in my opinion was that it could not play music while plugged into AC power. That meant that a.) the battery would have to last the 10+ hours every night and b.) we had to plug it into AC power every morning to charge it. In the end, it was the latter that caused me to need to replace it. After probably 1000+ plug-in/remove cycles (many by a child under 5 years old), the pins in the USB connection eventually failed.</P><p>If you're looking for a portable MP3 player with a built-in speaker, your options are very few. The iPod Touch (and iPhone) are perfectly good solutions, but come with a hefty price tag, even used. So, I had to get creative.</P><p>I'd seen the Palm Pixi appear on a couple "deal-a-day" websites at the $30 price range, and that led me to consider the Palm Pixi Plus. The Pixi Plus has the following features that made it look *very* tempting:<br />
<ul><li>8GB storage</li>
<li>Internal Speaker</li>
<li>wifi</li>
<li>web browser</li>
<li>GPS</li>
<li>Camera</li>
<li>Touch screen and physical keyboard</li>
<li>can play music while charging</li>
<li>runs linux! (just geeky)</li>
<li>open development platform</li>
</UL></P><p>However, googling left me unsure about a couple of things:<br />
<ul><li>Could I use the pixi's wifi without first activating it with some carrier? I had no interest in a phone, or a monthly bill. It did seem that worst case, I could activate on a pay-as-you-go provider and get going for probably less than $10 total.</li>
<li>If I could manage to avoid activation, could I still have access to the Palm WebOS Store? Having an "App Store" was an unnecessary upgrade from the zen stone we had before, but sure would be nice.</li>
</ul></P><p>In the end, the answer to each of the above questions was 'Yes'. I now have an MP3 player with a builtin speaker, that also has all the functions generally associated with a smart phone (see above). I'm <i>really</i> happy with it. It cost me <a href="http://www.amazon.com/Palm-Pixi-Plus-Verizon-Screen/dp/B004IPAC10">$40 shipped in 2 days</a> (I do have Amazon Prime, so your total cost may be a bit more). The only two issues at the moment are:<br />
<ul><li>Getting it plugged in for charge is a bit of a pain, and it seems likely that the thing that will eventually fail is that connection, because it's not 100% trivial (and remember, toddlers/young kids are using it). There is, however, a cheap solution for that. I've not purchased it yet, so I'm not 100% certain, but it looks like I can get a <a href="http://www.amazon.com/Palm-Touchstone-Charging-Dock-Pixi/dp/B002CMEIWK/ref=pd_sbs_misc_1">Touchstone Palm Pixi Charging Dock</a> and <a href="http://www.amazon.com/Palm-3452WW-Pixi-Touchstone-Cover/dp/B002UHKPGU/ref=sr_1_1?s=wireless&ie=UTF8&qid=1325726703&sr=1-1">Palm Pixi Touchstone Cover</a> shipped for under $10.</li>
<li>The music player stops somewhere after 2 hours or so when its on repeat of a single song. Obviously this is not an issue for many people, but it affects my use case.</li>
</ul><br />
<p>In summary, if you think an 8GB mp3 player with GPS, speaker, and camera would be a generally cool toy for $40, then buy one today. I'm really happy with this one. I'll try to write another article soon describing how to use Meta Doctor to flash it with activation disabled and then run the first-use program to set up a profile.<br />
</P>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com41tag:blogger.com,1999:blog-8612900155488851021.post-40825139559220218912011-11-17T10:27:00.000-08:002011-11-17T10:27:18.449-08:00Chuck sightings<div class="separator" style="clear: both; text-align: center;"><b>Chuck loves the Red Wings!</b><br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbG2aOpxvcLgSnWzp3y2fLk8QOYBzBbzT4XAknzqbbT_RkIfcmWvqWDVI11AcCG2COW18kE5ECP-Mo6bM6LVtyaNV_PIdf2z4lYrVKUp7yTRywKLU_Z9wa03zDFzaQRhBUKh1TdL-d9kE/s1600/chuckStanleyCup2.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="240" width="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbG2aOpxvcLgSnWzp3y2fLk8QOYBzBbzT4XAknzqbbT_RkIfcmWvqWDVI11AcCG2COW18kE5ECP-Mo6bM6LVtyaNV_PIdf2z4lYrVKUp7yTRywKLU_Z9wa03zDFzaQRhBUKh1TdL-d9kE/s320/chuckStanleyCup2.jpg" /></a></div><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNzytG1C1FaBXVtauAz01q_75l2gj0x3nqgLQzgXZ_y_5oPZelapX_OJxeq44V6NnjxY-ryNCm1v1HtgkKA-zH-kQvOiZLXGIorkelOQEDag5OA35VKw6Llx91HLE6SaHq3i4EZE-U0sI/s1600/chuckRedWingsBench.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="240" width="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNzytG1C1FaBXVtauAz01q_75l2gj0x3nqgLQzgXZ_y_5oPZelapX_OJxeq44V6NnjxY-ryNCm1v1HtgkKA-zH-kQvOiZLXGIorkelOQEDag5OA35VKw6Llx91HLE6SaHq3i4EZE-U0sI/s320/chuckRedWingsBench.jpg" /></a></div><br />
<br />
<div class="separator" style="clear: both; text-align: center;"><b>Chuck has a 4 year old</b><br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrCjJLEcr2IRg5iieNetakK74MVHNK4BCdu9UvwKzxoqPQ1Eh5JGYRDy4R7efZHS5R32lmVNqNFYef0T7glfnmT5CaECUK1Hc2CXN2Zvj0SnGd_A14_RDCM16RmZ8tzo-hgvvN1heZikA/s1600/chuckSesameStreet.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="128" width="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrCjJLEcr2IRg5iieNetakK74MVHNK4BCdu9UvwKzxoqPQ1Eh5JGYRDy4R7efZHS5R32lmVNqNFYef0T7glfnmT5CaECUK1Hc2CXN2Zvj0SnGd_A14_RDCM16RmZ8tzo-hgvvN1heZikA/s320/chuckSesameStreet.jpg" /></a></div>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-17207275753627445452011-08-29T09:49:00.000-07:002011-08-29T09:49:17.605-07:00RFC: anyone using vmdk or ovf files from cloud-images?Friday I posted the following on <a href="https://groups.google.com/group/ec2ubuntu/browse_frm/thread/69d7a94e32642b1b">ec2ubuntu</a> and <a href="https://lists.ubuntu.com/archives/ubuntu-cloud/2011-August/thread.html#655">ubuntu-cloud</a> mailing lists. I'm posting here in an effort to reach a larger audience.<br />
<br />
If you are using the OVF files or VMDK files, please respond.<br />
<br />
<pre>Date: Fri, 26 Aug 2011 16:34:41
From: Scott Moser
Subject: Anyone using the .ovf and/or .vmdk files on cloud-images?
Hey all,
Is anyone using the .vmdk or .ovf files on
http://cloud-images.ubuntu.com [1] ?
In the 11.04 cycle, I started building .ovf files with corresponding
.vmdk images. The goal of this was to make Ubuntu available for use via
software that supported importing OVF.
I chose to to create the disk as VMDK formated images rather than a
more open-source friendly format such as qcow. The OVF file and
associated disk image is consumable by both vmware tools and by
VirtualBox. There are no OVF consumers that I'm aware of that would
function with a qcow (or even 'raw') disk image. I feel compelled to also
mention that this format of vmdk (compressed) is not supported by qemu-img
or kvm.
Since having an OVF that could not be used by any software is not
significantly more useful than not having an OVF, we went with vmdk.
Largely prompted by the interest in providing a download format that is
ideal for OpenStack users [2], I am re-considering my decision. I would
like to avoid adding another full disk image format to the output of the
build scripts. As a result I'm thinking about replacing the .vmdk images
with compressed qcow images and updating the OVF files to reference those.
The reason behind not wanting to just add yet another deliverable is
that more downloads are confusing, and disk space is not necessarily free.
If I can drop a deliverable not used by anyone, then I'd like to do that.
So, would anyone object to the removal of .vmdk files from
cloud-images? Is anyone using these that could not just as easily use
qcow2 formated images?
Thanks,
Scott
--
[1] http://cloud-images.ubuntu.com/server/oneiric/current/
[2] https://bugs.launchpad.net/ubuntu/+bug/833265
</pre>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-52292762315909796252011-08-09T08:46:00.000-07:002011-08-09T17:24:04.390-07:00Amazon issues with EBS affect Ubuntu images in the EU-WEST region<p><i>Note: This blog post has been updated in-place.<br />
<br />
We have received information from Amazon that the EBS snapshots for our released 10.04 images from 20110719 were <b>not affected</B> (ami-5c417128 and ami-52417126). <strike>It seems that an api issue incorrectly marked them as such</strike>. It was an error in our logic that associated snapshot-ids with amis that gave us the incorrect output. The only Ubuntu images that were affected were old daily builds and milestone releases. If you are interested in reading the original message, please do so on the <a href="https://lists.ubuntu.com/archives/ubuntu-cloud-announce/2011-August/thread.html#0">Ubuntu cloud-announce mailing list archives</a>.<br />
</i><br />
</p><p>This morning we received an automated email[1] from Amazon informing us of possible loss of data in EBS snapshots in the EU-WEST-1 region. Our engineering team immediately started an assessment of the damage this might have caused to the EBS images that we publish for our users. We are working with Amazon to remediate customer impact and prevent any future outages.</p><p>A number of non-current daily builds and old alpha or beta images have been affected, but we hope that no one was using these images in production; we are not planning corrective actions for these images. You can see the full list of affected AMIs at <a href="http://paste.ubuntu.com/662210">http://paste.ubuntu.com/662210/</a>.</p><p>To have these announcements sent directly to your email, please subscribe to our ubuntu-cloud-announce mailing list at <a href="https://lists.ubuntu.com/mailman/listinfo/ubuntu-cloud-announce">https://lists.ubuntu.com/mailman/listinfo/ubuntu-cloud-announce</a>.</p>Our support services are available to help customers of the Ubuntu Advantage Cloud Guest program. Details about this program can be found at <a href="http://www.canonical.com/enterprise-services/ubuntu-advantage/cloud">http://www.canonical.com/enterprise-services/ubuntu-advantage/cloud</a>.<br />
<br />
[1] Email received from Amazon on Aug 9 2011 at 9:11 UTC<br />
<br />
<i><br />
Hello,<br />
<br />
We've discovered an error in the Amazon EBS software that cleans up unused snapshots. This has affected at least one of your snapshots in the EU-West Region. <br />
<br />
During a recent run of this EBS software in the EU-West Region, one or more blocks in a number of EBS snapshots were incorrectly deleted. The root cause was a software error that caused the snapshot references to a subset of blocks to be missed during the reference counting process. This process compares the blocks scheduled for deletion to the blocks referenced in customer snapshots. As a result of the software error, the EBS snapshot management system in the EU-West Region incorrectly thought some of the blocks were no longer being used and deleted them. We've addressed the error in the EBS snapshot system to prevent it from recurring.<br />
<br />
We have now disabled all of your snapshots that contain these missing blocks. You can determine which of your snapshots were affected via the AWS Management Console or the DescribeSnapshots API call. The status for any affected snapshots will be shown as "error."<br />
<br />
We have created copies of your affected snapshots where we've replaced the missing blocks with empty blocks. You can create a new volume from these snapshot copies and run a recovery tool on it (e.g. a file system recovery tool like fsck); in some cases this may restore normal volume operation. These snapshots can be identified via the snapshot Description field which you can see on the AWS Management Console or via the DescribeSnapshots API call. The Description field contains "Recovery Snapshot snap-xxxx" where snap-xxx is the id of the affected snapshot. Alternately, if you have any older or more recent snapshots that were unaffected, you will be able to create a volume from those snapshots without error. For additional questions, you may open a case in our Support Center: https://aws.amazon.com/support/createCase<br />
<br />
We apologize for any potential impact this might have on your applications.<br />
<br />
Sincerely,<br />
AWS Developer Support<br />
<br />
This message was produced and distributed by Amazon Web Services LLC, 410 Terry Avenue North, Seattle, Washington 98109-5210<br />
</i><br />
Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-14782379346617934322011-07-29T17:22:00.000-07:002011-07-29T17:22:52.630-07:00How to find the right Ubuntu AMI with toolsA while ago, I gave instructions on how you could <a href="/2011/07/how-to-find-right-ubuntu-ami-on-ec2.html">find the right Ubuntu AMI</a>. I promised to write about how you could accomplish this programmatically.<br />
<br />
When we publish Ubuntu AMIs, we simultaneously publish machine consumable data to <a href="https://cloud-images.ubuntu.com/query/">https://cloud-images.ubuntu.com/query/</a>. The data there contains information so that you can:<br />
<ul><li>Find the latest ami of a given type (hvm/ebs/instance-store), arch, and region.</li>
<li>Download the pristine image files</li>
</ul><br />
I think the format of the data is generally discernible, but there is some more information on the <a href="https://help.ubuntu.com/community/UEC/Images#Machine%20Consumable%20UEC%20Images%20Availability%20Data">Ubuntu Wiki</a>.<br />
<br />
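To make that format concrete, here is a small offline sketch. The column positions (root-store in column 5, arch in 6, region in 7, AMI id in 8) are my reading of the released.current.txt files, and the sample line uses a placeholder AMI id — verify the layout against the wiki page before relying on it:

```shell
#!/bin/sh
# Sketch: filter query data (e.g. query/lucid/server/released.current.txt)
# for an AMI id. The column positions and the sample line below are
# assumptions from my reading of the files; ami-00000000 is a placeholder.
filter_ami() {
  # $1 = arch, $2 = region; keep only EBS-root entries
  awk -F'\t' -v arch="$1" -v region="$2" \
    '$5 == "ebs" && $6 == arch && $7 == region { print $8 }'
}

# A real run would pipe in the file, e.g.:
#   wget -qO- https://cloud-images.ubuntu.com/query/lucid/server/released.current.txt |
#     filter_ami amd64 us-east-1
printf 'lucid\tserver\trelease\t20110201.1\tebs\tamd64\tus-east-1\tami-00000000\n' |
  filter_ami amd64 us-east-1
```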
I've put an <a href="https://gist.github.com/1100458">example client</a> together. Here is some example usage:<br />
<ul><li><b>Launch the latest released image in us-east-1</b></li><BR>
<code>
$ euca-run-instances --instance-type t1.micro --key mykey $(ubuntu-ami)
</code>
<BR>
<li><b>Open the Amazon EC2 console to launch the latest Oneiric daily build</b></li><BR>
You can now <a href="http://aws.typepad.com/aws/2011/04/aws-management-console-bookmarking.html">directly link</a> to launching an image in Amazon EC2 console, combine that with this tool to open your browser to the right page.<BR>
<code>
$ ami=$(ubuntu-ami us-west-1 oneiric daily i386)<BR>
$ gnome-open https://console.aws.amazon.com/ec2/home?region=us-west-1#launchAmi=${ami}
</code>
<BR>
<li><b>Download and extract the latest tarball for lucid</b></li>
Here, 'pubname' is the recommended "publish name" of this AMI, which happens to correspond to the basename of the name on EC2, and "url" is a fully qualified url to http://cloud-images.ubuntu.com .<BR>
<code>
$ wget $(ubuntu-ami -f "%{url} -O %{pubname}.tar.gz")<BR>
$ <a href="http://manpages.ubuntu.com/manpages/lucid/man1/uec-publish-tarball.1.html">uec-publish-tarball</a> *.tar.gz my-ubuntu-images
</code>
</ul><br />
I don't think I'll get this into 11.10, but I'd like to have something with this function into 12.04, and support launching AMIs directly through it for ease of use. I'd love to hear input on what you'd like a "ubuntu-run-instance" command to look like and do.Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com4tag:blogger.com,1999:blog-8612900155488851021.post-53671521461595172722011-07-25T08:36:00.000-07:002011-07-25T08:36:10.715-07:00Updated AWS tools PPA for UbuntuI thought I would post a quick entry to spread the word about a ppa I've been maintaining with up to date versions of some of the AWS tools. It is named simply awstools. You can find it <a href="https://launchpad.net/~awstools-dev/+archive/awstools">here</a>.<br />
<br />
Right now the ppa has the following packages:<br />
<ul><li><a href="http://aws.amazon.com/developertools/351">ec2-api-tools</a> : Amazon's EC2 command line tools</li>
<li><a href="http://aws.amazon.com/developertools/368">ec2-ami-tools</a> : Amazon's EC2 AMI tools (rebundling and uploading images)</li>
<li><a href="http://aws.amazon.com/developertools/AWS-Identity-and-Access-Management/4143">iamcli </a>: Identity Access Management (IAM) Command Line Toolkit </li>
<li><a href="http://aws.amazon.com/developertools/2928">rdscli</a> : Command Line Toolkit for the Amazon Relational Database Service</li>
</ul><br />
To add this repository, it's as easy as:<br />
<code><br />
$ sudo apt-add-repository ppa:awstools-dev/awstools<br />
$ sudo apt-get update<br />
</code><br />
<br />
Then, to install the newest available version of ec2-api-tools, do:<br />
<code><br />
$ sudo apt-get install ec2-api-tools<br />
</code><br />
<br />
I hope that is helpful.Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com14tag:blogger.com,1999:blog-8612900155488851021.post-11225568049161755072011-07-18T07:01:00.000-07:002011-07-18T07:01:19.630-07:00How to find the right Ubuntu AMI on EC2For anyone getting started on EC2, the first obstacle they're faced with is selecting an AMI (Amazon Machine Image). If you're trying to find an Ubuntu image either via the Amazon Console or via the output of ec2-describe-images, you're likely to be overwhelmed. The success of Ubuntu as a platform and Ubuntu's commitment to refreshing AMIs means that there are literally thousands of images on Amazon EC2 with "ubuntu" in their name. That, combined with the lack of Ubuntu on the "Quick Start" menu, makes this a non-trivial task.<br />
<br />
The purpose of this post is to document how you can easily, quickly and safely find the Official Ubuntu AMIs on EC2 via the Amazon EC2 console or via your web browser.<br />
<br />
<h3>Some General Ubuntu Information</h3>You already may be aware of these items, but I want to point them out for those who are just getting started with Ubuntu or EC2.<br />
<ul><li>Ubuntu releases every 6 months. Each release has a version number and a codename. The most important thing to note here is that every 2 years a LTS (Long Term Support) release is made. If you want stability and support for 5 years, select an LTS release. If you want the newest packages, select the most recent release. See the <a href="http://en.wikipedia.org/wiki/List_of_Ubuntu_releases">wikipedia entry</a> for more information.<br />
<li>At the time of this writing, there are 5 "regions" in Amazon EC2. Each region represents a geographical location. Each region has its own AMI ids. Inside each region there are 2 architectures (x86_64, i386) and 2 "root store" types (EBS or instance). That means that for each build Ubuntu releases, we generate 20 ami ids.</li><br />
<br />
<br />
</ul><h3>Easiest: Find AMIs From Your Web Browser</h3>You can choose your interface for selecting images. Go to either: <ul><li><a href="http://cloud.ubuntu.com/ami">http://cloud.ubuntu.com/ami</a></li>
At the bottom of this page, you can select the region, release, arch or root-store. You're only shown the most recent releases here. When you've made your selection, you can copy and paste the ami number, or just click on it to go right to the EC2 console launch page for that AMI. or
<li> <a href="https://cloud-images.ubuntu.com/server/releases/">https://cloud-images.ubuntu.com/server/releases/</a></li>
<ul><li>Select Your release by number or code-name</li>
<li>Select 'release/': We keep historical builds around for debugging, but the 'release/' directory will always be the latest.</li>
<li>Select your AMI from the table and click to launch in the console or copy and paste a command line.</li>
</ul></ul><h3>Search through the Amazon EC2 Console</h3>The <a href="https://console.aws.amazon.com/ec2/home">EC2 Console</a> is a graphical way to sort through AMIs and select one to launch. To Launch an Official Ubuntu Image here, follow the steps below. <ul><li>Select the region you want in the top left, under 'Navigation'</li>
Example: "Us East (Virginia)"
<li>Click "AMIs"</li>
Do not click "Launch Instance"; see the note below.
<li>For 'Viewing', select "All Images"</li>
<li>Limit the results to Ubuntu Stable Release images by typing <b>ubuntu-images/</B></li>
You should expand the 'AMI Name' field as wide as possible (maybe shrink the others).
<li>Limit the results to a specific release by appending '.*<release>'.</li>
For example: <b>ubuntu-images/.*10.04</B>
<li>Limit the results to a given arch by appending '.*i386' or '.*amd64'</li>
Note: If you want to run a m1.small or c1.medium, you need 'i386'. If you want to run a t1.micro, you will need to select an 'ebs' image.
<li>Sort your results by AMI Name and make selection</li>
By sorting by AMI name, you can more easily see the newest AMI for a given set. Each AMI ends with a number in the format YYYYMMDD (year,month,day). You want the most recent one.
<li><b>Verify the Owner is 099720109477</b>!<br />
Any user can register an AMI under any name. Nothing prevents a malicious user from registering an AMI that would match the search above. So, in order to be safe, you need to verify that the owner of the ami is '099720109477'.<br />
If "Owner" is not a column for you, click "Show/Hide" at the top right and select "Owner" to be shown.<br />
<li>Click on the AMI name, then Click 'Launch'</li><br />
<br />
<br />
</ul>
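The filtering and "newest date wins" steps above can also be scripted. Here is an offline sketch: the name/owner pairs are made-up sample data in the real naming format, standing in for fields you might extract from ec2-describe-images output:

```shell
#!/bin/sh
# Sketch: given "ami-name owner" pairs, keep only AMIs owned by Canonical's
# account 099720109477 and pick the newest by the trailing YYYYMMDD serial
# (lexical sort works because the names differ only in the date suffix).
pick_newest() {
  awk '$2 == "099720109477" { print $1 }' | sort | tail -n 1
}

# Made-up sample data: the second line has the right name pattern but the
# wrong owner, so it is rejected -- exactly the safety check described above.
pick_newest <<'EOF'
ubuntu-images/ubuntu-lucid-10.04-i386-server-20110201 099720109477
ubuntu-images/ubuntu-lucid-10.04-i386-server-20110930 012345678901
ubuntu-images/ubuntu-lucid-10.04-i386-server-20110719 099720109477
EOF
```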
<h3>Notes</h3><ul><li><b>HTTPS Access</b></li>
Of the options above, right now https://cloud-images.ubuntu.com/server/releases/ is the only one that provides data over https. This may be important to you if you are concerned about potential "Man in the Middle" attacks when finding an AMI id. I've requested <a href="http://foss-boss.blogspot.com">Ahmed</a> [kim0 in irc] to support https access to https://cloud.ubuntu.com/ami.
<li><b>Web Console 'Launch Instance' dialog</b></li>I saw no way in the 'Launch Instance' dialog to see the Owner ID. Because of this, I suggest not using that dialog to find "Community AMIs". There is simply no way you can reliably know who the owner of the image is from within the console. For advanced users, I will blog sometime soon on a way to find AMIs programmatically [<a href="https://cloud-images.ubuntu.com/query">Hint</a>].
</ul>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com2tag:blogger.com,1999:blog-8612900155488851021.post-64479322969736186972011-07-15T14:04:00.000-07:002011-07-15T14:04:38.616-07:00Getting a larger root volume on a cluster compute instanceAt this point, all of the Ubuntu EBS root images are 8GB. This is nice and small. I previously covered how you could increase the size of those disks.<br />
<br />
On Cluster Compute instances, it's a bit more difficult. The cluster compute instances have their root filesystem in a partition on the attached disk. That's all well and good, and most likely a partitioned disk is more familiar to you than one that is not partitioned. The cluster compute images have grub2 installed in the MBR of that disk.<br />
<br />
The problem with the partition on the disk is that you can no longer simply launch the instance with a larger root volume and then 'resize2fs /dev/sda1'. This is because the kernel won't re-read the partition table of a disk until all of its partitions are unmounted. For the disk that holds your root partition, that means you basically have to reboot after a change.<br />
<br />
To avoid that waste of precious time, we've included a utility called 'growpart' inside of the initramfs on the Ubuntu images. It is invoked by the 'cloud-initramfs-growroot' package. This code runs before the root filesystem is busy, so the request for the kernel to re-read the partition table will work without requiring a reboot. To try it out, do:<br />
<br />
<code><br />
# us-east-1 ami-1cad5275 hvm/ubuntu-natty-11.04-amd64-server-20110426<br />
$ ec2-run-instances --region us-east-1 --instance-type cc1.4xlarge \<br />
--block-device-mapping /dev/sda1=:20 ami-1cad5275<br />
</code><br />
<br />
When you get to the instance, you'll have a 20G filesystem on /. And, if you're interested enough to look in the console output, you'll see something like:<br />
<code><br />
GROWROOT: CHANGED: partition=1 start=16065 old: size=16755795 end=16771860 new: size=41913585,end=41929650<br />
</code>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-81070227125933470532011-03-02T13:43:00.000-08:002011-03-02T16:10:40.827-08:00start a 11.04 instance with a larger root filesystemIn order to fit inside the Amazon "Free Tier", Ubuntu made the decision to change its root volume size from 15G to 8G. That decision was made for all refreshed EBS root images. So, our current set of 10.04, 10.10 and 11.04 images have 8G root filesystems.<br />
<p>If you find that space somewhat limiting, it is easy to give yourself a larger root volume at instance creation time.</p><br />
<h3>Launch the instance with appropriate block-device-mapping arguments</h3><code><br />
$ ec2-run-instances $AMI --key mykey --block-device-mapping /dev/sda1=:20<br />
</code><br />
<p>That will create you an instance with a 20G root volume. <i>However</i> the filesystem on that volume will still only occupy 8G of the space. Essentially, you'd have 12G of unused volume at the end of the disk.</p><br />
<h3>Resize the root volume</h3><p>If you've launched an 11.04 based image newer than alpha-2, <b>this step is not necessary</b>. Cloud-init will do it for you. It is just assumed that you want your root filesystem to fill all space on its partition. I honestly cannot think of a reason why you would not want that.</p><p>Now, if you are using 10.04 or 10.10 images, you can resize your root volume easily enough after the boot. Just log in, and issue:</p><code><br />
$ sudo resize2fs /dev/sda1<br />
</code><br />
That operation should take only a few seconds or less, and you'll then have all the space you need.Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com5tag:blogger.com,1999:blog-8612900155488851021.post-90128434276285837542011-02-03T18:59:00.000-08:002011-06-28T07:14:57.132-07:00Migrating to pv-grub kernels for kernel upgrades<p>After the release of the Ubuntu 10.04 LTS images that use the pv-grub kernel, there were some questions on what to do if you're running from an older AMI and want to take advantage of what pv-grub offers.</p><p>Heres a short list of how you might be affected:</p><ul><li>If you are running an EBS root instance launched from or rebundled from an Official Ubuntu image of 10.04 LTS (Lucid Lynx) released 20110201.1 or later, 10.10 (Maverick Meerkat), Natty Narwhal, or later, then you do not need to do anything. You already are using pv-grub, and can simply apply software updates and reboot to get new kernel updates. You can stop reading now.</li>
<li>If you are running an instance-store instance that is not using a pv-grub kernel, there is nothing you can do. There is simply no way to change the kernel of an instance store instance.</li>
<li>If you are running an EBS-root based instance rebundled from a Ubuntu 9.10 (Karmic Koala) or older, then there is currently no supported path to getting kernel upgrades. There were no officially released EBS-root based Ubuntu images of 9.10, and with Karmic's end of life coming in April, there is not likely to be support for this new feature.</li>
<li>If you are running an EBS-root instance launched from or rebundled from an official Ubuntu release of 10.04, read on.</li>
</ul><br />
<p>Updating a 10.04 based image basically entails 2 steps, setting up /boot/grub/menu.lst, and then modifying your instance to have a pv-grub kernel.</p><br />
<h3>Step 1: installing grub-legacy-ec2.</h3><p>If you launched or rebundled your instance from an Ubuntu 10.04 numbered <a href="http://uec-images.ubuntu.com/releases/10.04/release-20101020/">20101020</a> or earlier, you need to do this step. If you started from a release of <a href="http://uec-images.ubuntu.com/releases/10.04/release-20101020/">20101228</a> you can skip this step.</p><ul><li>Apply software updates.<br />
<br />
Depending on how out of date you are, this might take a while.<br />
<br />
<code><br />
sudo apt-get update && sudo apt-get dist-upgrade<br />
</code><br />
</li>
<li>Install grub-legacy-ec2<br />
<br />
<p>The 'grub-legacy-ec2' package is what the images use to manage /boot/grub/menu.lst. If you had used Ubuntu prior to the default selection of grub2, you will be familiar with how it works. grub-legacy-ec2 is basically just the menu.lst managing portion of the Ubuntu grub 0.97 package with some EC2 specifics thrown in.</p><p>To get a functional /boot/grub/menu.lst, all you have to do is:</p><code><br />
sudo apt-get install grub-legacy-ec2<br />
</code><br />
</li>
</ul><br />
<h3>Step 2: modifying the instance to use pv-grub kernels</h3>Now, your images should have a functional /boot/grub/menu.lst, and grub-legacy-ec2 should be properly installed such that future kernels will get automatically added and selected on reboot. However, you have to change your instance to boot using pv-grub rather than the old kernel aki that you originally started with.<br />
<ul><li>Shut down the instance<br />
<br />
The best way to do this is probably to just issue '/sbin/poweroff' inside the instance. Alternatively, you could use the ec2 api tools, or do so from the AWS console.<br />
<code><br />
% sudo /sbin/poweroff<br />
</code><br />
</li>
<li>Modify the instance's kernel to be a pv-grub kernel<br />
<br />
Once the instance is moved to "stopped" state, you can modify its kernel to be a pv-grub kernel. The kernel you select depends on the arch and region. See the table below for selecting which you should use:<br />
<table><tr><th>region</th><th>arch</th><th>aki id</th></tr>
<tr><td>ap-southeast-1</td><td>x86_64</td><td>aki-11d5aa43</td></tr>
<tr><td>ap-southeast-1</td><td>i386</td><td>aki-13d5aa41</td></tr>
<tr><td>eu-west-1</td><td>x86_64</td><td>aki-4feec43b</td></tr>
<tr><td>eu-west-1</td><td>i386</td><td>aki-4deec439</td></tr>
<tr><td>us-east-1</td><td>x86_64</td><td>aki-427d952b</td></tr>
<tr><td>us-east-1</td><td>i386</td><td>aki-407d9529</td></tr>
<tr><td>us-west-1</td><td>x86_64</td><td>aki-9ba0f1de</td></tr>
<tr><td>us-west-1</td><td>i386</td><td>aki-99a0f1dc</td></tr>
</table><br />
Then, assuming $AKI represents the appropriate aki from the table above, $IID represents your instance id, and $REGION represents your region, you can update the instance and then start it with:<br />
<code><br />
$ ec2-modify-instance-attribute --region ${REGION} --kernel ${AKI} ${IID} <br />
$ ec2-start-instances --region ${REGION} ${IID}<br />
</code><br />
</li>
</ul><br />
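Picking the right AKI by hand is error-prone, so if you script this step, the table above can be encoded in a small helper. This sketch simply restates the table — double-check the ids against it before use:

```shell
#!/bin/sh
# Sketch: look up the pv-grub AKI for a region/arch pair.
# The ids are copied from the table above; verify them before use.
pvgrub_aki() {
  case "$1/$2" in
    ap-southeast-1/x86_64) echo aki-11d5aa43 ;;
    ap-southeast-1/i386)   echo aki-13d5aa41 ;;
    eu-west-1/x86_64)      echo aki-4feec43b ;;
    eu-west-1/i386)        echo aki-4deec439 ;;
    us-east-1/x86_64)      echo aki-427d952b ;;
    us-east-1/i386)        echo aki-407d9529 ;;
    us-west-1/x86_64)      echo aki-9ba0f1de ;;
    us-west-1/i386)        echo aki-99a0f1dc ;;
    *) echo "unknown region/arch: $1/$2" >&2; return 1 ;;
  esac
}

pvgrub_aki us-east-1 x86_64
```

With that, the modify step could become something like `ec2-modify-instance-attribute --region ${REGION} --kernel $(pvgrub_aki ${REGION} x86_64) ${IID}`.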
<p>Your instance will start with a new hostname/IP address, so get that out of describe-instances and ssh to your instance. You can check that it has worked by looking at /proc/cmdline. Your kernel command line should look something like this:</p><code><br />
$ cat /proc/cmdline <br />
root=UUID=7233f657-c156-48fe-8d60-31ae6400d0cf ro console=hvc0 <br />
</code><br />
<br />
<p>In the future, your instance will now behave much more like a "normal server". If you apply software updates (apt-get dist-upgrade) and reboot, you'll boot into a fresh new kernel.</p>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com12tag:blogger.com,1999:blog-8612900155488851021.post-91254659722522750272011-02-03T11:34:00.000-08:002011-02-03T11:34:29.321-08:00Getting ephemeral devices on EBS images<p>When you launch an instance in EC2, be it EBS root or instance-store, you are "entitled" to some amount of ephemeral storage. Ephemeral storage is basically just extra local disk that lives in the host machine where your instance will run. How much instance-store disk you are entitled to is described with the <a HREF="http://aws.amazon.com/ec2/instance-types/">instance size descriptions</a>.</p><p>The described amount of ephemeral storage is allocated to all instance-store instances. However, at register time of EBS-root instances the default "block-device-mapping" is set. Because an EBS-root AMI may be run with different instance-types, it is impossible to set the correct mapping for all scenarios. For the Ubuntu images, the default mapping only includes ephemeral0, which is present in all but t1.micro sizes.</p><p>So, if you want to "get all that you're paying for", you'll have to launch instances with different '--block-device-mapping' arguments to ec2-run-instances (or euca-run-instances). I've written a little utility named <a href="https://gist.github.com/809587">bdmapping</a> to make that easier, and hold the knowledge.</p><p>You can use it to show the mappings for a given instance type:</p><code><br />
$ ./bdmapping<br />
--block-device-mapping=sdb=ephemeral0 --block-device-mapping=sdc=ephemeral1<br />
</code><br />
<br />
Or, use it with command substitution when you launch an instance:<br />
<code><br />
$ ec2-run-instances $(bdmapping -c m1.large) --key mykey ami-c692ec94<br />
#<br />
# the above is the same as running:<br />
# ec2-run-instances --block-device-mapping=sdb=ephemeral0 \<br />
# --block-device-mapping=sdc=ephemeral1 --instance-type=m1.large \<br />
# --key mykey ami-c692ec94<br />
</code><br />
<br />
<p>You can view or download 'bdmapping' at <a href="https://gist.github.com/809587">https://gist.github.com/809587</a></p><br />
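For reference, the core of bdmapping condenses to a single case statement. This is a simplified sketch of the gist's logic (instance types and ephemeral counts as of the 2011 instance-type lineup; consult the gist for the authoritative version):

```shell
#!/bin/sh
# Condensed sketch of bdmapping's logic: emit --block-device-mapping flags
# for the ephemeral disks of an instance type. Simplified from the gist.
bdmaps() {
  local devs="" out="" d
  case "$1" in
    t1.micro) devs="" ;;                        # no ephemeral store at all
    m1.small|c1.medium|m2.xlarge) devs="" ;;    # first ephemeral attached by default
    m1.large|m2.2xlarge|cc1.*|cg1.*)
      devs="sdb=ephemeral0 sdc=ephemeral1" ;;
    m1.xlarge|m2.4xlarge|c1.xlarge)
      devs="sdb=ephemeral0 sdc=ephemeral1 sdd=ephemeral2 sde=ephemeral3" ;;
    *) echo "unknown instance type: $1" >&2; return 1 ;;
  esac
  for d in $devs; do out="$out --block-device-mapping=$d"; done
  echo "${out# }"
}

# prints the two ephemeral mapping flags for m1.large
bdmaps m1.large
```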
<!---
<code><br />
#!/bin/sh<br />
<br />
Usage() {<br />
cat <<EOF<br />
${0##*/} [-c] instance-type<br />
<br />
print ephemeral store block-device-mapping arguments for 'instance-type'<br />
If '-c' given, also print "--instance-type <instance-type>"<br />
<br />
examples:<br />
* ${0##*/} m1.large<br />
--block-device-mapping=sdb=ephemeral0 --block-device-mapping=sdc=ephemeral1<br />
* ec2-run-instances \$(${0##*/} -c m1.xlarge) --key mykey ami-c692ec94<br />
is the same as running:<br />
ec2-run-instances --block-device-mapping=sdb=ephemeral0 \\<br />
--block-device-mapping=sdc=ephemeral1 --instance-type=m1.large \\<br />
--key mykey ami-c692ec94<br />
EOF<br />
}<br />
<br />
[ "$1" = "-h" -o "$1" = "--help" ] && { Usage; exit 0; }<br />
print_type=0<br />
[ "$1" = "-c" ] && { print_type=1; shift; }<br />
[ $# -eq 1 ] || { Usage 1>&2; exit 1; }<br />
itype=${1}<br />
<br />
# data cleaned from from http://aws.amazon.com/ec2/instance-types/<br />
# t1.micro NONE # m2.2xlarge 850 # c1.xlarge 1690<br />
# m1.small 160 # m1.large 850 # m1.xlarge 1690<br />
# c1.medium 350 # cc1.4xlarge 1690 # cc1.4xlarge 1690<br />
# m2.xlarge 420 # m2.4xlarge 1690 # cg1.4xlarge 1690<br />
bdmaps=""<br />
ba="--block-device-mapping="<br />
case "${itype}" in<br />
t1.micro) bdmaps="";; # there is no ephemeral store on t1.micro<br />
m1.small|c1.medium)<br />
bdmaps="";; # the first on i386 always attached. sda2=ephemeral0<br />
m2.xlarge) bdmaps="";; # one 420 for m2.xlarge<br />
m1.large|m2.2xlarge|cg1.*|cc1.*)<br />
bdmaps="${ba}sdb=ephemeral0 ${ba}sdc=ephemeral1";;<br />
m1.xlarge|m2.4xlarge|c1.xlarge)<br />
bdmaps="${ba}sdb=ephemeral0 ${ba}sdc=ephemeral1"<br />
bdmaps="${bdmaps} ${ba}sdd=ephemeral2 ${ba}sde=ephemeral3";;<br />
*) echo "unknown instance type $itype" 1>&2; exit 1;;<br />
esac<br />
[ ${print_type} -eq 0 ] && echo "${bdmaps}" ||<br />
echo "${bdmaps} --instance-type=${itype}"<br />
exit 0<br />
# vi: ts=4 noexpandtab<br />
</code><br />
-->Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com4tag:blogger.com,1999:blog-8612900155488851021.post-29226385157522819342011-02-03T08:03:00.000-08:002011-02-03T08:08:23.217-08:00New Ubuntu 10.04 LTS images with pv-grub support<p>Yesterday I <a href="https://lists.ubuntu.com/archives/ubuntu-cloud/2011-February/000517.html"> announced</a> the availability of updated Ubuntu images on EC2 for 10.04 LTS (Lucid Lynx).</p><p>This post is mainly to mirror that one to a possibly wider audience, but also to go into more detail on the key change in these images:</p><blockquote><b>You can now upgrade the kernel inside your 10.04 based EC2 image by running 'apt-get update && apt-get dist-upgrade && reboot'!</b></blockquote><p>Now, for a little more detail on that.</p><p>Around the time that 10.04 was released, or shortly thereafter, Amazon released a new feature of EC2 as <a href="http://aws.typepad.com/aws/2010/07/use-your-own-kernel-with-amazon-ec2.html">"Use Your Own Kernel with Amazon EC2"</a>. What this really meant was that there were now "kernels" on EC2 that were really versions of grub. By selecting this kernel, and providing a /boot/grub/menu.lst inside your image, the instance would be in control of the kernel booted. Previously, the loading of a kernel was done by the xen hypervisor, and could not be changed at all for an instance-store image, and only non-trivially for an EBS-root instance.</p><p>Yes, you read that right, midway through the year of 2010 you were able to change the kernel that you were running in an EC2 instance with a software update and a reboot.</p><p>We took advantage of this support in our 10.10 images, but for many people, only LTS (Long Term Support) releases are interesting. 
So, to satisfy those people, we've brought the function into our 10.04 images.</p><p>If you're using our images, or have rebundled an image starting from them, I *strongly* suggest updating to the <a href="http://uec-images.ubuntu.com/releases/lucid/release-20110201.1/">20110201.1 images</a> or anything later. You'll want to do that because</p><ul><li>You can receive kernel upgrades as normal software upgrades to your running instance.</li>
<li>This is by far the most supportable route for upgrade from 10.04 to our next LTS (12.04). If your instance is not using the pv-grub kernel, and you want to upgrade to a newer release, you will have to upgrade, shut down the instance, modify the associated kernel, and start up the instance. That is both more painful and will result in longer downtime.<br />
</ul><p>So, in short, grab our <a href="http://uec-images.ubuntu.com/releases/lucid/release/">new images</a>!</p>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com5tag:blogger.com,1999:blog-8612900155488851021.post-82970632207817058242011-01-11T10:47:00.000-08:002011-01-20T10:13:19.217-08:00Failsafe and manual management of kernels on EC2On 10.10 and later, Ubuntu images use the Amazon provided pv-grub to load kernels that live inside the image. The selected kernel is controlled by /boot/grub/menu.lst. This makes it possible to install a new kernel via 'dpkg -i' or 'apt-get dist-upgrade' and then reboot into the new kernel.<br />
<br />
The file /boot/grub/menu.lst is managed by grub-legacy-ec2 package. The program 'update-grub-legacy-ec2' is called on installation of Ubuntu kernels through files that are installed in /etc/kernel/postinst.d and /etc/kernel/postrm.d.<br />
<br />
By default, as with other Ubuntu systems, the kernel with the highest revision will automatically be selected as the default and booted on the next boot. Because EC2 images are read-only, you may want to manually manage your selected kernel. This can be done by modifying /boot/grub/menu.lst to use the grub "fallback" code.<br />
<br />
I'll launch an instance of the current released maverick (ami-ccf405a5 in us-east-1 ubuntu-maverick-10.10-i386-server-20101225). Then, on the instance, create hard links to the default kernel and ramdisk so even on apt removal, they'll stick around, and then change /boot/grub/menu.lst to use those kernels.<br />
<br />
<code><br />
sudo ln /boot/vmlinuz-$(uname -r) /boot/vmlinuz-failsafe<br />
sudo ln /boot/initrd.img-$(uname -r) /boot/initrd.img-failsafe<br />
</code><br />
<br />
Then, copy the existing entry in /boot/grub/menu.lst to a new entry above the automatic section. I've changed/added:<br />
<br />
<code><br />
# You can specify 'saved' instead of a number. In this case, the default entry<br />
# is the entry saved with the command 'savedefault'.<br />
# WARNING: If you are using dmraid do not use 'savedefault' or your<br />
# array will desync and will not let you boot your system.<br />
default saved<br />
<br />
...<snip>...<br />
<br />
# Put static boot stanzas before and/or after AUTOMAGIC KERNEL LIST<br />
<br />
# this is the failsafe kernel, it will be '0' as it is the first<br />
# entry in this file<br />
title Failsafe kernel<br />
root (hd0)<br />
kernel /boot/vmlinuz-failsafe root=LABEL=uec-rootfs ro console=hvc0 FAILSAFE<br />
initrd /boot/initrd.img-failsafe<br />
savedefault<br />
<br />
title Ubuntu 10.10, kernel 2.6.35-24-virtual<br />
root (hd0)<br />
kernel /boot/vmlinuz-2.6.35-24-virtual root=LABEL=uec-rootfs ro console=hvc0 TEST-KERNEL<br />
initrd /boot/initrd.img-2.6.35-24-virtual<br />
savedefault 0<br />
</code><br />
<br />
Then update grub to record that the first entry is the 'saved' default; for grub 1 (i.e. 0.97), this writes the entry number to /boot/grub/default.<br />
<br />
<code><br />
sudo grub-set-default 0<br />
sudo reboot<br />
</code><br />
<br />
Now, a reboot will boot into the failsafe kernel, which we can verify by checking /proc/cmdline for 'FAILSAFE'. Then, to test our "TEST-KERNEL", run:<br />
<br />
<code><br />
sudo grub-set-default 1<br />
sudo reboot<br />
</code><br />
<br />
After this reboot, the system comes up in "TEST-KERNEL" (per /proc/cmdline), but /boot/grub/default will contain '0', indicating that on the subsequent boot, FAILSAFE will run. This way, if your test kernel failed to boot all the way up, you can simply issue:<br />
<br />
<code><br />
euca-reboot-instances i-15b77779<br />
</code><br />
<br />
And you'll boot back into the FAILSAFE kernel.<br />
<br />
The above basically allows you to manually manage your kernels while letting grub-legacy-ec2 still write entries to /boot/grub/menu.lst.<br />
<br />
I chose to use hardlinks for the 'failsafe' kernels, so that even on dpkg removal, the files would still exist. Because the 10.10 Ubuntu kernels have the EC2 network and disk drivers built in, you'll still be able to boot even after a dpkg removal of the failsafe kernel or an errant 'rm -Rf /lib/modules/2*'Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com0tag:blogger.com,1999:blog-8612900155488851021.post-2019530220726717662011-01-07T07:25:00.000-08:002011-01-07T07:29:12.301-08:00Using euca2ools rather than ec2-api-tools with EC2The <a href="http://uec-images.ubuntu.com/">Ubuntu UEC Images</a> that Ubuntu produces on EC2 are in every way fully supported, "Official Ubuntu". As with other official releases, access to source code for security and maintenance reasons affects our decisions on what is included. <br />
<br />
In the UEC images, the most notable packages left out are '<a href="http://aws.amazon.com/developertools/351">ec2-api-tools</a>' and '<a href="http://aws.amazon.com/developertools/368">ec2-ami-tools</a>'. I personally use the ec2-api-tools and ec2-ami-tools quite frequently, and Amazon has done a great job with them. However, the license and the lack of source code prevent them from being in Ubuntu 'main'.<br />
<br />
Fortunately:<br />
a.) There are packages made available in the Ubuntu '<a href="https://help.ubuntu.com/community/Repositories/Ubuntu">multiverse</a>' component.<br />
b.) The euca2ools package is installed by default and provides an almost drop in replacement for the ec2-api-tools and ec2-ami-tools.<br />
<br />
I think that many users of EC2 aren't aware of the euca2ools, so I'd like to give some information on how to use them here.<br />
<br />
The ec2-api-tools use the SOAP interface and thus use the "EC2_CERT" and "EC2_PRIVATE_KEY". The euca2ools sit on top of the excellent <a href="https://code.google.com/p/boto/">boto</a> project. Boto uses the AWS REST API, which means authentication is done with your "Access Key" and "Secret Key". As a result, configuration is a little different. (Note: when bundling images, you still need the EC2_CERT and EC2_PRIVATE_KEY for encryption/signing.)<br />
<br />
Configuration for euca2ools can be done via environment variables (EC2_URL, EC2_ACCESS_KEY, EC2_SECRET_KEY, EC2_CERT, EC2_PRIVATE_KEY, S3_URL, EUCALYPTUS_CERT) or via config file. I personally prefer the configuration file approach.<br />
<br />
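If you go the environment-variable route instead, the setup is just a handful of exports before running any euca-* command. A minimal sketch (the key values are placeholders, not real credentials):

```shell
# Placeholder credentials; substitute your own keys and region.
export EC2_ACCESS_KEY=ABCDEFGHIJKLMNOPQRST
export EC2_SECRET_KEY=UVWXYZ0123456789abcdefghijklmnopqrstuvwx
export EC2_URL=https://ec2.us-east-1.amazonaws.com
export S3_URL=https://s3.amazonaws.com:443
echo "${EC2_URL}"
```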
Here is my ~/.eucarc that is configured to operate with the EC2 us-east-1 region.<br />
<code><br />
CRED_D=${HOME}/creds/aws-smoser<br />
EC2_REGION="${EC2_REGION:-us-east-1}"<br />
EC2_CERT=${CRED_D}/cert.pem<br />
EC2_PRIVATE_KEY=${CRED_D}/pk.pem<br />
EC2_ACCESS_KEY=ABCDEFGHIJKLMNOPQRST<br />
EC2_SECRET_KEY=UVWXYZ0123456789abcdefghijklmnopqrstuvwx<br />
EC2_USER_ID=950047163771<br />
EUCALYPTUS_CERT=/etc/ec2/amitools/cert-ec2.pem<br />
EC2_URL=https://ec2.${EC2_REGION}.amazonaws.com<br />
S3_URL=https://s3.amazonaws.com:443<br />
</code><br />
<br />
Things to note above:<br />
<ul><li>euca2ools sources the ~/.eucarc file with bash, and then reads out the values of EC2_REGION, EC2_CERT, EC2_PRIVATE_KEY, EC2_ACCESS_KEY, EC2_USER_ID, EC2_URL, S3_URL. This means that you can use other bash functionality in the config file, as I've done above with 'EC2_REGION'. This allows me to do something like:<br />
<code><br />
EC2_REGION=us-west-1 euca-describe-images<br />
</code><br />
<li>If there is no configuration file specified with '--config', then those values will be read from environment variables</LI><br />
<li>Amazon's public certificate from the ami tools is included with euca2ools in ubuntu, and located in /etc/ec2/amitools/cert-ec2.pem</LI><br />
<li>Many of the euca2ools commands will run significantly faster than the ec2-api-tools. The reason for the slowness of the ec2-api-tools is their many Java dependencies (please correct me if I'm wrong).<br />
<li>Your ~/.eucarc file contains credentials, and therefore it should be protected with filesystem permissions (i.e. 'chmod go-r ~/.eucarc').<br />
</UL>Hopefully this will make it easier for you to use euca2ools with EC2 on Ubuntu.Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com3tag:blogger.com,1999:blog-8612900155488851021.post-14399047200840102302010-12-14T12:07:00.000-08:002010-12-14T14:06:26.259-08:00Ubuntu Natty Narwhal Cluster Compute Instances<p>Some time ago, Amazon <a href="http://aws.amazon.com/hpc-applications/">announced</a> two new instance types aimed at high performance computing. The new types differ from Amazon's previous offerings in that</p><ul><li>They use <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Xen">Xen "hvm"</a> (Hardware Virtualization Mode) rather than 'paravirtualization'<br />
<li>Only privileged accounts can create images of the 'hvm' virtualization type.<br />
</ul><p>The result is that there are very few public images for cluster compute nodes, and up until today, there were no Ubuntu images.</p><p>I'm happy to announce that you can now run Official Ubuntu images on cluster compute instance types. From today forward we will be publishing daily builds of Natty Narwhal.</p><p>These images are identical to the other Ubuntu images. For AMI ids, you can browse the list at <a href="http://uec-images.ubuntu.com/server/natty/current/">http://uec-images.ubuntu.com/server/natty/current/</a>, or use more machine-friendly data at <a href="http://uec-images.ubuntu.com/query">http://uec-images.ubuntu.com/query</a>.</p><p>There is one known bug (<a href="http://bugs.launchpad.net/bugs/690286">bug 690286</a>) that prevents you from using ephemeral storage on the CC nodes.</p><p>If you've got a couple of dollars burning a hole in your pocket, you can try one out with:</p><code>
qurl="http://uec-images.ubuntu.com/query"<br />
ami_id=$(curl --silent "${qurl}/natty/server/daily.current.txt" |<br />
awk '-F\t' '$11 == "hvm" && $7 == "us-east-1" { print $8 }')<br />
ec2-run-instances --key mykey --instance-type cc1.4xlarge "${ami_id}"
</code>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-4223569142144517802010-12-07T07:33:00.000-08:002010-12-07T07:33:21.668-08:00lvm resizing is easyI have a local mirror of the ubuntu archive, using some scripts based on the <a href="https://wiki.ubuntu.com/Mirrors/Scripts">Ubuntu Wiki</a>. When I set up "/archive" on my local mirror, I used lvm. The reason for that was primarily so that I could use <a href="https://help.ubuntu.com/community/SbuildLVMHowto">sbuild with lvm</a>. <br />
<br />
Since then, two things have happened:<br />
<ul> <li>sbuild has gained the ability to use aufs rather than LVM snapshots. The solution is much lighter weight, and doesn't require some lvm space sitting around waiting to be used.</li>
<li>The Ubuntu archive has grown in size from ~250G to ~400G.</li>
</ul><br />
So, it was time for me to grow the filesystem holding my archive to accommodate. For my own record, and possibly for others, I thought I'd share what I did.<br />
<br />
<code><br />
$ sudo pvscan<br />
PV /dev/sdb VG smoser-vol1 lvm2 [931.51 GiB / 315.57 GiB free]<br />
PV /dev/sda1 VG nelson lvm2 [148.77 GiB / 44.00 MiB free]<br />
$ sudo lvscan<br />
ACTIVE '/dev/smoser-vol1/smlv0' [585.94 GiB] inherit<br />
ACTIVE '/dev/smoser-vol1/hardy_chroot-i386' [5.00 GiB] inherit<br />
ACTIVE '/dev/smoser-vol1/lucid_chroot-i386' [5.00 GiB] inherit<br />
ACTIVE '/dev/smoser-vol1/karmic_chroot-i386' [5.00 GiB] inherit<br />
ACTIVE '/dev/smoser-vol1/karmic_chroot-amd64' [5.00 GiB] inherit<br />
ACTIVE '/dev/smoser-vol1/lucid_chroot-amd64' [5.00 GiB] inherit<br />
ACTIVE '/dev/smoser-vol1/hardy_chroot-amd64' [5.00 GiB] inherit<br />
ACTIVE '/dev/nelson/root' [142.65 GiB] inherit<br />
ACTIVE '/dev/nelson/swap_1' [6.07 GiB] inherit<br />
</code><br />
<br />
I had two physical volumes, sdb and sda1. 'sdb' held my old sbuild snapshots and also some free space. So, I deleted the sbuild snapshots with:<br />
<br />
<code><br />
$ sudo lvremove /dev/smoser-vol1/hardy_chroot-i386 \<br />
/dev/smoser-vol1/lucid_chroot-i386 /dev/smoser-vol1/karmic_chroot-i386 \<br />
/dev/smoser-vol1/karmic_chroot-amd64 /dev/smoser-vol1/lucid_chroot-amd64 \<br />
/dev/smoser-vol1/hardy_chroot-amd64<br />
</code><br />
<br />
Then I resized the 'smlv0' volume that holds '/archive' to the largest size that the physical volume allows:<br />
<br />
<code><br />
$ sudo vgdisplay smoser-vol1<br />
VG Name smoser-vol1<br />
System ID <br />
Format lvm2<br />
<snip><br />
VG Size 931.51 GiB<br />
...<br />
$ sudo lvresize /dev/smoser-vol1/smlv0 --size 931.51G<br />
Rounding up size to full physical extent 931.51 GiB<br />
Extending logical volume smlv0 to 931.51 GiB<br />
Logical volume smlv0 successfully resized<br />
</code><br />
<br />
Then, just resize the ext4 filesystem on that volume:<br />
<code><br />
$ grep archive /proc/mounts<br />
/dev/mapper/smoser--vol1-smlv0 /archive ext4 rw,relatime,barrier=1,data=ordered 0 0<br />
$ sudo resize2fs /dev/mapper/smoser--vol1-smlv0<br />
resize2fs 1.41.11 (14-Mar-2010)<br />
Filesystem at /dev/mapper/smoser--vol1-smlv0 is mounted on /archive; on-line resizing required<br />
old desc_blocks = 37, new_desc_blocks = 59<br />
Performing an on-line resize of /dev/mapper/smoser--vol1-smlv0 to 244190208 (4k) blocks.<br />
<br />
The filesystem on /dev/mapper/smoser--vol1-smlv0 is now 244190208 blocks long.<br />
</code><br />
<br />
That last operation did take probably 30 minutes, but in the end, I now have:<br />
<br />
<code><br />
$ df -h /archive/<br />
Filesystem Size Used Avail Use% Mounted on<br />
/dev/mapper/smoser--vol1-smlv0<br />
917G 544G 327G 63% /archive<br />
</code>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com1tag:blogger.com,1999:blog-8612900155488851021.post-37500494462414480722010-11-03T13:47:00.000-07:002011-01-20T10:20:18.364-08:00Using Ubuntu Images on AWS "Free Tier"<p><B>[Update 2011-01-20]</B><br />
<blockquote><B><br />
There are now official Ubuntu AMIs that fit into the Free Tier disk requirements. You can get a list of the AMIs for <A HREF="http://uec-images.ubuntu.com/releases/lucid">10.04</A> or <A HREF="http://uec-images.ubuntu.com/releases/maverick">10.10</A>.<br />
</B><br />
This article is still useful as documentation, but is not necessary if you only want to use Ubuntu on Amazon's Free Tier.<br />
</blockquote></p>Amazon AWS recently announced an <a href="http://aws.amazon.com/free/">AWS Free Usage Tier</a>. In summary, new AWS customers can run a t1.micro instance 24x7 for the next year and pay nothing (or at least very little).<br />
<br />
There are various restrictions on what you get for free, but the most interesting to the Ubuntu images is:<br />
<blockquote>10 GB of Amazon Elastic Block Storage, plus 1 million I/Os, 1 GB of snapshot storage, 10,000 snapshot Get Requests and 1,000 snapshot Put Requests*<br />
</blockquote><br />
The Official Ubuntu Images have a 15GB root filesystem. That means that if you're using any of our official images (<a href="http://uec-images.ubuntu.com/releases/lucid">10.04</a>, <a href="http://uec-images.ubuntu.com/releases/maverick">10.10</a>), you will be charged for 5GB of provisioned storage per month. In the us-east-1 region that would be $0.50/month; in other regions, $0.55/month.<br />
<br />
This issue has been <a href="http://developer.amazonwebservices.com/connect/thread.jspa?messageID=201038">raised</a> on the <a href="http://developer.amazonwebservices.com/connect/category.jspa?categoryID=3">AWS Discussion Forums</a>, but it seems like Amazon is not willing to budge.<br />
<br />
Similarly, bug <a href="https://bugs.launchpad.net/ubuntu-on-ec2/+bug/670161">670161</a> was opened requesting "10GB root partition for EBS boot AMIs on EC2". If you're interested in following this discussion, subscribe yourself to that bug. I will make sure that it is kept up to date.<br />
<br />
I don't want to comment right now on whether or not we will release future EBS root AMIs of 10.04 and 10.10 with a 10GB filesystem instead of a 15GB filesystem. What I <i>do</i> want to discuss is how you can create your own AMI that has a 10GB (or smaller) root filesystem, which will perform otherwise identically to the official images.<br />
<br />
If you want to use Ubuntu on the Amazon Free Tier *right now*, you can follow these instructions, which assume you have the ec2-api-tools correctly configured on your laptop and a keypair named "mykey" available in the target region.<br />
<br />
In the code (shell prompt) snippets below, a '$' prompt indicates a command run on my laptop, a '%' prompt indicates a command run on the EC2 instance, and lines beginning with a '#' are comments.<br />
<br />
Launch an instance to work with:<br />
<code><br />
# us-east-1 ami-548c783d canonical ebs/ubuntu-maverick-10.10-amd64-server-20101007.1<br />
$ ec2-run-instances --region us-east-1 --instance-type t1.micro \<br />
--key mykey ami-548c783d<br />
$ iid=i-1855ea75<br />
$ zone=$(ec2-describe-instances $iid |<br />
awk '-F\t' '$2 == iid { print $12 }' iid=${iid} )<br />
$ echo ${zone}<br />
us-east-1d<br />
$ host=$(ec2-describe-instances $iid |<br />
awk '-F\t' '$2 == iid { print $4 }' iid=${iid} )<br />
$ echo ${host}<br />
ec2-174-129-61-12.compute-1.amazonaws.com<br />
</code><br />
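The awk invocations above rely on two details worth noting: ec2-describe-instances output is tab-separated, and a trailing var=value argument passes a shell value into awk as an awk variable. A runnable sketch against a hypothetical, abridged output line (real output has many more fields):

```shell
# One abridged, hypothetical line of tab-separated ec2-describe-instances
# output (field 2: instance id, field 4: public hostname).
iid=i-1855ea75
host=$(printf 'INSTANCE\ti-1855ea75\tami-548c783d\tec2-174-129-61-12.compute-1.amazonaws.com\n' |
    awk '-F\t' '$2 == iid { print $4 }' iid=${iid})
echo "${host}"
```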
<br />
Create a volume of the desired size in the correct zone, and attach it to the instance. Change '10' to '5' if you want a 5GB root filesystem.<br />
<code><br />
$ ec2-create-volume --size 10 --availability-zone ${zone}<br />
$ vol=vol-c64d55af<br />
$ ec2-attach-volume --instance ${iid} --device /dev/sdh ${vol}<br />
</code><br />
<br />
Then ssh to ubuntu@${host}, download the UEC reference image, and extract it. Below, I've downloaded the i386 image for maverick. You can browse <a href="http://uec-images.ubuntu.com/releases">http://uec-images.ubuntu.com/releases/10.10/release/</a> to find an amd64 image or a 10.04 base image.<br />
<code><br />
% sudo chown ubuntu:ubuntu /mnt<br />
% cd /mnt<br />
% url=http://uec-images.ubuntu.com/releases/10.10/release/ubuntu-10.10-server-uec-i386.tar.gz<br />
% tarball=${url##*/}<br />
% wget ${url} -O ${tarball}<br />
% tar -Sxvzf ${tarball}<br />
maverick-server-uec-i386.img<br />
maverick-server-uec-i386-vmlinuz-virtual<br />
maverick-server-uec-i386-loader<br />
maverick-server-uec-i386-floppy<br />
README.files<br />
% img=maverick-server-uec-i386.img<br />
% mkdir src target<br />
</code><br />
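As an aside, the ${url##*/} expansion used above is plain POSIX parameter expansion: it strips the longest prefix matching '*/', leaving just the file name:

```shell
url=http://uec-images.ubuntu.com/releases/10.10/release/ubuntu-10.10-server-uec-i386.tar.gz
# Strip everything up to and including the last '/'.
tarball=${url##*/}
echo "${tarball}"
```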
<br />
Create the target filesystem, mount the attached volume, and copy the source filesystem contents to the target filesystem using rsync.<br />
<code><br />
% sudo mount -o loop,ro ${img} /mnt/src<br />
% sudo mkfs.ext4 -L uec-rootfs /dev/sdh<br />
% sudo mount /dev/sdh /mnt/target<br />
# the rsync could take quite a while. for me it took 22 seconds.<br />
% sudo rsync -aXHAS /mnt/src/ /mnt/target<br />
% sudo umount /mnt/target<br />
% sudo umount /mnt/src<br />
</code><br />
<br />
Now, back on the laptop, snapshot the volume.<br />
<code><br />
$ ec2-create-snapshot ${vol}<br />
$ snap=snap-b97dfdd3<br />
# now you have to wait for snapshot to be 'completed'<br />
$ ec2-describe-snapshots ${snap}<br />
SNAPSHOT snap-b97dfdd3 vol-c64d55af completed 2010-11-03T17:31:52+0000 100% 950047163771 10<br />
</code><br />
<br />
Turn the contents of that volume into an AMI. Note: you must set 'arch', 'rel', and 'region' correctly. We then use that information to get the aki associated with the most recently released Ubuntu image.<br />
<br />
<code><br />
$ rel=maverick; region=us-east-1; arch=i386; # arch=amd64<br />
$ [ $arch = amd64 ] && xarch=x86_64 || xarch=${arch}<br />
$ qurl=http://uec-images.ubuntu.com/query/${rel}/server/released.current.txt<br />
$ aki=$(curl --silent "${qurl}" |<br />
awk '-F\t' '$5 == "ebs" && $6 == arch && $7 == region { print $9 }' \<br />
arch=$arch region=$region )<br />
$ echo ${aki}<br />
aki-407d9529<br />
$ ec2-register --snapshot ${snap} \<br />
--architecture=${xarch} --kernel=${aki} \<br />
--name "my-ubuntu-${rel}" --description "my-ubuntu-${rel}"<br />
IMAGE ami-4483742d<br />
$ ami=ami-4483742d<br />
</code><br />
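The query files are tab-separated, one line per published image; the awk filter above matches the store, arch, and region columns and prints the aki column. A runnable sketch against a hypothetical sample line (the real file at ${qurl} has the authoritative data; fields 5, 6, 7, and 9 are the ones the filter actually uses):

```shell
# Hypothetical sample line in the query-file format (field 5: store,
# field 6: arch, field 7: region, field 9: aki).
sample='maverick\tserver\trelease\t20101007.1\tebs\ti386\tus-east-1\tami-548c783d\taki-407d9529'
aki=$(printf "${sample}\n" |
    awk '-F\t' '$5 == "ebs" && $6 == arch && $7 == region { print $9 }' \
    arch=i386 region=us-east-1)
echo "${aki}"
```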
<br />
Clean up your instance and volume<br />
<code><br />
$ ec2-detach-volume ${vol}<br />
$ ec2-terminate-instances ${iid}<br />
$ ec2-delete-volume ${vol}<br />
</code><br />
<br />
And now run your instance<br />
<code><br />
$ ec2-run-instances --instance-type t1.micro ${ami}<br />
$ ssh ubuntu@<new-host-id><br />
% sudo apt-get update && sudo apt-get dist-upgrade<br />
# if you got a new kernel (linux-virtual package), then you will<br />
# need to reboot<br />
% sudo reboot<br />
</code><br />
<br />
Now, your newly created image has filesystem contents that are identical to those of the official Ubuntu images, but with a 10G filesystem.<br />
<br />
Once you've launched your image, you can actually clean up the snapshot and the AMI that you launched from. To do that:<br />
<code><br />
ec2-deregister ${ami}<br />
ec2-delete-snapshot ${snap}<br />
</code><br />
<br />
The cost of the above operations will probably be on the order of pennies, and will remove the costs you would have incurred due to having a 15G root volume.Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com17tag:blogger.com,1999:blog-8612900155488851021.post-83329087883126025812010-11-03T11:58:00.000-07:002010-11-03T13:10:39.922-07:00create image with XFS root filesystem from UEC ImagesA <a href="http://groups.google.com/group/ec2ubuntu/t/7ceab2d515cd3c53">post</a> was made to the <a href="http://groups.google.com/group/ec2ubuntu">ec2ubuntu google group</a> asking if Ubuntu had any plans to create images with XFS root filesystems.<br />
<br />
The <a href="http://uec-images.ubuntu.com/releases">Official Ubuntu Images</a> for 10.04 LTS (lucid) and prior have an ext3 root filesystem. For Ubuntu 10.10 (maverick) and the development builds of 11.04, the filesystem is ext4. <br />
<br />
The selection of ext3 or ext4 follows the filesystem selected by default on an Ubuntu install from CD or DVD; that selection is carried over to our Ubuntu images for UEC or EC2. The 10.04 images really should have been ext4, but the change didn't make it in for that release.<br />
<br />
Ubuntu fully supports the XFS filesystem; it simply wasn't chosen as the default. The -virtual kernel has XFS support available as a module, and the xfsprogs package is in the main archive.<br />
<br />
So, just as you can get full support for the Ubuntu images using ext4, you can get full support from Ubuntu (and paid support from Canonical) by using xfs as your root filesystem. You will simply have to create your own images.<br />
<br />
Luckily, primarily because the Ubuntu images are downloadable at <a href="http://uec-images.ubuntu.com">http://uec-images.ubuntu.com</a>, the process for creating an XFS-based EBS image is trivial.<br />
<br />
In the code (shell prompt) snippets below, a '$' prompt indicates a command run on my laptop, a '%' prompt indicates a command run on the EC2 instance, and lines beginning with a '#' are comments.<br />
<br />
Launch an instance to work with:<br />
<code><br />
# us-east-1 ami-688c7801 canonical ubuntu-maverick-10.10-amd64-server-20101007.1<br />
$ ec2-run-instances --region us-east-1 --instance-type m1.large \<br />
--key mykey ami-688c7801<br />
$ iid=i-bcc679d1<br />
$ zone=$(ec2-describe-instances $iid |<br />
awk '-F\t' '$2 == iid { print $12 }' iid=${iid} )<br />
$ echo ${zone}<br />
us-east-1d<br />
$ host=$(ec2-describe-instances $iid |<br />
awk '-F\t' '$2 == iid { print $4 }' iid=${iid} )<br />
$ echo ${host}<br />
ec2-174-129-61-12.compute-1.amazonaws.com<br />
</code><br />
<br />
<br />
Create a volume of the desired size in the correct zone, and attach it to the instance.<br />
<code><br />
$ ec2-create-volume --size 10 --availability-zone ${zone}<br />
$ vol=vol-c64d55af<br />
$ ec2-attach-volume --instance ${iid} --device /dev/sdh ${vol}<br />
</code><br />
<br />
Then ssh to ubuntu@${host}, download the UEC reference image, extract it, and install the necessary packages:<br />
<code><br />
% sudo chown ubuntu:ubuntu /mnt<br />
% cd /mnt<br />
% url=http://uec-images.ubuntu.com/releases/10.10/release/ubuntu-10.10-server-uec-i386.tar.gz<br />
% tarball=${url##*/}<br />
% wget ${url} -O ${tarball}<br />
% tar -Sxvzf ${tarball}<br />
maverick-server-uec-i386.img<br />
maverick-server-uec-i386-vmlinuz-virtual<br />
maverick-server-uec-i386-loader<br />
maverick-server-uec-i386-floppy<br />
README.files<br />
% img=maverick-server-uec-i386.img<br />
% mkdir src target<br />
% sudo apt-get install xfsprogs<br />
</code><br />
<br />
Create the target filesystem, mount the attached volume, and copy the source filesystem contents to the target filesystem using rsync.<br />
<code><br />
% sudo mount -o loop,ro ${img} /mnt/src<br />
% sudo mkfs.xfs -L uec-rootfs /dev/sdh<br />
% sudo mount /dev/sdh /mnt/target<br />
% sudo rsync -aXHAS /mnt/src/ /mnt/target<br />
% sudo umount /mnt/target<br />
% sudo umount /mnt/src<br />
</code><br />
<br />
Above, you could have mounted /proc and /sys into /mnt/target, chrooted into it and done a dist-upgrade. I left that out for simplicity.<br />
<br />
Now, back on the laptop, snapshot the volume.<br />
<code><br />
$ ec2-create-snapshot ${vol}<br />
$ snap=snap-b97dfdd3<br />
# now you have to wait for snapshot to be 'completed'<br />
$ ec2-describe-snapshots ${snap}<br />
SNAPSHOT snap-b97dfdd3 vol-c64d55af completed 2010-11-03T17:31:52+0000 100% 950047163771 10<br />
</code><br />
<br />
Turn the contents of that volume into an AMI. Note: you must set 'arch', 'rel', and 'region' correctly. We then use that information to get the aki associated with the most recently released Ubuntu image.<br />
<br />
<code><br />
$ rel=maverick; region=us-east-1; arch=i386; # arch=amd64<br />
$ [ $arch = amd64 ] && xarch=x86_64 || xarch=${arch}<br />
$ [ $arch = amd64 ] && blkdev=/dev/sdb || blkdev=/dev/sda2<br />
$ qurl=http://uec-images.ubuntu.com/query/${rel}/server/released.current.txt<br />
$ aki=$(curl --silent "${qurl}" |<br />
awk '-F\t' '$5 == "ebs" && $6 == arch && $7 == region { print $9 }' \<br />
arch=$arch region=$region )<br />
$ echo ${aki}<br />
aki-407d9529<br />
$ ec2-register --snapshot ${snap} \<br />
--architecture=${xarch} --kernel=${aki} \<br />
--block-device-mapping ${blkdev}=ephemeral0 \<br />
--name "my-${rel}-xfs-root" --description "my-${rel}-xfs-description"<br />
IMAGE ami-4483742d<br />
$ ami=ami-4483742d<br />
</code><br />
<br />
Clean up your instance and volume<br />
<code><br />
$ ec2-detach-volume ${vol}<br />
$ ec2-terminate-instances ${iid}<br />
$ ec2-delete-volume ${vol}<br />
</code><br />
<br />
And now run your instance<br />
<code><br />
$ ec2-run-instances --instance-type t1.micro ${ami}<br />
</code><br />
<br />
Then ssh to your instance and verify that the root filesystem is in fact xfs:<br />
<code><br />
% grep uec-rootfs /proc/mounts <br />
/dev/disk/by-label/uec-rootfs / xfs rw,relatime,attr2,nobarrier,noquota 0 0<br />
</code><br />
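If you'd rather script that check than eyeball it, the same field extraction works in awk. This sketch runs against a copy of the sample line above; on the instance you would read /proc/mounts directly:

```shell
# Sample /proc/mounts line from above; field 2 is the mount point,
# field 3 the filesystem type.
line='/dev/disk/by-label/uec-rootfs / xfs rw,relatime,attr2,nobarrier,noquota 0 0'
fstype=$(printf '%s\n' "${line}" | awk '$2 == "/" { print $3 }')
echo "${fstype}"
```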
<br />
Now, your newly created image has filesystem contents that are identical to those of the official Ubuntu images.<br />
<br />
Some notes on the above:<br />
<ul><li>Many people believe that a transition to btrfs as the default filesystem is inevitable, possibly even for the 12.04 LTS release. Doing this on EC2 would require that Amazon release btrfs support in a pv-grub kernel.</li>
<li>Outside of creating a 'xfs' filesystem, the steps above are very generic "create a custom EBS root image" instructions. In fact, the process outlined above is used for the actual publishing of ebs images via the <a href="http://bazaar.launchpad.net/~ubuntu-on-ec2/ubuntu-on-ec2/ec2-publishing-scripts/files">ec2-publishing-scripts</a> (see <a href="http://bazaar.launchpad.net/~ubuntu-on-ec2/ubuntu-on-ec2/ec2-publishing-scripts/annotate/head%3A/ec2-image2ebs">ec2-image2ebs</a>)</li>
<li>The process above will work using the maverick-based images. Lucid images are not likely to work out of the box because they do not boot with a ramdisk. Whereas maverick images use pv-grub to load the kernel and ramdisk from inside the image, lucid kernels are loaded by Xen directly, and Canonical did not publish ramdisks for the lucid release.</li>
</ul>Scott Moserhttp://www.blogger.com/profile/01336409131491231474noreply@blogger.com6