Monday, August 29, 2011
RFC: anyone using vmdk or ovf files from cloud-images?
If you are using the OVF files or VMDK files, please respond.
Date: Fri, 26 Aug 2011 16:34:41
From: Scott Moser
Subject: Anyone using the .ovf and/or .vmdk files on cloud-images?

Hey all,
Is anyone using the .vmdk or .ovf files on http://cloud-images.ubuntu.com [1] ?

In the 11.04 cycle, I started building .ovf files with corresponding .vmdk images. The goal of this was to make Ubuntu available for use via software that supports importing OVF. I chose to create the disks as VMDK-formatted images rather than a more open-source-friendly format such as qcow. The OVF file and associated disk image is consumable by both VMware tools and by VirtualBox. There are no OVF consumers that I'm aware of that would function with a qcow (or even 'raw') disk image. I feel compelled to also mention that this format of vmdk (compressed) is not supported by qemu-img or kvm. Since having an OVF that could not be used by any software is not significantly more useful than not having an OVF, we went with vmdk.

Largely prompted by the interest in providing a download format that is ideal for OpenStack users [2], I am reconsidering my decision. I would like to avoid adding another full disk image format to the output of the build scripts. As a result, I'm thinking about replacing the .vmdk images with compressed qcow images and updating the OVF files to reference those. The reason behind not wanting to just add yet another deliverable is that more downloads are confusing, and disk space is not necessarily free. If I can drop a deliverable not used by anyone, then I'd like to do that.

So, would anyone object to the removal of .vmdk files from cloud-images? Is anyone using these that could not just as easily use qcow2-formatted images?

Thanks,
Scott
--
[1] http://cloud-images.ubuntu.com/server/oneiric/current/
[2] https://bugs.launchpad.net/ubuntu/+bug/833265
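If the .vmdk files do go away and you still need one for a local hypervisor, you could likely regenerate it yourself from the qcow2 download. A rough sketch (the filename here is illustrative, not the real download name):
$ qemu-img convert -O vmdk oneiric-server-cloudimg-amd64.qcow2 oneiric-server-cloudimg-amd64.vmdk
The result is a plain sparse vmdk rather than the compressed, stream-optimized variant discussed above.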
Tuesday, August 9, 2011
Amazon issues with EBS affect Ubuntu images in the EU-WEST region
Note: This blog post has been updated in-place.
We have received information from Amazon that the EBS snapshots for our released 10.04 images from 20110719 were not affected (ami-5c417128 and ami-52417126). It seems that an API issue incorrectly marked them as such; it was an error in our logic that associated snapshot IDs with AMIs that gave us the incorrect output. The only Ubuntu images that were affected were old daily builds and milestone releases. If you are interested in reading the original message, please do so in the Ubuntu cloud-announce mailing list archives.
This morning we received an automated email [1] from Amazon informing us of possible loss of data in EBS snapshots in the EU-WEST-1 region. Our engineering team immediately started an assessment of the damage this might have caused to the EBS images that we publish for our users. We are working with Amazon to remediate customer impact and prevent any future outages.
A number of non-current daily build and old alpha or beta images have been affected, but we hope that no one would have been using these images for production use; we are not planning corrective actions for these images. You can see the full list of AMIs affected at http://paste.ubuntu.com/662210/.
To have this type of announcement sent to your email directly, please subscribe to our ubuntu-cloud-announce mailing list at https://lists.ubuntu.com/mailman/listinfo/ubuntu-cloud-announce.
Our support services are available to help customers of the Ubuntu Advantage Cloud Guest program. Details about this program can be found at http://www.canonical.com/enterprise-services/ubuntu-advantage/cloud
[1] Email received from Amazon on Aug 9, 2011 at 9:11 UTC:
Hello,
We've discovered an error in the Amazon EBS software that cleans up unused snapshots. This has affected at least one of your snapshots in the EU-West Region.
During a recent run of this EBS software in the EU-West Region, one or more blocks in a number of EBS snapshots were incorrectly deleted. The root cause was a software error that caused the snapshot references to a subset of blocks to be missed during the reference counting process. This process compares the blocks scheduled for deletion to the blocks referenced in customer snapshots. As a result of the software error, the EBS snapshot management system in the EU-West Region incorrectly thought some of the blocks were no longer being used and deleted them. We've addressed the error in the EBS snapshot system to prevent it from recurring.
We have now disabled all of your snapshots that contain these missing blocks. You can determine which of your snapshots were affected via the AWS Management Console or the DescribeSnapshots API call. The status for any affected snapshots will be shown as "error."
We have created copies of your affected snapshots where we've replaced the missing blocks with empty blocks. You can create a new volume from these snapshot copies and run a recovery tool on it (e.g. a file system recovery tool like fsck); in some cases this may restore normal volume operation. These snapshots can be identified via the snapshot Description field which you can see on the AWS Management Console or via the DescribeSnapshots API call. The Description field contains "Recovery Snapshot snap-xxxx" where snap-xxx is the id of the affected snapshot. Alternately, if you have any older or more recent snapshots that were unaffected, you will be able to create a volume from those snapshots without error. For additional questions, you may open a case in our Support Center: https://aws.amazon.com/support/createCase
We apologize for any potential impact this might have on your applications.
Sincerely,
AWS Developer Support
This message was produced and distributed by Amazon Web Services LLC, 410 Terry Avenue North, Seattle, Washington 98109-5210
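If you want to check your own account for affected snapshots from the command line, something like the following should work; it assumes the usual ec2-describe-snapshots column order, with the status in the fourth whitespace-delimited field:
# list snapshots in eu-west-1 whose status is "error"
$ ec2-describe-snapshots --region eu-west-1 | awk '$4 == "error"'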
Friday, July 29, 2011
How to find the right Ubuntu AMI with tools
When we publish Ubuntu AMIs, we simultaneously publish machine consumable data to https://cloud-images.ubuntu.com/query/. The data there contains information so that you can:
- Find the latest ami of a given type (hvm/ebs/instance-store), arch, and region.
- Download the pristine image files
I think the format of the data is generally discernible, but there is some more information on the Ubuntu Wiki.
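The data is plain text, so you can also filter it directly with standard tools. A rough sketch (the path and the column positions, root-store in field 5, arch in 6, region in 7, AMI id in 8, are assumptions worth checking against that wiki page):
# latest released lucid amd64 EBS-root AMI for us-east-1
$ wget -qO- https://cloud-images.ubuntu.com/query/lucid/server/released.current.txt \
    | awk -F'\t' '$5 == "ebs" && $6 == "amd64" && $7 == "us-east-1" {print $8}'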
I've put an example client together. Here is some example usage:
- Launch the latest released image in us-east-1:
$ euca-run-instances --instance-type t1.micro --key mykey $(ubuntu-ami)
- Open the Amazon EC2 console to launch the latest Oneiric daily build. You can now directly link to launching an image in the Amazon EC2 console, so combine that with this tool to open your browser at the right page:
$ ami=$(ubuntu-ami us-west-1 oneiric daily i386)
$ gnome-open https://console.aws.amazon.com/ec2/home?region=us-west-1#launchAmi=${ami}
- Download and extract the latest tarball for lucid. Here, 'pubname' is the recommended "publish name" of this AMI, which happens to correspond to the basename of the name on EC2, and 'url' is a fully qualified URL to http://cloud-images.ubuntu.com:
$ wget $(ubuntu-ami -f "%{url} -O %{pubname}.tar.gz")
$ uec-publish-tarball *.tar.gz my-ubuntu-images
I don't think I'll get this into 11.10, but I'd like to have something with this functionality in 12.04, and to support launching AMIs directly through it for ease of use. I'd love to hear input on what you'd like an "ubuntu-run-instance" command to look like and do.
Monday, July 25, 2011
Updated AWS tools PPA for Ubuntu
Right now the PPA has the following packages:
- ec2-api-tools : Amazon's EC2 command line tools
- ec2-ami-tools : Amazon's EC2 AMI tools (rebundling and uploading images)
- iamcli : Identity and Access Management (IAM) Command Line Toolkit
- rdscli : Command Line Toolkit for the Amazon Relational Database Service
Adding this repository is as easy as:
$ sudo apt-add-repository ppa:awstools-dev/awstools
$ sudo apt-get update
Then, to install the newest available version of ec2-api-tools, do:
$ sudo apt-get install ec2-api-tools
I hope that is helpful.
Monday, July 18, 2011
How to find the right Ubuntu AMI on EC2
The purpose of this post is to document how you can easily, quickly and safely find the Official Ubuntu AMIs on EC2 via the Amazon EC2 console or via your web browser.
Some General Ubuntu Information
You already may be aware of these items, but I want to point them out for those who are just getting started with Ubuntu or EC2.
- Ubuntu releases every 6 months. Each release has a version number and a codename. The most important thing to note here is that every 2 years an LTS (Long Term Support) release is made. If you want stability and support for 5 years, select an LTS release. If you want the newest packages, select the most recent release. See the Wikipedia entry for more information.
- At the time of this writing, there are 5 "regions" in Amazon EC2. Each region represents a geographical location, and each region has its own AMI IDs. Inside each region there are 2 architectures (x86_64, i386) and 2 "root store" types (EBS or instance). That means that for each build we release, we register 20 AMI IDs (5 x 2 x 2).
Easiest: Find AMIs From Your Web Browser
You can choose your interface for selecting images. Go to either:
- http://cloud.ubuntu.com/ami
At the bottom of this page, you can select the region, release, arch, or root-store. You're only shown the most recent releases here. When you've made your selection, you can copy and paste the AMI number, or just click on it to go right to the EC2 console launch page for that AMI.
or
- https://cloud-images.ubuntu.com/server/releases/
- Select your release by number or code-name
- Select 'release/': We keep historical builds around for debugging, but the 'release/' directory will always be the latest.
- Select your AMI from the table and click to launch in the console or copy and paste a command line.
Search through the Amazon EC2 Console
The EC2 Console is a graphical way to sort through AMIs and select one to launch. To launch an Official Ubuntu Image here, follow the steps below.
- Select the region you want in the top left, under 'Navigation'. Example: "US East (Virginia)"
- Click "AMIs" Do not click "Launch Instance", see note below
- For 'Viewing', select "All Images".
- Limit the results to Ubuntu Stable Release images by typing 'ubuntu-images/' in the search box. You should expand the 'AMI Name' field to be as wide as possible (and maybe shrink the others).
- Limit the results to a specific release by appending '.*' followed by the release number. For example: ubuntu-images/.*10.04
- Limit the results to a given arch by appending '.*i386' or '.*amd64'. Note: if you want to run an m1.small or c1.medium, you need 'i386'. If you want to run a t1.micro, you will need to select an 'ebs' image.
- Sort your results by AMI Name and make your selection. By sorting by AMI name, you can more easily see the newest AMI for a given set. Each AMI name ends with a number in the format YYYYMMDD (year, month, day). You want the most recent one.
- Verify the Owner is 099720109477!
Any user can register an AMI under any name. Nothing prevents a malicious user from registering an AMI that would match the search above. So, in order to be safe, you need to verify that the owner of the AMI is '099720109477'.
If "Owner" is not a column for you, click "Show/Hide" at the top right and select "Owner" to be shown.
- Click on the AMI name, then Click 'Launch'
Notes
- HTTPS Access: Of the options above, https://cloud-images.ubuntu.com/server/releases/ is currently the only one that provides data over https. This may be important to you if you are concerned about potential "Man in the Middle" attacks when finding an AMI ID. I've requested that Ahmed [kim0 in irc] support https access to https://cloud.ubuntu.com/ami .
- Web Console 'Launch Instance' dialog: I saw no way in the 'Launch Instance' dialog to see the Owner ID. Because of this, I suggest not using that dialog to find "Community AMIs". There is simply no way you can reliably know who the owner of the image is from within the console. For advanced users, I will blog sometime soon on a way to find AMIs programmatically [Hint].
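In the meantime, a hedged sketch of doing the owner check from the command line with ec2-describe-images: restricting the listing to owner 099720109477 means a look-alike AMI registered by another account never shows up (the grep pattern is just an example filter):
$ ec2-describe-images -o 099720109477 --region us-east-1 | grep 'ubuntu-images.*10.04.*amd64'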
Friday, July 15, 2011
Getting a larger root volume on a cluster compute instance
On Cluster Compute instances, it's a bit more difficult. The cluster compute instances have their root filesystem in a partition inside of the attached disk. That's all well and good, and most likely a partitioned disk is more familiar to you than one that is not partitioned. The cluster compute images have grub2 installed in the MBR of that disk.
The problem with the partition on the disk is that you can no longer simply launch the instance with a larger root volume and then 'resize2fs /dev/sda1'. This is because the kernel won't re-read the partition table of a disk until all of its partitions are unmounted. For the disk that holds your root partition, that means you basically have to reboot after a change.
To avoid that waste of precious time, we've included a utility called 'growpart' inside of the initramfs on the Ubuntu images, provided by the 'cloud-initramfs-growroot' package. This code runs before the root filesystem is busy, so the request for the kernel to re-read the partition table will work without requiring a reboot. To try it out, do:
# us-east-1 ami-1cad5275 hvm/ubuntu-natty-11.04-amd64-server-20110426
$ ec2-run-instances --region us-east-1 --instance-type cc1.4xlarge \
--block-device-mapping /dev/sda1=:20 ami-1cad5275
When you get to the instance, you'll have a 20G filesystem on /. And, if you're interested enough to look in the console output, you'll see something like:
GROWROOT: CHANGED: partition=1 start=16065 old: size=16755795 end=16771860 new: size=41913585,end=41929650
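You don't even need to log in to check; a hedged example of pulling the console output from your workstation and looking for that message (the instance id is a placeholder):
$ ec2-get-console-output --region us-east-1 i-xxxxxxxx | grep GROWROOT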
Wednesday, March 2, 2011
Start an 11.04 instance with a larger root filesystem
If you find that space somewhat limiting, it is easy to give yourself a larger root volume at instance creation time.
Launch the instance with appropriate block-device-mapping arguments
$ ec2-run-instances $AMI --key mykey --block-device-mapping /dev/sda1=:20
That will create an instance with a 20G root volume. However, the filesystem on that volume will still only occupy 8G of that space. Essentially, you'd have 12G of unused volume at the end of the disk.
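You can see that mismatch from inside the instance. A rough illustration, assuming the root device shows up as /dev/sda1 as it does on these images:
$ sudo blockdev --getsize64 /dev/sda1   # size of the block device (the 20G volume)
$ df -h /                               # size of the mounted filesystem (still ~8G)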
Resize the root volume
If you've launched an 11.04 based image newer than alpha-2, this step is not necessary. Cloud-init will do it for you. It is simply assumed that you want your root filesystem to fill all the space on its partition. I honestly cannot think of a reason why you would not want that.
Now, if you are using 10.04 or 10.10 images, you can resize your root volume easily enough after the boot. Just log in, and issue:
$ sudo resize2fs /dev/sda1
That operation should take only a few seconds or less, and you'll then have all the space you need.
Thursday, February 3, 2011
Migrating to pv-grub kernels for kernel upgrades
After the release of the Ubuntu 10.04 LTS images that use the pv-grub kernel, there were some questions on what to do if you're running from an older AMI and want to take advantage of what pv-grub offers.
Here's a short list of how you might be affected:
- If you are running an EBS root instance launched from or rebundled from an Official Ubuntu image of 10.04 LTS (Lucid Lynx) released 20110201.1 or later, 10.10 (Maverick Meerkat), Natty Narwhal, or later, then you do not need to do anything. You already are using pv-grub, and can simply apply software updates and reboot to get new kernel updates. You can stop reading now.
- If you are running an instance-store instance that is not using a pv-grub kernel, there is nothing you can do. There is simply no way to change the kernel of an instance store instance.
- If you are running an EBS-root based instance rebundled from an Ubuntu 9.10 (Karmic Koala) or older, then there is currently no supported path to getting kernel upgrades. There were no officially released EBS-root based Ubuntu images of 9.10, and with Karmic's end of life coming in April, there is not likely to be support for this new feature.
- If you are running an EBS-root instance launched from or rebundled from an official Ubuntu release of 10.04, read on.
Updating a 10.04 based image basically entails 2 steps: setting up /boot/grub/menu.lst, and then modifying your instance to use a pv-grub kernel.
Step 1: installing grub-legacy-ec2.
If you launched or rebundled your instance from an Ubuntu 10.04 image numbered 20101020 or earlier, you need to do this step. If you started from a release of 20101228 or later, you can skip this step.
- Apply software updates.
Depending on how out of date you are, this might take a while.
sudo apt-get update && sudo apt-get dist-upgrade
- Install grub-legacy-ec2
The 'grub-legacy-ec2' package is what the images use to manage /boot/grub/menu.lst. If you had used Ubuntu prior to the default selection of grub2, you will be familiar with how it works. grub-legacy-ec2 is basically just the menu.lst managing portion of the Ubuntu grub 0.97 package with some EC2 specifics thrown in.
To get a functional /boot/grub/menu.lst, all you have to do is:
sudo apt-get install grub-legacy-ec2
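As a quick sanity check (hedged), you can confirm that a menu.lst was generated and lists at least one kernel:
$ grep '^title' /boot/grub/menu.lst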
Step 2: modifying the instance to use pv-grub kernels
Now, your images should have a functional /boot/grub/menu.lst, and grub-legacy-ec2 should be properly installed such that future kernels will get automatically added and selected on reboot. However, you have to change your instance to boot using pv-grub rather than the old kernel aki that you originally started with.
- Shut down the instance
The best way to do this is probably to just issue '/sbin/poweroff' inside the instance. Alternatively, you could use the ec2 api tools, or do so from the AWS console.
% sudo /sbin/poweroff
- Modify the instance's kernel to be a pv-grub kernel
Once the instance is moved to "stopped" state, you can modify its kernel to be a pv-grub kernel. The kernel you select depends on the arch and region. See the table below for selecting which you should use:
region           arch     aki id
ap-southeast-1   x86_64   aki-11d5aa43
ap-southeast-1   i386     aki-13d5aa41
eu-west-1        x86_64   aki-4feec43b
eu-west-1        i386     aki-4deec439
us-east-1        x86_64   aki-427d952b
us-east-1        i386     aki-407d9529
us-west-1        x86_64   aki-9ba0f1de
us-west-1        i386     aki-99a0f1dc
Then, assuming $AKI represents the appropriate aki above, $IID represents your instance id, and $REGION represents your region, you can update the instance and then start it with:
$ ec2-modify-instance-attribute --region ${REGION} --kernel ${AKI} ${IID}
$ ec2-start-instances --region ${REGION} ${IID}
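If you'd like to confirm the change before (or after) starting the instance, something like this should show the new kernel attribute:
$ ec2-describe-instance-attribute --region ${REGION} --kernel ${IID}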
Your instance will start with a new hostname/IP address, so get that out of describe-instances and ssh to your instance. You can check that it has worked by looking at /proc/cmdline. Your kernel command line should look something like this:
$ cat /proc/cmdline
root=UUID=7233f657-c156-48fe-8d60-31ae6400d0cf ro console=hvc0
From here on, your instance will behave much more like a "normal server". If you apply software updates (apt-get dist-upgrade) and reboot, you'll boot into a fresh new kernel.
Getting ephemeral devices on EBS images
When you launch an instance in EC2, be it EBS root or instance-store, you are "entitled" to some amount of ephemeral storage. Ephemeral storage is basically just extra local disk that lives in the host machine where your instance will run. How much instance-store disk you are entitled to is described with the instance size descriptions.
The described amount of ephemeral storage is allocated to all instance-store instances. For EBS-root images, however, the default "block-device-mapping" is fixed when the AMI is registered. Because an EBS-root AMI may be run with different instance types, it is impossible to set the correct mapping for all scenarios. For the Ubuntu images, the default mapping only includes ephemeral0, which is present in all but t1.micro sizes.
So, if you want to "get all that you're paying for", you'll have to launch instances with different '--block-device-mapping' arguments to ec2-run-instances (or euca-run-instances). I've written a little utility named bdmapping to make that easier, and hold the knowledge.
You can use it to show the mappings for a given instance type:
$ ./bdmapping
--block-device-mapping=sdb=ephemeral0 --block-device-mapping=sdc=ephemeral1
Or, use it with command substitution when you launch an instance:
$ ec2-run-instances $(bdmapping -c m1.large) --key mykey ami-c692ec94
#
# the above is the same as running:
# ec2-run-instances --block-device-mapping=sdb=ephemeral0 \
# --block-device-mapping=sdc=ephemeral1 --instance-type=m1.large \
# --key mykey ami-c692ec94
You can view or download 'bdmapping' at https://gist.github.com/809587
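As a quick way to verify the mapping took effect, the EC2 metadata service inside a running instance lists the block device mapping it was launched with:
$ wget -qO- http://169.254.169.254/latest/meta-data/block-device-mapping/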
New Ubuntu 10.04 LTS images with pv-grub support
Yesterday I announced the availability of updated Ubuntu images on EC2 for 10.04 LTS (Lucid Lynx).
This post is mainly to mirror that one to a possibly wider audience, but also to go into more detail on the key change in these images:
You can now upgrade the kernel inside your 10.04 based EC2 image by running 'apt-get update && apt-get dist-upgrade && reboot'!
Now, for a little more detail on that.
Around the time that 10.04 was released, or shortly thereafter, Amazon announced a new EC2 feature as "Use Your Own Kernel with Amazon EC2". What this really meant was that there were now "kernels" on EC2 that were really versions of grub. By selecting this kernel, and providing a /boot/grub/menu.lst inside your image, the instance would be in control of the kernel booted. Previously, the loading of a kernel was done by the Xen hypervisor, and could not be changed at all for an instance-store image, and only non-trivially for an EBS-root instance.
Yes, you read that right, midway through the year of 2010 you were able to change the kernel that you were running in an EC2 instance with a software update and a reboot.
We took advantage of this support in our 10.10 images, but for many people, only LTS (Long Term Support) releases are interesting. So, to satisfy those people, we've brought this functionality to our 10.04 images.
If you're using our images, or have rebundled an image starting from them, I *strongly* suggest updating to the 20110201.1 images or anything later. You'll want to do that because
- You can receive kernel upgrades as normal software upgrades to your running instance.
- This is by far the most supportable route for upgrading from 10.04 to our next LTS (12.04). If your instance is not using the pv-grub kernel and you want to upgrade to a newer release, you will have to upgrade, shut down the instance, modify the associated kernel, and start the instance back up. That is both more painful and will result in longer downtime.
So, in short, grab our new images!
Tuesday, January 11, 2011
Failsafe and manual management of kernels on EC2
The file /boot/grub/menu.lst is managed by the grub-legacy-ec2 package. The program 'update-grub-legacy-ec2' is called on installation of Ubuntu kernels through files that are installed in /etc/kernel/postinst.d and /etc/kernel/postrm.d.
By default, as with other Ubuntu systems, the kernel with the highest version will automatically be selected as the default and booted on the next boot. Because the EC2 console is output-only (you cannot interact with the grub menu at boot), you may want to manually manage your selected kernel. This can be done by modifying /boot/grub/menu.lst to use the grub "fallback" code.
I'll launch an instance of the current released maverick (ami-ccf405a5 in us-east-1 ubuntu-maverick-10.10-i386-server-20101225). Then, on the instance, create hard links to the default kernel and ramdisk so even on apt removal, they'll stick around, and then change /boot/grub/menu.lst to use those kernels.
sudo ln /boot/vmlinuz-$(uname -r) /boot/vmlinuz-failsafe
sudo ln /boot/initrd.img-$(uname -r) /boot/initrd.img-failsafe
Then, copy the existing entry in /boot/grub/menu.lst to a new entry above the automatic section. I've changed/added:
# You can specify 'saved' instead of a number. In this case, the default entry
# is the entry saved with the command 'savedefault'.
# WARNING: If you are using dmraid do not use 'savedefault' or your
# array will desync and will not let you boot your system.
default saved
...<snip>...
# Put static boot stanzas before and/or after AUTOMAGIC KERNEL LIST
# this is the failsafe kernel, it will be '0' as it is the first
# entry in this file
title Failsafe kernel
root (hd0)
kernel /boot/vmlinuz-failsafe root=LABEL=uec-rootfs ro console=hvc0 FAILSAFE
initrd /boot/initrd.img-failsafe
savedefault
title Ubuntu 10.10, kernel 2.6.35-24-virtual
root (hd0)
kernel /boot/vmlinuz-2.6.35-24-virtual root=LABEL=uec-rootfs ro console=hvc0 TEST-KERNEL
initrd /boot/initrd.img-2.6.35-24-virtual
savedefault 0
And then update grub to record that the first entry is the 'saved' default, which for grub 1 (or 0.97) means modifying /boot/grub/default.
sudo grub-set-default 0
sudo reboot
Now, a reboot will boot into the failsafe kernel, which we can verify by checking /proc/cmdline and seeing 'FAILSAFE'. Then, to test our "TEST-KERNEL", run:
sudo grub-set-default 1
sudo reboot
After this reboot, the system comes up into "TEST-KERNEL" (per /proc/cmdline), but /boot/grub/default will contain '0', indicating that on the subsequent boot, the FAILSAFE kernel will run. In this way, if your kernel failed to boot all the way up, you can then just issue:
euca-reboot-instances i-15b77779
And you'll boot back into the FAILSAFE kernel.
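A quick way to see which stanza you actually booted is to look for the marker strings that were added to the kernel lines above:
$ grep -oE 'FAILSAFE|TEST-KERNEL' /proc/cmdline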
The above basically allows you to manually manage your kernels while letting grub-legacy-ec2 still write entries to /boot/grub/menu.lst.
I chose to use hardlinks for the 'failsafe' kernels, so that even on dpkg removal, the files would still exist. Because the 10.10 Ubuntu kernels have the EC2 network and disk drivers built in, you'll still be able to boot even after a dpkg removal of the failsafe kernel or an errant 'rm -Rf /lib/modules/2*'
Friday, January 7, 2011
Using euca2ools rather than ec2-api-tools with EC2
In the UEC images, the most notable packages left out are 'ec2-api-tools' and 'ec2-ami-tools'. I personally use the ec2-api-tools and ec2-ami-tools quite frequently and Amazon has done a great job with them. However, the license and lack of source code prevents them from being in Ubuntu 'main'.
Fortunately
a.) There are packages made available in the Ubuntu 'multiverse' component.
b.) The euca2ools package is installed by default and provides an almost drop-in replacement for the ec2-api-tools and ec2-ami-tools.
I think that many users of EC2 aren't aware of the euca2ools, so I'd like to give some information on how to use them here.
The ec2-api-tools use the SOAP interface and thus use the "EC2_CERT" and "EC2_PRIVATE_KEY". The euca2ools sit on top of the excellent boto project. Boto uses the AWS REST API, which means authentication is done with your "Access Key" and "Secret Key". As a result, configuration is a little different. (Note: for bundling images, you still need the EC2_CERT and EC2_PRIVATE_KEY for encryption/signing.)
Configuration for euca2ools can be done via environment variables (EC2_URL, EC2_ACCESS_KEY, EC2_SECRET_KEY, EC2_CERT, EC2_PRIVATE_KEY, S3_URL, EUCALYPTUS_CERT) or via config file. I personally prefer the configuration file approach.
Here is my ~/.eucarc that is configured to operate with the EC2 us-east-1 region.
CRED_D=${HOME}/creds/aws-smoser
EC2_REGION="${EC2_REGION:-us-east-1}"
EC2_CERT=${CRED_D}/cert.pem
EC2_PRIVATE_KEY=${CRED_D}/pk.pem
EC2_ACCESS_KEY=ABCDEFGHIJKLMNOPQRST
EC2_SECRET_KEY=UVWXYZ0123456789abcdefghijklmnopqrstuvwx
EC2_USER_ID=950047163771
EUCALYPTUS_CERT=/etc/ec2/amitools/cert-ec2.pem
EC2_URL=https://ec2.${EC2_REGION}.amazonaws.com
S3_URL=https://s3.amazonaws.com:443
Things to note above:
- euca2ools sources the ~/.eucarc file with bash, and then reads out the values of EC2_REGION, EC2_CERT, EC2_PRIVATE_KEY, EC2_ACCESS_KEY, EC2_USER_ID, EC2_URL, S3_URL. This means that you can use other bash functionality in the config file, as I've done above with 'EC2_REGION'. This allows me to do something like:
EC2_REGION=us-west-1 euca-describe-images
- If there is no configuration file specified with '--config', then those values will be read from environment variables
- Amazon's public certificate from the ami tools is included with euca2ools in ubuntu, and located in /etc/ec2/amitools/cert-ec2.pem
- Many of the euca2ools commands will run significantly faster than the ec2-api-tools. The reason for the slowness of the ec2-api-tools is their many Java dependencies (please correct me if I'm wrong).
- Your ~/.eucarc file contains credentials, and therefore it should be protected with filesystem permissions (i.e. 'chmod go-r ~/.eucarc').