I had to try a few things before I could get this right, so I thought I'd write about it. These steps are what ultimately worked for me. I had tried several other things to no success, which I'll list at the end of the post.
If you have Elastic Compute Cloud (EC2) instances on the "previous generation" paravirtualization-based (PV) instance types, and want to convert them to the new/cheaper/faster "current generation" HVM instance types with SSD storage, this is what you have to do:
You'll need a donor Elastic Block Store (EBS) volume so you can copy data from it. Either shut down the old instance and detach its EBS volume, or, as I did, snapshot the old system and then create a new volume from the snapshot, so that you can make mistakes without worrying about losing data. (I was also moving my instances to a cheaper data center, which I could only do by moving snapshots around.) If you choose to create a new volume, make a note of which Availability Zone (AZ) you create it in.
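If you prefer the command line to the console, the snapshot-and-volume steps above can be sketched with the AWS CLI. The IDs, regions, and zone below are placeholders; substitute your own, and make sure the new volume's AZ matches where your new instance will live.

```shell
# Snapshot the old PV root volume (placeholder volume ID)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "donor copy of old PV root"

# Optional: copy the snapshot to a cheaper region (placeholder regions/ID)
aws ec2 copy-snapshot --source-region us-west-1 --region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0

# Once the snapshot completes, create a fresh donor volume from it,
# in the AZ where the new HVM instance will run
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a --volume-type gp2
```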
Create a new EC2 instance of the desired instance type, configured with a new EBS volume set up the way you want it. Use a base image that's as similar to what you currently have as possible. Make sure you're using the same base OS version and CPU type, and that your instance is in the same AZ as your donor EBS volume. I also mounted the ephemeral storage, as a way to quickly roll back if I messed up without having to recreate the instance from scratch.
Attach your donor EBS volume to your new instance as sdf/xvdf, and then mount it on a new directory I'll call /donor
mkdir /donor && mount /dev/xvdf /donor
Suggested: Mount your ephemeral storage on /mnt
mount /dev/xvdb /mnt
and rsync / to /mnt
rsync -aPx / /mnt/
If something goes wrong in the next few steps, you can reverse it by running
rsync -aPx --delete /mnt/ /
to revert to a known working state. The rsync options tell rsync to copy in (a)rchive mode, which recursively copies files, links, and directories and preserves ownership, permissions, and modification times; to show (P)rogress and keep partially transferred files; and to not e(x)tend beyond a single file system (this leaves /proc, /sys, and your scratch and donor volumes alone).
Copy your /donor volume data to / by running
rsync -aPx /donor/ / --exclude /boot --exclude /etc/grub.d ...
You can include other excludes (use the paths where files would land on the final volume, not the paths in the donor system). The excluded paths above are for an Ubuntu system; replace /etc/grub.d with the path or paths where your distro keeps its bootloader configuration files. I found that excluding /boot alone was insufficient, because the files in /boot are merely linked to /etc/grub.d.
Now you should be able to reboot your instance into your new upgraded system. Do so, detach the donor EBS volume, and, if you used the ephemeral storage as a scratch copy, reset it as you prefer. Switch your Elastic IP or update your DNS configuration, test your applications, and then clean up your old instance artifacts. Congratulations, you're done.
Be careful of slashes. The rsync command treats /donor/ (copy the contents of the directory) differently from /donor (copy the directory itself into the destination).
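A quick local demonstration of the difference, using throwaway directories (the names slash/src, slash/a, and slash/b are made up for this sketch):

```shell
mkdir -p slash/src slash/a slash/b
touch slash/src/file

rsync -a slash/src/ slash/a/   # trailing slash: copies the *contents* of src
rsync -a slash/src  slash/b/   # no slash: copies the src directory itself

ls slash/a   # file
ls slash/b   # src
```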
Converting the EBS snapshot to an AMI, setting the AMI's virtualization type to HVM, and then launching a new instance from that AMI failed to boot. (I've had trouble with this on PV instances too with the Ubuntu base image unless I specified a specific kernel, so I'm not sure whether to blame HVM or the Ubuntu base images.)
Connecting a copy of the PV EBS volume to a running HVM system, copying /boot to the donor, and then replacing sda1 with the donor volume also failed to boot, though I think it might have worked if I'd copied /etc/grub.d too. This approach might not get you an SSD-backed EBS volume, though, if that's desirable.