Thursday, March 8, 2012

Migrating AWS CentOS Linux images to a KVM environment.

For some reason, I would like to port an AWS EC2 Linux image from the AWS environment to an in-house KVM environment.

This time I am choosing an EBS-backed CentOS template, centos-x64-6.0-core (ami-03559b6a). I think the mechanism below would work the same for an instance-store template as well.

I quickly launched an instance from ami-03559b6a and then logged into it to have a glance.
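For reference, launching such an instance from the AWS CLI would look roughly like this (the key pair name and instance type are illustrative; the AWS console works just as well):

$ aws ec2 run-instances --image-id ami-03559b6a --instance-type t1.micro --key-name my-key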



[root@ip-10-195-81-211 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1            6.0G 1010M  4.7G  18% /
tmpfs                 296M     0  296M   0% /dev/shm

[root@ip-10-195-81-211 ~]# fdisk -l

Disk /dev/xvde: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaae7682d

    Device Boot      Start         End      Blocks   Id  System
/dev/xvde1   *           1         783     6289416   83  Linux


It is an instance running on a Xen hypervisor, and the root disk is 6 GB. In order to clone the root disk, I created an EBS volume slightly larger than 6 GB; in this example I added a 10 GB EBS volume.
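The EBS volume can be created and attached either from the AWS console or with the AWS CLI, roughly as below (the volume ID, instance ID, and availability zone are placeholders; a device name of /dev/sdj typically shows up as /dev/xvdj inside the guest):

$ aws ec2 create-volume --size 10 --availability-zone us-east-1a
$ aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdj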



[root@ip-10-195-81-211 ~]# fdisk -l

Disk /dev/xvde: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaae7682d

    Device Boot      Start         End      Blocks   Id  System
/dev/xvde1   *           1         783     6289416   83  Linux

Disk /dev/xvdj: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdj doesn't contain a valid partition table


So now /dev/xvde is the root disk and /dev/xvde1 is the partition in use, while /dev/xvdj is the newly attached disk with no partition created on it yet.

We will now create a file system on /dev/xvdj and then create a raw disk image on top of that.


[root@ip-10-195-81-211 ~]# mkfs.ext4 /dev/xvdj
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@ip-10-195-81-211 ~]# mkdir /target
[root@ip-10-195-81-211 ~]# mount /dev/xvdj /target


Running dd will take some time, depending on the disk image size. In this example we are going to create a disk image of 6442 MB (the same size as /dev/xvde shown in fdisk).

[root@ip-10-195-81-211 ~]# time dd if=/dev/zero of=/target/diskimage.raw bs=1M count=6442
6442+0 records in
6442+0 records out
6754926592 bytes (6.8 GB) copied, 178.33 s, 37.9 MB/s

real    2m58.363s
user    0m0.010s
sys    0m9.586s
[root@ip-10-195-81-211 ~]# ls -la /target/diskimage.raw
-rw-r--r-- 1 root root 6754926592 Apr 12 02:33 /target/diskimage.raw
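
Side note: if you would rather skip this lengthy zero-fill, dd can create a sparse file of the same size almost instantly. The later clone step will still write the full 6442 MB into it, so the saving is only in this step. A possible alternative:

[root@ip-10-195-81-211 ~]# dd if=/dev/zero of=/target/diskimage.raw bs=1M count=0 seek=6442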





Then use losetup to set the disk image up as the loopback device /dev/loop0.

[root@ip-10-195-81-211 ~]# losetup -fv /target/diskimage.raw
Loop device is /dev/loop0


Use dd to clone /dev/xvde to /dev/loop0; this will take some time as well.

[root@ip-10-195-81-211 ~]# time dd if=/dev/xvde of=/dev/loop0
12582912+0 records in
12582912+0 records out
6442450944 bytes (6.4 GB) copied, 705.782 s, 9.1 MB/s

real    11m45.844s
user    0m9.865s
sys    1m24.088s
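
The slow speed here is mostly due to dd's default 512-byte block size; specifying a larger block size usually speeds the copy up considerably, for example:

[root@ip-10-195-81-211 ~]# dd if=/dev/xvde of=/dev/loop0 bs=1M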


Once the dd copy is done, we mount the disk image, chroot into it, and replace the kernel on it. One trick here is that we need to know the partition offset so that we mount from the correct location in the image. To find the offset, check the fdisk output of /dev/xvde again:

[root@ip-10-195-81-211 ~]# fdisk -l /dev/xvde

Disk /dev/xvde: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaae7682d

    Device Boot      Start         End      Blocks   Id  System
/dev/xvde1   *           1         783     6289416   83  Linux

From the output above, the partition starts right after the first track, and a track is 63 sectors of 512 bytes each, so the offset into the image is 63*512 = 32256 bytes. We mount the partition to /clone.

[root@ip-10-195-81-211 ~]# mkdir /clone
[root@ip-10-195-81-211 ~]# mount -o loop,offset=$((63*512)) /target/diskimage.raw /clone
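
If you want to double-check the offset, fdisk can print the partition boundaries in sectors instead of cylinders; it should report /dev/xvde1 starting at sector 63, i.e. 63*512 = 32256 bytes:

[root@ip-10-195-81-211 ~]# fdisk -lu /dev/xvde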


Comparing / and /clone, they should look the same here (they will be slightly different in practice, as there are some runtime files on /).

[root@ip-10-195-81-211 ~]# df -h / /clone
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1            6.0G 1010M  4.7G  18% /
/dev/loop1            6.0G 1001M  4.7G  18% /clone


As the original kernel of the instance is a Xen paravirtualized kernel, we have to install the generic kernel, otherwise it won't be able to boot in our KVM environment. So we chroot into the new virtual partition /clone and install the generic kernel.
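
Note that for yum to work from inside the chroot it usually needs the pseudo filesystems and DNS resolution from the host; a minimal sketch, assuming the standard locations (remember to unmount these binds again before unmounting /clone later):

[root@ip-10-195-81-211 ~]# mount -o bind /dev /clone/dev
[root@ip-10-195-81-211 ~]# mount -o bind /proc /clone/proc
[root@ip-10-195-81-211 ~]# mount -o bind /sys /clone/sys
[root@ip-10-195-81-211 ~]# cp /etc/resolv.conf /clone/etc/resolv.conf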

[root@ip-10-195-81-211 ~]# chroot /clone
[root@ip-10-195-81-211 /]# yum -y install kernel kernel-devel


Once the kernel is installed, we may need to edit /boot/grub/grub.conf (menu.lst) so that it boots the correct kernel. In this example, I added the first title block below to define the new kernel (you will have to check the path of the new kernel yourself).

[root@ip-10-195-81-211 /]# cat /boot/grub/menu.lst
#===========
default=0
timeout=2
title CentOS (6.0 - /boot/vmlinuz-2.6.32-220.7.1.el6.x86_64)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.32-220.7.1.el6.x86_64 ro root=/dev/vda1
    initrd /boot/initramfs-2.6.32-220.7.1.el6.x86_64.img
title CentOS (6.0 - vmlinuz-2.6.32-131.17.1.el6.x86_64)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.32-131.17.1.el6.x86_64 ro root=/dev/xvde1
    initrd /boot/initramfs-2.6.32-131.17.1.el6.x86_64.img
#===========


Please note that the original root device above was /dev/xvde1, while the new device will be /dev/vda1 (if you are going to use the paravirtualized virtio driver on the KVM hypervisor) or /dev/sda1 (if you are going to run it with the generic driver).

We also need to replace /dev/xvde with /dev/vda in /etc/fstab.

[root@ip-10-195-81-211 /]# grep "/dev/xvde" /etc/fstab
/dev/xvde1    /        ext4    defaults    1 1
[root@ip-10-195-81-211 /]# sed -i s%/dev/xvde%/dev/vda% /etc/fstab
[root@ip-10-195-81-211 /]# grep "/dev/vda" /etc/fstab
/dev/vda1    /        ext4    defaults    1 1


Once the above preparation is done, we can exit the chroot environment, unmount the virtual partition, and detach the loopback device.

[root@ip-10-195-81-211 ~]# umount /clone
[root@ip-10-195-81-211 ~]# losetup -d /dev/loop0 


To load the disk image on our standalone KVM server, we have to scp/sftp the disk image from the EC2 instance to our on-premise server. As the image itself is fairly large, we may want to compress it with gzip or another compression tool before sending it over.
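For example (the KVM host name and destination path are placeholders):

[root@ip-10-195-81-211 ~]# gzip /target/diskimage.raw
[root@ip-10-195-81-211 ~]# scp /target/diskimage.raw.gz root@kvm-host:/var/lib/libvirt/images/

Then gunzip it on the KVM host before use.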


Because the image is a raw image, we need to start the VM with the raw disk option. The following steps show how to do this via virt-manager on the KVM hypervisor node (a command-line alternative is sketched after the list).

1. Create a new VM, importing an existing disk image.



2. Point to the path of the raw image. One thing to be careful about here: we have to choose the proper OS Type (Linux) and Version (RHEL/CentOS 6), otherwise virt-manager won't present the disk as a paravirtualized device.


3. Define the memory and CPU cores assigned to this VM; nothing special here.

4. The proper architecture has to be selected (x86_64 or i686).
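
If you prefer the command line over virt-manager, a virt-install invocation along these lines should achieve the same result (the VM name, memory size, and image path below are illustrative):

# virt-install --name centos6-from-ec2 --ram 1024 --vcpus 1 \
      --arch x86_64 --os-variant rhel6 \
      --disk path=/var/lib/libvirt/images/diskimage.raw,format=raw,bus=virtio \
      --import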


Once the VM is configured, we can start it; however, it basically won't boot due to a GRUB failure, likely because the original paravirtualized image has no usable boot loader in its MBR. We will need to boot it from a rescue CD / CentOS installation CD and issue the commands below to mount the volume and install GRUB.

# mkdir /vm; mount /dev/vda1 /vm
# mount -o bind /dev /vm/dev
# mount -o bind /sys /vm/sys
# chroot /vm
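
Depending on the rescue environment, grub-install may also complain about a missing /proc inside the chroot; if it does, bind it from the rescue shell before chrooting:

# mount -o bind /proc /vm/proc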

Then run grub-install and the associated GRUB shell commands.

# grub-install /dev/vda
# grub
grub> device (hd0) /dev/vda
device (hd0) /dev/vda
grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
 setup (hd0)
  Checking if "/boot/grub/stage1" exists... yes
  Checking if "/boot/grub/stage2" exists... yes
  Checking if "/boot/grub/e2fs_stage1_5" exists... yes
  Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 26 sectors are embedded. successed.
  Running "install /boot/grub/stage1 (hd0) (hd0)1+26 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.conf "... succeeded
Done.

grub> quit
#

After the above tasks, the VM is basically ready for a reboot and should be able to boot. However, before rebooting, you may also want to reset the root password so that you can log into the VM once it is up.
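
For example, while still chrooted into /vm from the rescue environment:

# passwd root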
