Migrate VMware ESXi Virtual Machines to Proxmox KVM with LVM-Thin Logical Volumes

Recently we decided to move away from VMware ESXi because we want to scale out but don't want to buy expensive licenses just for virtualization. We evaluated different solutions and settled on Proxmox and its KVM virtualization. We have always used Debian-based VMs, so KVM was a logical choice. Proxmox also allows us to use containers and cluster-based migrations if we want to.

A problem we ran into was that we had to migrate some ESXi VMs with their vmdk disk files to the new LVM-Thin-based storage method introduced in Proxmox. I really like the idea that each virtual machine has its own logical volume on the bare-metal disk, which is much easier to maintain with the host's LVM.

The problem, however, was: how do we migrate vmdk disk files to a logical volume?

I'll show you the solution here, as well as a wrinkle we stumbled upon.

Copying the vmdk disk files to Proxmox

First, let's assume that you have an ESXi host running at 10.10.10.2 and a Proxmox host (I assume v4.x with Debian Jessie) at 10.10.10.3.

Because you need root rights for the later dd command, I assume you're root the whole time. If not, use sudo where necessary.

We could now create an RSA key pair for SSH, but as we already had separate users on the ESXi machine, it was easier to just use password authentication for pulling the vmdk files over to Proxmox.
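
In case you'd rather go the key-based route, a minimal sketch could look like the following; note that on ESXi 5.x and later the root user's authorized keys usually live in /etc/ssh/keys-root/authorized_keys, which may differ on your version:

~ # ssh-keygen -t rsa
~ # cat ~/.ssh/id_rsa.pub | ssh root@10.10.10.2 'cat >> /etc/ssh/keys-root/authorized_keys'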

You have to stop the VM on ESXi first; otherwise you'd get inconsistencies in the vmdk disk file. Besides, ESXi won't let you scp the file of a running VM anyway and aborts with a Device or resource busy error. I also assume that you have flat disks, i.e. they're not thin provisioned, as thin provisioning would result in multiple vmdk files you'd first have to merge together. After the VM has been shut down or stopped, pull the vmdk file over.
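
If you'd rather stop the VM from the ESXi shell than from the vSphere client, vim-cmd can do it; the VM ID 42 below is just a placeholder that getallvms will reveal:

~ # vim-cmd vmsvc/getallvms
~ # vim-cmd vmsvc/power.shutdown 42
~ # vim-cmd vmsvc/power.off 42

The power.shutdown variant asks the guest for a graceful shutdown via VMware Tools, while power.off simply pulls the plug. Once the VM is off, continue with the scp: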

~ # cd /tmp
/tmp # scp root@10.10.10.2:/vmfs/volumes/datastore1/<yourVMname>/<yourVMname>-flat.vmdk <yourVMname>-flat.vmdk

Replace <yourVMname> with the actual name of the VM. The datastore name datastore1 may differ on your system.

The vmdk file will now be copied over from the ESXi host to the /tmp folder of the Proxmox server. Please keep in mind that you need enough free space in /tmp before copying over the VM; choose another folder if it doesn't fit there.
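
A quick way to check the available space beforehand:

/tmp # df -h /tmp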

Migrate VMDK to RAW disk files

The next step is really straightforward. Since we want the disk contents and all of its partitions in the logical volume later, we first have to convert the vmdk file to a RAW disk image. This can be done directly on the Proxmox host using qemu-img:

/tmp # qemu-img convert -O raw <yourVMname>-flat.vmdk <yourVMname>.raw

Please keep in mind that you now temporarily need the VM's disk size twice, since the raw image sits next to the vmdk!
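
To sanity-check the conversion, qemu-img info prints the format and virtual size of the resulting image; the virtual size should match the old VM's disk size:

/tmp # qemu-img info <yourVMname>.raw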

Recreate the VM on Proxmox

The next step is to recreate the ESXi VM on the Proxmox host. Just create one with the same amount of RAM and CPUs and the same disk size. The latter is rather important: it should be at least the size of the old VM's disk, for safety reasons maybe a bit larger.

Please remember the VM ID of the new Proxmox VM. It will be something like 101 or 112, depending on how many VMs you've already created.
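
If you prefer the shell over the web UI, the VM can also be created with qm. The values below are only examples (ID 117 to match the rest of this article, 2 GiB RAM, 2 cores, a 6 GiB disk on a thin storage I assume is named local-lvm); adjust them to your old VM:

~ # qm create 117 --name <yourVMname> --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0 --virtio0 local-lvm:6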

Back on the command line, check the exact name of the logical volume into which we want to copy the VM's raw disk.

~ # lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/vm-117-disk-1
  LV Name                vm-117-disk-1
  VG Name                pve
  LV UUID                n17mvl-A7MC-KuaY-I8As-Xlzc-R5kc-PlXU33
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-03-12 16:21:02 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                6.00 GiB
  Mapped size            0.0%
  Current LE             1536
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:25

In our case the new VM has the ID 117.
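
If the full lvdisplay output is too verbose, lvs gives a compact overview of all volumes in the pve volume group; for thin volumes its Data% column corresponds to the mapped size:

~ # lvs pve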

Copying the raw disk to the LVM Logical Volume

Since dd doesn't provide a progress bar itself, I installed pv and gave it the size of the image file, so we get a progress bar while copying the data. Run the following command in the folder where your raw disk image lies; in our case that's /tmp. Adjust the size given to pv via -s to your image size:

/tmp # dd if=<yourVMname>.raw | pv -s 6G | dd of=/dev/pve/vm-117-disk-1
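
As a space-saving alternative sketch (we didn't use it, but it should work): qemu-img can write directly to the block device, which skips the intermediate raw file entirely, and its -p flag prints a progress bar:

/tmp # qemu-img convert -p -O raw <yourVMname>-flat.vmdk /dev/pve/vm-117-disk-1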

Afterwards you can check the fill level of the logical volume again. Please check the "Mapped size" value:

~ # lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/vm-117-disk-1
  LV Name                vm-117-disk-1
  VG Name                pve
  LV UUID                n17mvl-A7MC-KuaY-I8As-Xlzc-R5kc-PlXU33
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-03-12 16:21:02 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                6.00 GiB
  Mapped size            83.33%
  Current LE             1536
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:25

So you can see that the raw disk image filled the logical volume to about 83%. If disk resources are scarce, you could recreate the VM with a smaller logical volume.

After successfully copying the image, just start the new VM!
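
Starting it from the shell works as well:

~ # qm start 117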

Additional information about disk controllers and other hardware

In our case we successfully migrated Debian Squeeze and Jessie machines from ESXi to Proxmox, moving from LSI SAS disk controllers to virtio. We also moved from E1000 and VMXNET3 network controllers to virtio, which didn't cause any problems at all.
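
If the new VM was accidentally created with a different NIC model, switching to virtio afterwards is a one-liner with qm; vmbr0 is the default Proxmox bridge and may be named differently on your host:

~ # qm set 117 --net0 virtio,bridge=vmbr0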

For the disk controllers we hit a little wrinkle on Jessie machines. When you move Squeeze machines, the virtio driver is automatically loaded at boot time, the new disk is recognized immediately, and the system boots from it without a hitch. On Jessie machines this is not the case.

I'm not really sure about the root cause here, but I think the initramfs generated by Jessie only contains the modules/drivers it needs for the actual machine it's running on. The solution to successfully boot the machine from a virtio disk was to add the modules and recreate the initramfs with a forced load of the additional modules. The following commands have to be run on the (to-be-migrated) VM while it's still on ESXi:

~ # vim /etc/initramfs-tools/modules
# Add all modules we want to have available in the initial ramdisk
virtio
# virtio_blk is the virtio block-device driver
virtio_blk
virtio_ring
virtio_pci
virtio_net
~ # update-initramfs -u -v

The latter command updates the initramfs with verbose output, so you can check whether the virtio modules have been included in the new ramdisk.
I assume that loading the virtio_blk module alone would be enough to fix the problem, but I haven't tested it yet, as we needed the other modules anyway.
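
Instead of scrolling through the verbose output, you can also list the contents of the generated initramfs directly and grep for the virtio modules:

~ # lsinitramfs /boot/initrd.img-$(uname -r) | grep virtio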

After the migration and a successful boot of the old VM on the new Proxmox host, you can safely delete the copied vmdk file and its converted raw counterpart, as well as the whole VM on the ESXi host.
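
On the Proxmox side that boils down to removing the two images; on ESXi, vim-cmd can unregister and delete the VM (again, 42 is a placeholder ID):

/tmp # rm <yourVMname>-flat.vmdk <yourVMname>.raw
~ # vim-cmd vmsvc/destroy 42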

Conclusion

Initially I thought this could be a tough problem, but after some googling and experimenting the solution was rather simple and logical. The mentioned virtio_blk module problem was harder and took me a couple of hours to figure out. I hope I'm able to help other people with their migration, as this should work quite similarly with other source hypervisors like VirtualBox.

If you have questions or need more information about thin-provisioned disks or anything else, let me know in the comments!