Ubuntu 9.04 Desktop
Just did an install of the 9.04 desktop. Very clean. I intend to do some testing with KVM, VirtualBox, and Win4Lin.
If you install a VM with vmbuilder without virtio, the swap entry in /etc/fstab will reference /dev/sda2 instead of /dev/vda2, so swap never comes up once the guest boots with virtio drivers. You can use blkid to find the UUIDs of each device:
nic@vm-base:~$ sudo blkid
/dev/vda1: UUID="bac299c4-c545-46ca-aed3-26da4a56f6d7" TYPE="ext3"
/dev/vda2: TYPE="swap" UUID="0c75b2dd-6c6f-4729-b041-0d95475dc171"
/dev/vdb: UUID="jIkLcQ-zXUo-KIWR-zvmm-cpKP-9PpT-eE9RY3" TYPE="lvm2pv"
nic@vm-base:~$ cat /etc/fstab
# /etc/fstab: static file system information.
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/sda1 / ext3 defaults 0 0
UUID="0c75b2dd-6c6f-4729-b041-0d95475dc171" swap swap defaults 0 0
Referencing the UUID in /etc/fstab rather than the device path gets swap working regardless of the driver type.
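The same trick applies to the root filesystem, which is still device-named above. A fully UUID-based fstab, using the UUIDs from the blkid output, might look something like this (a sketch; mount options kept at the vmbuilder defaults):

# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
UUID=bac299c4-c545-46ca-aed3-26da4a56f6d7 / ext3 defaults 0 0
UUID=0c75b2dd-6c6f-4729-b041-0d95475dc171 swap swap defaults 0 0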
Recent issues and 18 months of experience have shown me that shared storage for a small (2-4 node) virtualisation cluster is hard.
It is still practical and possible to build a shared storage cluster using Solaris, rather than buying an expensive storage appliance. But you must always spec at least two storage nodes; otherwise you lock yourself into a structure that is difficult to change, and that includes upgrades.
Virtualisation technology has moved a long way in the last couple of years, and while I’m still formulating a new approach, there are now many more options for flexible micro virtualisation clusters.
I’ve got a couple of XenServer VMs that I never migrated to ESX; time and complexity got in the way. In fact, one of the reasons I dislike XenServer is that the way it ran disk images was not portable, whereas with ESX and KVM I can migrate disk images between the two hypervisors without needing to change anything in the guest. With XenServer at the time this was not easy. XenOSS has a similar issue with PV domains, although KVM+xenner is meant to be able to run these.
Anyway, here are a few links for converting disk images between formats.
I found in the end, though, that kvm-img (or qemu-img) was able to handle all the image formats I use: VHD, VMDK, RAW, and QCOW2. For example, kvm-img convert -O raw disk.vhd disk.raw will work.
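As a rough sketch of the workflow (the file names here are just examples, not ones from my setup):

# Show the detected format and virtual size of an image
kvm-img info disk.vhd
# VHD -> raw
kvm-img convert -O raw disk.vhd disk.raw
# raw -> qcow2, if you want snapshots and a smaller file on disk
kvm-img convert -O qcow2 disk.raw disk.qcow2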
This leads to the nicest thing I found about KVM. Taking Ubuntu 8.04 or 8.10 VMDK files from either ESXi 3.5 or vSphere 4, I was able to run kvm-img convert -O raw disk.vmdk disk.raw, then boot the new raw disk under KVM with virtio drivers, without any changes in the guest.
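A minimal sketch of that round trip (the kvm flags, memory size, and file names are illustrative, not the exact invocation I used):

# Convert the VMDK exported from ESX to a raw image
kvm-img convert -O raw disk.vmdk disk.raw
# Boot it under KVM with a virtio disk and virtio network
kvm -m 512 -drive file=disk.raw,if=virtio,boot=on -net nic,model=virtio -net tap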
This truly is disposable computing!
I’ve started running KVM recently and I’ll post a review at some point. I’m finding it very flexible and much, much easier to use than Xen.
There are still a few open questions regarding file caching and disk images, but in general I’m happy that it’s ready for production.
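The caching question comes down to how the host page cache interacts with the guest’s disk image; KVM lets you choose per drive. For example (a sketch, with a hypothetical image name):

# cache=none opens the image with O_DIRECT, bypassing the host page cache
kvm -m 512 -drive file=guest.img,if=virtio,cache=none
# Alternatives: cache=writethrough (safer, slower) or cache=writeback (faster, riskier)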