Access guest drives in XenEnterprise
I’ve been meaning to put a few notes together on some work I did on Ubuntu XGT templates for XenEnterprise in June, but XE v4 supposedly has a new system so I’ll have to check that out first. I did note this blog entry a few days ago and decided to put a few tips of my own up.
The OSS version of Xen is more tightly integrated with the host operating system and can pass disk device nodes directly from the host to Linux guests. So /dev/sdb1 on the host could be /dev/sda1 in the guest. XenEnterprise acts more like VMware and passes LVM volumes through to the guests as whole disks. Just like with VMware, this makes it much trickier in XE to resize or act directly on a guest filesystem from the host.
If the guest is shut down, the best method for accessing the filesystem is to use kpartx or lomount, combined with losetup.
First find the details:
[root@node1 ~]# xe host-vm-list | grep feisty -A2
NAME: feisty_template
uuid: 9ff84c09-6802-4035-9db5-7c694f256988
state: DOWN
[root@node1 ~]# sm info | grep 9ff84c09-6802-4035-9db5-7c694f256988 -A3
------> VDI ID: [9ff84c09-6802-4035-9db5-7c694f256988.hda]
Name: [NULL] Descr: [DESCR]
Device: [/dev/VG_XenStorage-6fce01fb-8844-49a4-9b80-bf36ebee6109/LV-9ff84c09-6802-4035-9db5-7c694f256988.hda]
Shareable: [0] Virtsize: [10240 MB] Parent UUID: [6fce01fb-8844-49]
--
------> VDI ID: [9ff84c09-6802-4035-9db5-7c694f256988.hdb]
Name: [NULL] Descr: [DESCR]
Device: [/dev/VG_XenStorage-6fce01fb-8844-49a4-9b80-bf36ebee6109/LV-9ff84c09-6802-4035-9db5-7c694f256988.hdb]
Shareable: [0] Virtsize: [512 MB] Parent UUID: [6fce01fb-8844-49]
[root@node1 ~]# xe vm-disk-list vm-name=feisty_template
name: hda
size: 10240
min_size: 0
function: root
qos-value: (null)
name: hdb
size: 512
min_size: 1
function: USER
qos-value: (null)
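Those device paths are long and easy to mistype. A small sketch of pulling them out of the sm info output with grep and sed (the vdi_device function name is my own, and the pattern assumes the bracketed format shown above):

```shell
# Extract the Device: path(s) from `sm info`-style output on stdin.
# Hypothetical helper, tested only against the output format above.
vdi_device() {
    grep "Device:" | sed -n "s/.*Device: \[\(.*\)\].*/\1/p"
}
```

So `sm info | grep <uuid> -A3 | vdi_device` would print just the /dev/VG_XenStorage-... paths, ready to pass to losetup or lomount.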
Then access the drive directly, either with losetup and kpartx:
[root@node1 ~]# losetup /dev/loop0 /dev/VG_XenStorage-6fce01fb-8844-49a4-9b80-bf36ebee6109/LV-9ff84c09-6802-4035-9db5-7c694f256988.hda
[root@node1 ~]# fdisk -l /dev/loop0

Disk /dev/loop0: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

      Device Boot      Start         End      Blocks   Id  System
/dev/loop0p1   *           1        1305    10482381   83  Linux
[root@node1 ~]# kpartx -v -a /dev/loop0
add map loop0p1 : 0 20964762 linear /dev/loop0 63
[root@node1 ~]# ls -l /dev/mapper/ | grep loop
brw-rw---- 1 root disk 253, 15 Aug  8 11:30 loop0p1
[root@node1 ~]# mount -t xfs /dev/mapper/loop0p1 /mnt
[root@node1 ~]# ls -l /mnt/
total 100
drwxr-xr-x 2 root root 4096 May 30 02:16 bin
...
or lomount:
[root@node1 ~]# umount /mnt/
[root@node1 ~]# ls -l /dev/mapper/ | grep loop
brw-rw---- 1 root disk 253, 15 Aug  8 11:30 loop0p1
[root@node1 ~]# kpartx -v -d /dev/loop0
del devmap : loop0p1
[root@node1 ~]# ls -l /dev/mapper/ | grep loop
[root@node1 ~]# lomount -verbose -diskimage /dev/VG_XenStorage-6fce01fb-8844-49a4-9b80-bf36ebee6109/LV-9ff84c09-6802-4035-9db5-7c694f256988.hda -partition 1 -t xfs /mnt
mount -oloop,offset=32256 /dev/VG_XenStorage-6fce01fb-8844-49a4-9b80-bf36ebee6109/LV-9ff84c09-6802-4035-9db5-7c694f256988.hda -t xfs /mnt
[root@node1 ~]# ls -l /dev/mapper/ | grep loop
[root@node1 ~]# ls -l /mnt/
total 100
drwxr-xr-x 2 root root 4096 May 30 02:16 bin
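The offset=32256 that lomount's verbose output shows being passed to mount is nothing magic: it is just the first partition's start sector (63, as the kpartx map shows) multiplied by the 512-byte sector size. A quick check:

```shell
# lomount derives the loop offset from the partition table:
# partition 1 starts at sector 63, and a sector is 512 bytes.
start_sector=63
sector_size=512
echo $((start_sector * sector_size))   # prints 32256
```

Knowing this, you can do the same thing by hand with plain mount -oloop,offset=... for any partition whose start sector you read off fdisk.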
kpartx is probably more useful if you wish to resize or act on the partition device nodes directly, whereas lomount is quicker if you just want to act on the filesystem, say for a bootstrap install.
Finally to tidy up:
[root@node1 ~]# umount /mnt/
[root@node1 ~]# losetup -d /dev/loop0
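If you do this often, the losetup/kpartx round trip is easy to wrap in a couple of shell functions. A rough sketch of my own (not anything XE ships); it hardcodes /dev/loop0 and the first partition for simplicity, needs root, and you can set RUN=echo first to preview the commands instead of running them:

```shell
# Hypothetical wrapper around the losetup + kpartx steps above.
# RUN is empty by default (commands execute); RUN=echo previews them.
RUN="${RUN:-}"

xe_lv_mount() {   # usage: xe_lv_mount <lv-path> <mount-point>
    $RUN losetup /dev/loop0 "$1"
    $RUN kpartx -a /dev/loop0
    $RUN mount /dev/mapper/loop0p1 "$2"
}

xe_lv_umount() {  # usage: xe_lv_umount <mount-point>
    $RUN umount "$1"
    $RUN kpartx -d /dev/loop0
    $RUN losetup -d /dev/loop0
}
```

Then `xe_lv_mount /dev/VG_XenStorage-.../LV-....hda /mnt` followed later by `xe_lv_umount /mnt` covers the whole sequence.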
I picked up the details above from the Fedora wiki.
With XE v4 and the option of VHD-based disk file images, the above should still be useful.