Getting to Partitions on QCOW2 Images
This condensed recipe applies to qcow2 disk images. The image partitions can be mounted on the host for read/write access using QEMU's Network Block Device (NBD) server. The VM should be shut down for this procedure.
First, check that you have /dev/nbd devices available. The nbd module's max_part option sets how many partitions are visible per device, so load the module with a value that covers your needs:
modprobe nbd max_part=16
Export the image and look for its partitions. Use the full path name to the image:
# qemu-nbd --connect=/dev/nbd0 /path/to/image.qcow2
# fdisk -l /dev/nbd0
You may need to probe for partitions (partprobe /dev/nbd0), but I haven't had to.
The qemu-nbd command will create an nbd device for each partition, for example /dev/nbd0p1. You can try to mount one of these partitions now, but if the disk image belongs to a Linux guest it may fail, because mount can't work directly with LVM partitions:
mount: unknown filesystem type 'LVM2_member'
See the section "Mounting Logical Volumes" below for that procedure.
Once you've finished with the image, reverse the order of the steps above. I don't know whether it's actually necessary to deactivate the volume group, but you definitely should disconnect the QEMU block device after unmounting the partition:
# umount /mnt/some-root
# vgchange -an VolGroup00
# qemu-nbd --disconnect /dev/nbd0
Getting to Partitions on Raw Images
Raw disk images are just that: a binary disk image in a file. You can see the partitions with fdisk:
# fdisk -l /path/to/disk.img

Disk disk.img: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00068f41

    Device Boot      Start         End      Blocks   Id  System
disk.img1   *         2048     1026047      512000   83  Linux
disk.img2          1026048    20971519     9972736   8e  Linux LVM
Multiply the number under the Start column by 512 and you'll get the byte offset of the partition inside the file. You can use that number to mount a non-LVM partition.
To mount the first partition on a directory called /mnt/p1 in the example above:
mount -o loop,offset=1048576 disk.img /mnt/p1
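The offset arithmetic is easy to check in the shell. A quick sketch using the Start value for disk.img1 from the fdisk listing above:

```shell
# Byte offset = starting sector * sector size; this is the value
# passed to mount -o loop,offset=...
sector_size=512
start_sector=2048          # "Start" column for disk.img1
offset=$((start_sector * sector_size))
echo "$offset"             # prints 1048576
```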
The second partition is a logical volume, so you can't mount it directly. Instead, map that partition to a loopback device (1026048 sectors * 512 bytes = 525336576):
# losetup -f
/dev/loop1
# losetup /dev/loop1 disk.img -o 525336576
Then you can apply the LVM procedures in the next section to get at the volumes.
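Pulling the start sector out of the fdisk output can be scripted too, the same way the bash module at the end of this post does it. A sketch that uses the sample listing above as canned input; against a real image you'd pipe in fdisk -l instead:

```shell
# Find the Linux LVM (type 8e) partition's start sector and convert to a byte offset.
fdisk_output='disk.img1   *         2048     1026047      512000   83  Linux
disk.img2          1026048    20971519     9972736   8e  Linux LVM'

lvm_start=$(printf '%s\n' "$fdisk_output" | awk '/8e.*LVM/ {print $2}')
lvm_offset=$((lvm_start * 512))
echo "$lvm_offset"         # prints 525336576, the offset used with losetup above
```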
Mounting Logical Volumes
To mount LVM partitions, the volume group(s) first need to be activated. There's some good info at David Hilley's blog, which I've simplified a bit here.

First, scan the system for LVM block devices, also known as physical volumes. You'll see the PVs of the running system as well as the new NBD volumes.
# lvm pvscan
  PV /dev/nbd0p2   VG VolGroup00     lvm2 [7.69 GiB / 0 free]
PV /dev/sda2 VG vg_scott-kvm lvm2 [135.47 GiB / 0 free]
PV /dev/sdb1 VG vg_scott-kvm lvm2 [951.78 GiB / 0 free]
For this example, VolGroup00 is the one we want to get to. A volume group is just that: a collection of logical volumes. The first step in getting to the LVs inside is to activate the volume group:
# vgchange -ay VolGroup00
Next, scan the system for logical volumes. (Note that the lvm scan/list commands pick up the LV devices on the host as well as those from the QEMU guest image.)
# lvs
LV VG Attr LSize [...]
LvLogs VolGroup00 -wi-a--- 3.66g
LvRoot VolGroup00 -wi-a--- 3.59g
LvSwap VolGroup00 -wi-a--- 448.00m
lv_home vg_scott-kvm -wi-ao-- 48.84g
lv_root vg_scott-kvm -wi-ao-- 50.00g
lv_swap vg_scott-kvm -wi-ao-- 49.09g
lv_var vg_scott-kvm -wi-ao-- 939.31g
The new logical volumes are now available as mountable filesystem devices. Take a look in /dev/mapper. The names of the device files there should be in the format VG-LV, where VG is the volume group name and LV is the name of the logical volume.
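One wrinkle with those names: device-mapper escapes any hyphen inside a VG or LV name by doubling it, so vg_scott-kvm/lv_root appears as /dev/mapper/vg_scott--kvm-lv_root. A small sketch of the mapping (the dm_path function name is mine):

```shell
# Build the /dev/mapper path for a VG/LV pair, doubling embedded
# hyphens the way device-mapper does.
dm_path() {
    vg=${1//-/--}
    lv=${2//-/--}
    printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

dm_path VolGroup00 LvRoot       # prints /dev/mapper/VolGroup00-LvRoot
dm_path vg_scott-kvm lv_root    # prints /dev/mapper/vg_scott--kvm-lv_root
```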
Once you are finished examining or modifying your logical volume, be sure to unmount it and deactivate:
# umount /mnt/some-root
# vgchange -an VolGroup00
# losetup -d /dev/loop1
So, it's pretty simple, right? There may be some nuances I'm not aware of, so chime in if your experience is different.
Here's a hacky bash module that provides two functions: the first attaches the image's LVM partition to a loop device, and the second mounts the root volume. It's hacky because I cheated with global vars and replaced the actual cleanup code in my module with a comment. Those things should be fixed before adding capabilities to this code.
#!/bin/bash

cleanup() {
    echo "Cleaning up..."
    # unmount, deactivate volume group, detach loop device
}

# usage: setupRootPart /path/to/image
# return: loop device via global var loopdev
setupRootPart() {
    declare -g loopdev
    imagefile=$1
    offset_blocks=$(fdisk -l "$imagefile" | grep '8e.*LVM' | awk '{print $2}')
    offset=$((offset_blocks * 512))
    loopdev=$(losetup -f)
    echo "Attaching $imagefile to $loopdev at offset $offset"
    if ! losetup "$loopdev" "$imagefile" -o "$offset"; then
        echo "Failed to set up loop device"
        cleanup
        exit 2
    fi
}

# usage: mountRootVol /path/to/mount/dir
# return: volume name via global var vol
mountRootVol() {
    declare -g vol    # global var to return volume name to the caller
    mountpoint=$1
    vol=$(lvm pvscan | grep "$loopdev" | awk '{print $4}')
    vgchange -ay "$vol"
    [[ -d $mountpoint ]] || mkdir -p "$mountpoint"
    # TODO need smarter way to find voldev
    voldev=/dev/mapper/$vol-root
    [[ -L $voldev ]] || {
        echo "No $voldev"
        echo "Mapped vols:"
        ls -l /dev/mapper
        cleanup
        exit 3
    }
    echo "Mounting $voldev on $mountpoint"
    mount "$voldev" "$mountpoint"
}
Cheers!
Hello, when I execute lvscan, I cannot see the nbd partition.
Disk /dev/nbd0: 171.8 GB, 171798691840 bytes, 335544320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0003e189
Device Boot Start End Blocks Id System
/dev/nbd0p1 * 2048 1026047 512000 83 Linux
/dev/nbd0p2 1026048 335544047 167259000 8e Linux LVM
[root@rdo7computenode1 ~]# lvm pvscan
PV /dev/sda3 VG centos lvm2 [<2.18 TiB / 0 free]
Total: 1 [<2.18 TiB] / in use: 1 [<2.18 TiB] / in no VG: 0 [0 ]
This image is from my OpenStack instance, qcow2 format. Any ideas?
Thanks,
Eguene