Understanding QEMU devices

Not sure where I came across this, but it is an excellent description of QEMU (and of virtualization in general). I am very much a fan of this style of technical communication, as exemplified in this final summary paragraph (the full article is longer):

In summary, even though QEMU was first written as a way of emulating hardware memory maps in order to virtualize a guest OS, it turns out that the fastest virtualization also depends on virtual hardware: a memory map of registers with particular documented side effects that has no bare-metal counterpart. And at the end of the day, all virtualization really means is running a particular set of assembly instructions (the guest OS) to manipulate locations within a giant memory map for causing a particular set of side effects, where QEMU is just a user-space application providing a memory map and mimicking the same side effects you would get when executing those guest instructions on the appropriate bare metal hardware.

https://www.qemu.org/2018/02/09/understanding-qemu-devices/

Effects of CPU topology on SQL Server guests with AHV

VM CPU Topology

The topology (layout) in which AHV presents virtual sockets/CPUs to the guest operating system will usually differ from the physical topology. This is expected, because we typically present a subset of all cores to the guest VMs.

Usually it is the total number of vCPUs given to the VM that matters, not the specific topology, but in the case of SQL Server running an analytical workload (a TPC-H-like workload from HammerDB) the topology passed to the VM does make a difference: between 10% and 20% when measured by total runtime.

[I think that the reason we see a difference here is that (a) the analytical workloads use hardly any storage bandwidth (I sized the database to fit in memory) and (b) there is probably a lot of cross-talk between the cores/memory as the DB engine issues parallel queries.]

At any rate, we see that passing 20 vCPUs as “20 sockets of 1 core” beats “1 socket with 20 cores” by a wide margin. The physical topology is two sockets with 20 cores each. Thankfully, the better-performing option is the default.
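As a quick sanity check (a sketch; exact lscpu output varies by distribution and version), you can confirm the layout the guest actually sees from inside the VM:

$ lscpu | grep -E '^CPU\(s\)|Core\(s\) per socket|Socket\(s\)'
CPU(s):              20
Core(s) per socket:  1
Socket(s):           20

This is what the default “20 sockets of 1 core” layout looks like; the slower “1 socket with 20 cores” layout would instead report Socket(s): 1 and Core(s) per socket: 20.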

CPU topology may make a difference for SQL Server running analytical workloads.

QCOW 3 Ways

How to mount QCOW images as Linux block devices

tl;dr
  • guestmount (requires libguestfs-tools): sudo guestmount -d <vm-name> --ro -i <mountpoint>
  • qemu-nbd (requires the nbd kernel module; a full sequence is sketched after this list)
    • Load the kernel module: modprobe nbd max_part=8
    • Bind the device to the image: qemu-nbd --connect=/dev/nbd0 <vmdiskimage.qcow>
    • Mount (assuming partition #1 is the target): mount /dev/nbd0p1 /a
  • loopback mount (requires converting the qcow image to raw)
    • Convert qcow to raw: qemu-img convert -f qcow2 -O raw vmdisk.qcow2 vmdisk.raw
    • Create a loopback device: losetup -f -P vmdisk.raw
    • Locate the name of the loopback device: losetup -l | grep vmdisk.raw
    • Mount (assuming partition #1 on loopback device 99): mount /dev/loop99p1 /a
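Putting the qemu-nbd path together, a minimal end-to-end session might look like this (the image name, partition number, and mountpoint are placeholders; adjust for your setup):

# modprobe nbd max_part=8
# qemu-nbd --connect=/dev/nbd0 vmdiskimage.qcow2
# mount /dev/nbd0p1 /a
# ls /a
# umount /a
# qemu-nbd --disconnect /dev/nbd0

The final --disconnect matters: leaving the image attached to an nbd device and then starting the VM against it is a good way to corrupt the qcow2 file.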

Duplicate IP issues with Linux and virtual machine cloning

TL;DR – Some modern Linux distributions use a newer method of machine identification (/etc/machine-id) which, when combined with DHCP, can result in duplicate IP addresses when cloning VMs, even when the VMs have unique MAC addresses.

To resolve, do the following (remove the file, run the systemd-machine-id-setup command, reboot):

# rm /etc/machine-id
# systemd-machine-id-setup
# reboot

When hypervisor management tools make clones of virtual machines, they usually make sure to create a unique MAC address for every clone. Combined with DHCP, this is normally enough for the clones to boot and receive unique IPs. Recently, when I cloned several Bitnami guest VMs, which are based on Debian, I started to get duplicate IP addresses on the clones. The issue can be resolved manually by following the procedure above.
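A quick way to confirm that this is the cause (the id below is an illustrative placeholder; real values are 32 hex characters and will differ) is to compare /etc/machine-id across the clones:

# cat /etc/machine-id
0123456789abcdef0123456789abcdef

If every clone prints the same id, the clones share an identity: a DHCP client that derives its client identifier from the machine id (systemd-networkd does this by default) will present the same identifier from every clone, so the DHCP server hands out the same lease regardless of the unique MAC addresses.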

To create a VM template to clone from, which will generate a new machine-id for every clone, simply create an empty /etc/machine-id file (do not rm the file, otherwise the machine-id will not be generated):

# echo "" |  tee /etc/machine-id 

The machine-id man page is a well-written explanation of the implementation and motivation.