QCOW 3 Ways

How to mount QCOW images as Linux block devices

tl;dr
  • guestmount (requires libguestfs-tools): sudo guestmount -d <vm-name> --ro -i <mountpoint>
  • qemu-nbd (requires the nbd driver)
    • Load the kernel module: modprobe nbd max_part=8
    • Bind the device to the image: qemu-nbd --connect=/dev/nbd0 <vmdiskimage.qcow>
    • Assuming partition #1 is the target: mount /dev/nbd0p1 /a
  • loopback mount (requires converting qcow to raw)
    • Convert qcow to raw: qemu-img convert vmdisk.qcow2 -f qcow2 -O raw vmdisk.raw
    • Create a loopback device: losetup -f -P vmdisk.raw
    • Locate the name of the loopback device: losetup -l | grep vmdisk.raw
    • Mount (assuming partition #1 on loopback device 99): mount /dev/loop99p1 /a
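
For example, here is the qemu-nbd flow end to end, including the cleanup steps; the image name, device, and mount point are assumptions, so adjust to taste:

sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 vmdisk.qcow2
# list the partitions the kernel found on the image
lsblk /dev/nbd0
sudo mount /dev/nbd0p1 /a
# ... inspect files under /a ...
sudo umount /a
sudo qemu-nbd --disconnect /dev/nbd0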

Create a Linux VM with KVM in 6 easy steps

Step 1. Install KVM

“KVM” is shorthand for several technologies, primarily KVM itself, QEMU, and libvirt. As with most tasks, life is considerably easier with the right tools. I suggest installing the following.

sudo apt install -y qemu qemu-kvm libvirt-daemon libvirt-clients bridge-utils virt-manager cloud-image-utils libguestfs-tools

Though not strictly necessary, rebooting after installing the above might save you some headaches.

sudo reboot
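
If you want to confirm the host is ready before going further, a quick sanity check helps (virt-host-validate ships with the libvirt packages installed above):

virt-host-validate
# or, at minimum, confirm the KVM modules are loaded
lsmod | grep kvm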

Step 2. Download a base image

Download a bootable image to create a VM clone from. I prefer Ubuntu cloud images for this task. Choose your favorite flavor of a released build from the Ubuntu cloud image released builds page. You will probably want an amd64 build.

wget https://cloud-images.ubuntu.com/releases/bionic/release/ubuntu-18.04-server-cloudimg-amd64.img

Step 3. Set a password for the new VM

The cloud images come preloaded with a user named ubuntu; however, no password is set for that user, so it is impossible to log in. To overcome this, you will need to create a text file containing your own password, convert it to a magical image file, and then pass that image file in as part of the VM creation. Don’t worry, it’s easier than it sounds.

cat >user-data.txt <<EOF
#cloud-config
password: secretpassword
chpasswd: { expire: False }
ssh_pwauth: True
EOF

Then create the image file. We will use the user-data.img file in the virt-install step.

cloud-localds user-data.img user-data.txt
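
As an aside, cloud-localds also accepts an optional meta-data file as a third argument, which is a handy place to set the hostname at the same time; the names below are just examples:

cat >meta-data.txt <<EOF
instance-id: ubuntu-vm
local-hostname: ubuntu-vm
EOF

cloud-localds user-data.img user-data.txt meta-data.txt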

Step 4. Create a writable clone of the boot drive

So far we have a bootable but read-only image file of our chosen Linux OS, and a custom override file that will set a password for the ubuntu user. Next we need to create a writable “disk” for our VM to boot from. We also probably want our root disk to be larger than 2GB. We can do both of those things using qemu-img. In the example below, the file ubuntu-vm-disk.qcow2 will become our boot disk, 20G in size.

qemu-img create -b ubuntu-18.04-server-cloudimg-amd64.img -F qcow2 -f qcow2 ubuntu-vm-disk.qcow2 20G
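
To double-check that the clone points back at the base image, qemu-img info is handy; expect output along these lines (exact fields vary by qemu version):

qemu-img info ubuntu-vm-disk.qcow2
# image: ubuntu-vm-disk.qcow2
# file format: qcow2
# virtual size: 20 GiB (21474836480 bytes)
# backing file: ubuntu-18.04-server-cloudimg-amd64.img
# backing file format: qcow2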

Step 5. Create a running VM

Now we need to turn that disk image into a running VM. To do that we use virt-install. As part of the virt-install command line we pass in the customization disk user-data.img, which contains details of the customizations we want (namely, to set a password). Other things like the VM name, memory, and number of vCPUs are set here too. As part of this command the VM will boot and present a console. You can log in from here.

virt-install --name ubuntu-vm \
  --virt-type kvm --memory 2048 --vcpus 2 \
  --boot hd,menu=on \
  --disk path=ubuntu-vm-disk.qcow2,device=disk \
  --disk path=user-data.img,format=raw \
  --graphics none \
  --os-type Linux --os-variant ubuntu18.04 

Step 6. Enjoy your virtual machine

You are now the proud owner of a virtual machine. As long as the VM is running you can connect to it using the command virsh console ubuntu-vm. The username is ubuntu and the password is secretpassword, unless you changed the text in user-data.txt.
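
A few standard virsh lifecycle commands you will likely want next:

virsh console ubuntu-vm    # reattach to the serial console (exit with Ctrl+])
virsh shutdown ubuntu-vm   # ask the guest to shut down gracefully
virsh start ubuntu-vm      # boot it again
virsh destroy ubuntu-vm    # hard power-off
virsh undefine ubuntu-vm --remove-all-storage   # delete the VM and its disks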

Example

gary@dellboy:~$ mkdir vmtmp

gary@dellboy:~$ cd vmtmp

gary@dellboy:~/vmtmp$ wget https://cloud-images.ubuntu.com/releases/bionic/release/ubuntu-18.04-server-cloudimg-amd64.img
--2022-09-10 19:45:42--  https://cloud-images.ubuntu.com/releases/bionic/release/ubuntu-18.04-server-cloudimg-amd64.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 185.125.190.37, 185.125.190.40, 2620:2d:4000:1::1a, ...
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|185.125.190.37|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 389349376 (371M) [application/octet-stream]
Saving to: ‘ubuntu-18.04-server-cloudimg-amd64.img’

ubuntu-18.04-server-cloudim 100%[========================================>] 371.31M  15.9MB/s    in 24s

2022-09-10 19:46:06 (15.7 MB/s) - ‘ubuntu-18.04-server-cloudimg-amd64.img’ saved [389349376/389349376]

gary@dellboy:~/vmtmp$ cat >user-data.txt <<EOF
#cloud-config
password: secretpassword
chpasswd: { expire: False }
ssh_pwauth: True
EOF

gary@dellboy:~/vmtmp$ cloud-localds user-data.img user-data.txt

gary@dellboy:~/vmtmp$ qemu-img create -b ubuntu-18.04-server-cloudimg-amd64.img -F qcow2 -f qcow2 ubuntu-vm-disk.qcow2 20G
Formatting 'ubuntu-vm-disk.qcow2', fmt=qcow2 size=21474836480 backing_file=ubuntu-18.04-server-cloudimg-amd64.img backing_fmt=qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16

gary@dellboy:~/vmtmp$ virt-install --name ubuntu-vm \
  --virt-type kvm --memory 2048 --vcpus 2 \
  --boot hd,menu=on \
  --disk path=ubuntu-vm-disk.qcow2,device=disk \
  --disk path=user-data.img,format=raw \
  --graphics none \
  --os-type Linux --os-variant ubuntu18.04 

...VM boots....

[   13.327063] cloud-init[1160]: Cloud-init v. 22.2-0ubuntu1~18.04.3 running 'modules:final' at Sat, 10 Sep 2022 23:51:04 +0000. Up 13.18 seconds.
[   13.328572] cloud-init[1160]: Cloud-init v. 22.2-0ubuntu1~18.04.3 finished at Sat, 10 Sep 2022 23:51:05 +0000. Datasource DataSourceNoCloud [seed=/dev/vdb][dsmode=net].  Up 13.32 seconds
[  OK  ] Started Execute cloud user/final scripts.
[  OK  ] Reached target Cloud-init target.

Ubuntu 18.04.6 LTS ubuntu ttyS0

ubuntu login: ubuntu <---- enter ubuntu as username
Password: <---- enter secretpassword as password
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-192-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sat Sep 10 23:51:55 UTC 2022

  System load:  0.45              Processes:             104
  Usage of /:   5.7% of 19.20GB   Users logged in:       0
  Memory usage: 6%                IP address for enp1s0: 192.168.122.188
  Swap usage:   0%

0 updates can be applied immediately.



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ubuntu:~$



Using cloud-init with AHV command line

TL;DR

  • Using cloud-init with AHV is conceptually identical to using it with KVM/QEMU; we just need a few different tools with AHV
  • You will need a Linux image that is configured to use cloud-init. A good source is cloud-images.ubuntu.com
  • We will create a cloud-init text file and create a mountable version of it using the cloud-localds tool on a Linux host
  • We will attach the cloud-init enabled Ubuntu image and our cloud-init customization file to the VM at boot time
  • At boot time Ubuntu will access the cloud-init data mounted as a CD-ROM and do the customization for us (a rough sketch of the flow follows below)
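
A rough sketch of that flow, with the seed image built on a Linux host and the rest run from a CVM. The acli arguments below are from memory and should be checked against your AOS version; image names and URLs are placeholders:

# on a Linux host: build the seed image exactly as with KVM/QEMU
cloud-localds user-data.img user-data.txt

# on a CVM: register the cloud image and the seed, then wire up a VM
acli image.create ubuntu-cloud source_url=<http-url-of-cloud-image> container=default
acli image.create cloudinit-seed source_url=<http-url-of-user-data.img> container=default
acli vm.create ubuntu-vm num_vcpus=2 memory=2G
acli vm.disk_create ubuntu-vm clone_from_image=ubuntu-cloud
acli vm.disk_create ubuntu-vm cdrom=true clone_from_image=cloudinit-seed
acli vm.on ubuntu-vm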

Comparing RDS and Nutanix Cluster performance with HammerDB

tl;dr

In a recent experiment I compared an Amazon RDS instance with a VM running in an on-prem Nutanix cluster, both using Skylake-class processors with similar clock speeds and vCPU counts. The SQL Server database on Nutanix delivered almost 2X the transaction rate of the same workload running on Amazon RDS.

It turns out that migrating an existing SQL Server VM to RDS using the same vCPU count as on-prem may yield only half the expected performance for CPU-heavy database workloads. The root cause is how Amazon counts vCPUs compared to on-prem: on AWS a vCPU is a single hyperthread, not a full physical core.

Benchmark Results

HammerDB results from RDS and Nutanix

Single threaded DB performance on Nutanix HCI

tl;dr

A Nutanix cluster can persist a replicated write across two nodes in around 250 microseconds, which is critical for single-threaded DB write workloads. The performance compares very well with hosted cloud database instances using the same class of processor (db.r5.4xlarge in the figure below). The metrics below are for SQL insert transactions, not the underlying IO.

Single threaded commit heavy insert rates. Latency as seen from SQL insert statement.

AHV Tip: Shutdown multiple VMs in parallel

Often in my lab I want to shut down a large number of VMs quickly. In the example below I submit the power-off command for up to 50 VMs in parallel. Be aware that we’re using the command line, and in line with true Unix philosophy the OS will assume we know what we are doing and obey us completely and immediately. In other words, pasting the command below into your CVM will immediately shut down all powered-on VMs (up to 50 of them).

for i in $(acli vm.list power_state=on | awk '{ print $(NF) }' | tail -50); do acli vm.off $i & done
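
If you want the prompt back only once every power-off has actually completed, add the shell's wait builtin after the loop:

for i in $(acli vm.list power_state=on | awk '{ print $(NF) }' | tail -50); do acli vm.off $i & done; wait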

How to deploy Ubuntu cloud images to Nutanix AHV

In this example we use the KVM cloud image from the Canonical Ubuntu image repository. More information on Ubuntu cloud images is on the Canonical cloud image page. More detail on the cloud image boot process and cloud-init is here: Ubuntu UEC/Images.

We can use the Ubuntu cloud image catalog, and specifically an image that has been built to run on KVM. Since AHV is based on KVM/QEMU, Nutanix can use that image format directly without any further conversion.

Using a cloud image can be a quicker way to stand up a particular version of Linux without having to go through the Linux installation process (choosing usernames, keyboard types, timezones, etc.). However, you will need to pass in a public key so that you can log in to the instance once it has booted, as in the sketch below.
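
For example, a minimal cloud-config that injects a key for the default ubuntu user; the key string is a placeholder for your own public key:

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA...replace-with-your-public-key... user@host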


Nutanix Performance for Database Workloads

We’ve come a long way, baby.

Full disclosure. I have worked for Nutanix in the performance engineering group since 2013. My opinions are likely biased, but that also gives me a decent amount of context when it comes to the performance of Nutanix storage over time. We already have a lot of customers running database workloads on Nutanix. But what about those high-performance databases still running on traditional storage?

I dug out a chart that I presented at .Next in 2017 and added to it the performance of a modern platform (AOS 6.0 and an NVMe+SSD platform). For this random read microbenchmark, performance has more than doubled. If you took a look at an HCI system even a few years back and decided that performance wasn’t where you needed it, there’s a good chance that the HW+SW systems shipping today could meet your needs.

Much more detail below.


Using rwmixread and rate_iops in fio

Creating a mixed read/write workload with fio can be a bit confusing. Assume we want to create a fixed-rate workload of 100 IOPS, split 70:30 between reads and writes.

Don’t mix rwmixread and rate_iops
TL;DR

Specify the rate directly with rate_iops=<read-rate>,<write-rate>; do not try to combine rwmixread with rate_iops. For the example above, use:

rate_iops=70,30 
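
A minimal job file using that approach might look like this; the target file, size, and block size are just examples:

[global]
ioengine=libaio
direct=1
time_based
runtime=60

[mixed-100iops]
filename=/tmp/fio-testfile
size=1g
rw=randrw
bs=8k
rate_iops=70,30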

Additionally, older versions of fio exhibit problems when using rate_process=poisson with rate_iops; fio version 3.7, which I was using, did not exhibit the problem.


Cross rack network latency in AWS

I have VMs running on bare-metal instances. Each bare-metal instance is in a separate rack by design (for fault tolerance). The bandwidth is 25GbE; however, the response time between the hosts is high enough that I need multiple streams to consume that bandwidth.

Compared to my local on-prem lab, I need many more streams to get the observed throughput close to the theoretical bandwidth of 25GbE.

# iperf Streams    AWS Throughput    On-Prem Throughput
1                  4.8 Gbit          21.4 Gbit
2                  9 Gbit            22 Gbit
4                  18 Gbit           22.5 Gbit
8                  23 Gbit           23 Gbit

Difference in throughput for a 25GbE network on-premises vs. AWS cloud (inter-rack)
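
To reproduce the measurement, the stream count maps to iperf's parallel-streams flag; for example with iperf3 (the host name is a placeholder):

iperf3 -s                              # on the receiving host
iperf3 -c <receiver-host> -P 8 -t 30   # on the sender: 8 parallel streams, 30 seconds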