Measuring CPU performance with X-Ray and pgbench.

Nutanix X-Ray is well known for being able to model IO/Storage workloads, but what about workloads that are CPU bound?

X-Ray can run Ansible scripts on the X-Ray worker VMs, and by doing so we can provision almost any application. For our purposes we are going to use a Postgres database and its built-in benchmarking tool, pgbench. I have deliberately created a very small database that fits entirely in the VM's memory and does almost no IO.
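To make the setup concrete, here is a rough sketch of what such an Ansible play might look like when run against the X-Ray worker VMs. It is illustrative only, not the actual playbook from the workload files linked below; the package and service names and the small scale factor are assumptions.

```yaml
# Illustrative sketch only - not the actual playbook from the X-Ray workload files.
# Package/service names and the small scale factor (~1.5 GB at scale 100) are assumptions.
- hosts: xray_workers
  become: true
  tasks:
    - name: Install PostgreSQL server and contrib (contrib provides pgbench)
      ansible.builtin.package:
        name:
          - postgresql
          - postgresql-contrib
        state: present

    - name: Ensure the PostgreSQL service is running
      ansible.builtin.service:
        name: postgresql
        state: started
        enabled: true

    - name: Initialise a small pgbench database that fits in guest memory
      become_user: postgres
      ansible.builtin.shell: |
        createdb pgbench || true
        pgbench -i -s 100 pgbench
```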

The X-Ray workload files can be found here.

X-Ray interface to pgbench

Using standard X-Ray YAML, we can pass custom parameters such as how many Postgres VMs to deploy, and how many clients and threads pgbench should run with.
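The exact schema varies between X-Ray releases, but the vars section of a scenario file looks roughly like the sketch below. The variable names and defaults are illustrative placeholders, not the values from the published workload files.

```yaml
# Illustrative sketch of an X-Ray scenario "vars" section - names/defaults are placeholders.
name: pgbench_cpu_scaling
display_name: "pgbench CPU scaling"
vars:
  postgres_vms_per_node:
    default: 4        # how many Postgres VMs to deploy per host
  pgbench_clients:
    default: 8        # passed to pgbench -c (concurrent client sessions)
  pgbench_threads:
    default: 4        # passed to pgbench -j (worker threads)
  runtime_secs:
    default: 600      # passed to pgbench -T (test duration in seconds)
```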

When X-Ray runs the workload, the results are displayed in the X-Ray UI. This time, though, the metric is database transactions per second rather than IOPS or storage throughput.

pgbench transactions per second displayed in the X-Ray UI.

By running a variety of experiments, altering the number of VMs running the workload, I was able to plot the aggregate transactions per second and the per-VM value, which, as expected, decreases once the host CPU is saturated.

I used the CPU-bound nature of this particular workload to look at the scheduling and CPU usage characteristics of different hypervisors and CPU types. One combination gave better performance at low loads, while the other gave better performance at higher loads.

Per-VM transaction count drops sharply once the number of VMs exceeds the CPU capacity of the host.

These sorts of experiments are quite straightforward with X-Ray and Ansible. A special shout-out goes to GV, who created the custom exporter that sends the Postgres transactions-per-second results back to X-Ray in real time.

View from Nutanix storage during Postgres DB benchmark

Following on from the previous [1] [2] experiments with Postgres and pgbench, here is a quick look at how the workload is seen from the Nutanix CVM.

The Linux VM running Postgres has two virtual disks:

  • One takes the transaction log writes.
  • The other handles reads and writes to the main datafiles.

Since the database is small (about 50% of the Linux VM's RAM), the data is mostly cached inside the guest, so most reads never hit storage. As a result, we see almost only writes going to the DB files.

Additionally, the database datafile writes arrive in a bursty fashion, and these write bursts are roughly 10x more intense than the log file writes.

Charts from Prometheus/Grafana showing IO rates seen from the perspective of the Linux guest VM

Despite the database flushes occurring in bursts with a fair amount of concurrency, the Nutanix CVM delivers an average write response time of 1.5 ms.

From the Nutanix CVM port 2009 handler we can access the individual vdisk statistics. In this particular case, vDisk 45269 is the datafile disk and vDisk 40043 is the database transaction log disk.

Datafile writes complete in 1.5 ms on average, despite deep queues during bursts.

The vdisk categorizer correctly identifies the database datafile write pattern as highly random.

Writes to the database datafiles are almost entirely random

As a result, the writes are passed into the replicated oplog.

The burst of writes hits the oplog as expected

Meanwhile, the log writes are categorized as mostly sequential, which is expected for a database log file workload.

Meanwhile, log file writes are mostly categorized as sequential.

Even though the log writes are sequential, they are low-concurrency and small (mostly 16K-32K). This write pattern is also a good candidate for oplog.

These low-concurrency log writes also hit oplog

Benchmarking with Postgres PT2

In this example we run pgbench with a scale factor of 1000, which equates to a database size of around 15 GB. The Linux VM has 32 GB of RAM, so we don't expect to see many reads.
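Each unit of pgbench scale adds about 100,000 rows (roughly 15 MB) to pgbench_accounts, which is where the ~15 GB figure comes from at scale 1000. As a sketch, the initialise and run steps might look like the tasks below; the client count, thread count, and runtime are placeholders, not the exact values used in these tests.

```yaml
# Sketch only - client/thread counts and runtime are placeholders.
- name: Initialise pgbench at scale 1000 (~15 GB database)
  become: true
  become_user: postgres
  ansible.builtin.shell: pgbench -i -s 1000 pgbench

- name: Run pgbench, reporting progress every 10 seconds
  become: true
  become_user: postgres
  ansible.builtin.shell: pgbench -c 16 -j 4 -T 600 -P 10 pgbench
```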

Using Prometheus with the Linux node exporter, we can see the disk IO pattern from pgbench. As expected, the write pattern to the log disk (sda) is quite constant, while the write pattern to the database files (sdb) is bursty.
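For reference, wiring this up only needs a node exporter on the pgbench VM and a scrape job in Prometheus. The sketch below is a minimal example with a placeholder target address; the per-device write rate shown in the charts can then be graphed from the node_disk_written_bytes_total counter (metric names vary slightly between node exporter versions).

```yaml
# Sketch: minimal prometheus.yml scrape job for the node exporter on the pgbench VM.
# The target address is a placeholder; node_exporter listens on port 9100 by default.
# Per-device write throughput can then be graphed with a query such as:
#   rate(node_disk_written_bytes_total{device=~"sd[ab]"}[1m])
scrape_configs:
  - job_name: "pgbench_vm"
    static_configs:
      - targets: ["10.0.0.50:9100"]
```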

pgbench with DB size 50% of Linux buffer cache.

I had to tune the parameter checkpoint_completion_target from 0.5 to 0.9; otherwise the SCSI stack became overwhelmed during checkpoints, causing log writes to stall.
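One way to apply that change, sketched as an Ansible task and assuming local psql access as the postgres OS user; checkpoint_completion_target only needs a configuration reload, not a restart, to take effect.

```yaml
# Sketch: spread checkpoint writes over more of the checkpoint interval (0.5 -> 0.9).
# Assumes local superuser access via the postgres OS user; a reload is sufficient.
- name: Raise checkpoint_completion_target and reload the configuration
  become: true
  become_user: postgres
  ansible.builtin.shell: |
    psql -c "ALTER SYSTEM SET checkpoint_completion_target = 0.9;"
    psql -c "SELECT pg_reload_conf();"
```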

Default pgbench settings – notice the sharp drop in log writes before tuning.