Measuring CPU performance with X-Ray and pgbench.

Nutanix X-Ray is well known for being able to model IO/Storage workloads, but what about workloads that are CPU bound?

X-Ray can run Ansible scripts on the X-Ray worker VMs, and by doing so we are able to provision almost any application. For our purposes we will use a Postgres DB and its built-in benchmarking tool, pgbench. I have deliberately created a very small DB which fits into the VM memory and does almost no IO.
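As a rough illustration, a play along the lines below could be pushed to each worker VM to install Postgres and initialize a deliberately small pgbench database. This is a sketch only: the package and service names, the postgres user, and the scale factor are assumptions, not the actual playbooks (those are in the workload files linked below).

    # Sketch: provision Postgres and a small pgbench DB on each X-Ray worker VM.
    # Package/service names and the scale factor are assumptions for illustration.
    - hosts: pgbench_vms
      become: yes
      tasks:
        - name: Install PostgreSQL
          package:
            name: postgresql
            state: present

        - name: Ensure the Postgres service is running
          service:
            name: postgresql
            state: started

        - name: Initialize a small pgbench database (low scale so it fits in RAM)
          become_user: postgres
          shell: |
            createdb pgbench
            pgbench -i -s 100 pgbench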

The X-Ray workload files can be found here.

X-Ray interface to pgbench

Using standard X-Ray YAML we are able to pass custom parameters such as how many Postgres VMs to deploy and how many clients and threads pgbench should run.
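As a hypothetical illustration of the idea, the scenario exposes knobs roughly like the fragment below. The variable names and structure here are assumptions chosen for readability, not the exact X-Ray scenario schema; see the linked workload files for the real definitions.

    # Illustrative only: the kind of variables the scenario exposes.
    # Names and structure are assumptions, not the exact X-Ray schema.
    vars:
      postgres_vm_count:
        default: 4
      pgbench_clients:
        default: 16
      pgbench_threads:
        default: 4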

When X-Ray runs the workload, the results are displayed in the X-Ray UI. This time, though, the metric is database transactions per second rather than IOPS or storage throughput.

pgbench transactions per second displayed in the X-Ray UI.

By running a variety of experiments, altering the number of VMs running the workload, I was able to plot both the aggregate transactions/s and the per-VM value, which, as expected, decreases once the host CPU is saturated.

I was able to use the CPU-bound nature of this particular workload to look at the scheduling and CPU usage characteristics of different hypervisors and CPU types. I found that one combination gave better performance at low loads, while the other performed better under higher loads.

Per VM transaction count drops sharply once the number of VMs exceeds the CPU capacity of the host.

These sorts of experiments are quite straightforward using X-Ray and Ansible. A special shout-out goes to GV, who created the custom exporter that sends the Postgres transaction/s results back to X-Ray in real time.

Impact of Data locality on DB workloads.

Effect of removing CPU constraints and maintaining data locality on a running DB instance.

In this video I migrate a Postgres DB running the pgbench benchmark. The DB starts on a host which is CPU constrained. Once the VM is migrated to a less busy host, the transaction rate immediately increases from ~15,000 to ~20,000 TPS. As the DB continues to run on the new host, the Nutanix storage detects the access patterns and “localizes” the data that the DB is accessing. Over the subsequent minutes the transaction rate increases to ~30,000 TPS.

The variation in the transaction rate is due to the benchmark itself; the rate is not expected to be uniform. Many different queries execute in parallel, some hitting the RAM cache and some hitting storage.

N.B. The Postgres DB is completely untuned and uses only the default settings. The aim of the experiment was to see how data locality might affect a running database workload, not to generate the maximum TPS.

View from Nutanix storage during Postgres DB benchmark

Following on from the previous [1] [2] experiments with Postgres and pgbench, here is a quick look at how the workload is seen from the Nutanix CVM.

The Linux VM running postgres has two virtual disks:

  • One takes the transaction log writes.
  • The other handles reads and writes to the main datafiles.

Since the database is small (about 50% of the Linux VM's RAM), the data is mostly cached inside the guest, so most reads never hit storage. As a result we see only writes going to the DB files.
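For reference, the pgbench database size is driven by the -i -s scale factor (roughly 15 MB per scale unit, including indexes). A sketch like the one below could pick a scale that lands near 50% of guest RAM; the 15 MB-per-unit figure and the task structure are assumptions for illustration, while ansible_memtotal_mb is a standard Ansible fact.

    # Sketch: choose a pgbench scale factor so the DB is roughly half of guest RAM.
    # Assumes ~15 MB per scale unit; ansible_memtotal_mb is gathered automatically.
    - hosts: pgbench_vms
      become: yes
      become_user: postgres
      tasks:
        - name: Compute a scale factor for a DB around half the size of RAM
          set_fact:
            pgbench_scale: "{{ (ansible_memtotal_mb * 0.5 / 15) | int }}"

        - name: Re-initialize pgbench at the computed scale
          shell: "pgbench -i -s {{ pgbench_scale }} pgbench"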

Additionally, we see that the database datafile writes arrive in a bursty fashion, and that these write bursts are more intense (~10x) than the log file writes.

Charts from Prometheus/Grafana showing IO rates seen from the perspective of the Linux guest VM
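For anyone wanting to recreate these guest-side charts, the data can come straight from standard node_exporter disk metrics, assuming the guest runs node_exporter (the usual way to get such metrics into Prometheus). A minimal sketch of Prometheus recording rules for per-device write throughput and write IOPS is shown below; the group and rule names are arbitrary, and the device labels will differ per VM.

    # Sketch: Prometheus recording rules for guest disk write rates.
    # node_disk_* metrics are standard node_exporter metrics; rule names are arbitrary.
    groups:
      - name: guest_disk_io
        rules:
          - record: guest:disk_write_bytes:rate1m
            expr: rate(node_disk_written_bytes_total[1m])
          - record: guest:disk_write_iops:rate1m
            expr: rate(node_disk_writes_completed_total[1m])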

Despite the database flushes occurring in bursts with a fair amount of concurrency, the Nutanix CVM delivers an average write response time of 1.5 ms.

From the Nutanix CVM port 2009 handler, we can access the individual vdisk statistics. In this particular case, vDisk 45269 is the datafile disk and vDisk 40043 is the database transaction log disk.
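If you want to pull that page programmatically rather than through a browser, something along these lines works from an Ansible play. Here cvm_ip is a placeholder, and the per-vdisk pages are reached by following links from this top-level Stargate page; the exact per-vdisk handler paths are not shown.

    # Sketch: fetch the Stargate diagnostics page from a CVM.
    # cvm_ip is a placeholder; per-vdisk stats pages are linked from this page.
    - name: Fetch the CVM 2009 (Stargate) page
      uri:
        url: "http://{{ cvm_ip }}:2009/"
        return_content: yes
      register: stargate_page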

Datafile writes complete in 1.5 ms on average, despite deep queues during the bursts

The vdisk categorizer correctly identifies the database datafile write pattern as highly random.

Writes to the database datafiles are almost entirely random

As a result, the writes are passed into the replicated oplog.

The burst of writes hits the oplog as expected

Meanwhile, the log writes are categorized as mostly sequential, which is expected for a database log file workload.

Meanwhile, log file writes are mostly categorized as sequential.

Even though the log writes are sequential, they are low concurrency and small in size (mostly 16K-32K). This write pattern is also a good candidate for the oplog.

These low-concurrency log writes also hit oplog