How to measure database scaling & density on the Nutanix HCI platform.

How can database density be measured?

  • How does database performance behave as more DBs are consolidated?
  • What impact does running the CVM have on available host resources?

tl;dr

  • The cluster was able to achieve ~90% of the theoretical maximum.
  • CVM overhead was 5% for this workload.

Experiment setup

The goal was to establish how database performance is affected as additional database workloads are added to the cluster. As a secondary metric, we measure the overhead of running the virtual storage controller (CVM) on the same hosts as the database servers themselves. We use a Postgres database with the pgbench workload and measure the total transactions per second.

Cluster configuration

  • 4-node Nutanix cluster, with 2x Xeon CPUs per host and 20 cores per socket (160 physical cores in total).

Database configuration

Each database is identically configured with:

  • Postgres 9.3
  • Ubuntu Linux
  • 4 vCPU
  • 8GB of memory
  • pgbench benchmark, running the “simple” query set.

Each database is sized so that it fits entirely in memory; this is a test of CPU/memory, not IO.
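
A minimal sketch of that sizing, written as Ansible-style YAML (the same mechanism used with X-Ray later on this page); the inventory group, database name and the -N flag choice are assumptions for illustration, not the actual workload files:

  # Minimal sketch: build a pgbench dataset small enough to stay in RAM,
  # then run a CPU-bound benchmark sized to the VM's 4 vCPU.
  - hosts: postgres_vms              # hypothetical inventory group
    become: yes
    become_user: postgres
    tasks:
      - name: Create the benchmark database
        command: createdb pgbench_db

      - name: Initialize pgbench data (scale 100 is ~1.5GB, well inside 8GB of RAM)
        command: pgbench -i -s 100 pgbench_db

      - name: Run pgbench for 10 minutes with 4 clients / 4 threads (one per vCPU)
        # -N runs the "simple update" built-in script; an assumption about
        # which "simple" query set is meant here.
        command: pgbench -N -c 4 -j 4 -T 600 pgbench_db
        register: pgbench_result

      - name: Show the transactions-per-second summary printed by pgbench
        debug:
          var: pgbench_result.stdout_lines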

Experiment steps

The experiment starts with a single database on a single host. We then add databases to the cluster until we reach 40 in total. At 40 databases with 4 vCPU each and a CPU-bound workload, we use all 160 CPU cores in the cluster.

Each database is configured to fit into host DRAM, and the benchmark runs as fast as it can – the workload is CPU-bound.

Results

Below are the measured results from running 1-40 databases on the 4 node cluster.

Performance scales almost linearly from 4 to 160 vCPUs, with no obvious bottlenecks before all of the host CPU cores are saturated at 40 databases.
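
One way to express the "~90% of the theoretical maximum" figure from the tl;dr, assuming the theoretical maximum is taken to be perfect linear scaling of the single-database result, is as a scaling efficiency:

  \text{efficiency} = \frac{TPS_{40}}{40 \times TPS_{1}} \approx 0.9

where TPS_1 is the transaction rate of a single database and TPS_40 is the aggregate rate with 40 databases running.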

Scaling from 1 database to 40 on a 4-node cluster.

Measuring CPU performance with X-Ray and pgbench.

Nutanix X-Ray is well known for being able to model IO/storage workloads, but what about workloads that are CPU-bound?

X-Ray can run Ansible scripts on the X-Ray worker VMs, and by doing so we are able to provision almost any application. For our purposes we are going to use a Postgres database and its built-in benchmarking tool, pgbench. I have deliberately created a very small database which fits into the VM memory and does almost no IO.
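
As a flavour of what such a provisioning script can look like (a hypothetical sketch, not the actual workload files linked below), installing Postgres on the Ubuntu worker VMs takes only a couple of tasks:

  # Hypothetical provisioning sketch for the X-Ray worker VMs (Ubuntu):
  # install Postgres from the distribution packages and make sure the
  # service is running before pgbench is invoked.
  - hosts: xray_worker_vms           # hypothetical inventory group
    become: yes
    tasks:
      - name: Install PostgreSQL and the contrib package that ships pgbench
        apt:
          name:
            - postgresql
            - postgresql-contrib
          state: present
          update_cache: yes

      - name: Ensure the PostgreSQL service is running
        service:
          name: postgresql
          state: started
          enabled: yes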

The X-Ray workload files can be found here.

X-Ray interface to pgbench

Using standard X-Ray YAML, we are able to pass custom parameters such as how many Postgres VMs to deploy, and how many clients and threads pgbench should run.
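
For illustration, such parameters might be declared as variables in the scenario YAML along these lines (a sketch only; the real workload files may use different field names and structure):

  # Illustrative only: variables of the kind a scenario can expose so the
  # operator can change them from the X-Ray UI when launching the test.
  vars:
    postgres_vm_count:
      default: 4                     # how many Postgres VMs to deploy
    pgbench_clients:
      default: 4                     # pgbench -c, client connections per VM
    pgbench_threads:
      default: 4                     # pgbench -j, worker threads per VM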

When X-Ray runs the workload, the results are displayed in the X-Ray UI. This time, though, the metric is database transactions per second rather than IOPS or storage throughput.

pgbench transactions per second are displayed in the X-Ray UI.

By running a variety of experiments, altering the number of VMs running the workload, I was able to plot the aggregate transactions/s and the per-VM value, which, as expected, decreases once the host CPU is saturated.

I was able to use the CPU-bound nature of this particular workload to look at the scheduling and CPU usage characteristics of different hypervisors and CPU types. I found that one combination gave better performance at low loads, while the other gave better performance under higher loads.

Per-VM transaction count drops sharply once the number of VMs exceeds the CPU capacity of the host.

These sorts of experiments are quite straightforward using X-Ray and Ansible. A special shout-out goes to GV, who created the custom exporter that sends the Postgres transactions/s results back to X-Ray in real time.

Impact of Data locality on DB workloads.

Effect of removing CPU constraints and maintaining data locality on a running DB instance.

In this video I migrate a Postgres DB running the pgbench benchmark. The DB is initially running on a host which is CPU constrained. Once the VM is migrated to a less busy host, the transaction rate immediately increases from ~15,000 to ~20,000 TPS. As the DB continues to run on the new host, the Nutanix storage layer detects the access patterns and “localizes” the data that the DB is accessing. Over the subsequent minutes the transaction rate increases to ~30,000 TPS.

The variation in the transaction rate is due to the benchmark itself; the transaction rate is not expected to be uniform. Many different queries execute in parallel, some hitting the RAM cache, some hitting storage.

N.B. The Postgres DB is totally untuned and uses purely default settings. The aim of the experiment was to see how data locality might affect a running database workload, not to generate the maximum TPS.