How to run the Vertica vioperf tool

The Vertica vioperf tool is used to determine whether the storage you plan to use is fast enough to feed the Vertica database.

The vioperf tool runs on a Linux host or VM and can be pointed at any filesystem, just like fio or vdbench.

Simple execution of vioperf writing to the location /vertica
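A minimal sketch, assuming vioperf is installed under /opt/vertica/bin and that /vertica is the filesystem under test. Exact option names and argument formats vary between Vertica releases, so check vioperf --help first:

    # Run vioperf against /vertica with one worker thread per core
    # (the 16-thread count and 5-minute duration are just example values)
    /opt/vertica/bin/vioperf --duration=5min --thread-count=16 /vertica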

Working Set Size

Unlike traditional IO generators, vioperf does not allow you to specify the working set size. The amount of data written is simply the IO rate achieved by the storage multiplied by the runtime. So fast storage with long runtimes will need a lot of capacity; otherwise the tool simply fills the partition and crashes!
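As a back-of-the-envelope illustration (the figures below are invented, not measured):

    # Working set ~= sustained throughput x runtime
    # e.g. 2,000 MB/s sustained for a 10-minute (600 s) run:
    echo "$(( 2000 * 600 / 1024 )) GiB written"   # ~1171 GiB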

Measurement and goodness

The primary metric is MB/s per core. The idea is that you give one thread per core in the system, though there is nothing stopping you from using whatever --thread-count value you like. For example, 3,200 MB/s of aggregate throughput from 16 threads on a 16-core host reports as 200 MB/s per core.

Although the underlying measure is throughput, the reported metric (throughput per core) does not improve just by adding lots of concurrency. Concurrency is generated purely by the number of threads, and since the measure of goodness is throughput per core (or per thread), it is not possible to create a better score from concurrency alone.

Presumably the assumption is that each Vertica thread issues IO synchronously, which seems a bit odd.

Throughput compared to fio

Compared to fio, the reported throughput is lower for the same device and the same degree of concurrency. vioperf continually writes to and extends its files, so there is some filesystem work going on, whereas fio overwrites an existing file.
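For comparison, a sketch of the sort of fio job run against the same mount point (the file name, size and runtime below are placeholders):

    # 1M sequential writes to a pre-allocated file, single job, queue depth 64
    fio --name=seqwrite --filename=/vertica/fio.dat --size=100G \
        --rw=write --bs=1M --ioengine=libaio --iodepth=64 --direct=1 \
        --time_based --runtime=300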

vertica iostat, 64 threads

fio iostat, iodepth=64

fio, 1 thread, 1M sequential write

vertica iostat, 1 thread

vioperf with --disable-crc

Impact of data locality on DB workloads

Effect of removing CPU constraints and maintaining data locality on a running DB instance.

In this video I migrate a Postgres DB running the pgbench benchmark. The DB is running on a host which is CPU constrained. Once the VM is migrated to a less busy host, the transaction rate immediately increases from ~15,000 to ~20,000 TPS. As the DB continues to run on the new host, the Nutanix storage detects the access patterns and “localizes” the data that the DB is accessing. Over the subsequent minutes the transaction rate increases to ~30,000 TPS.

The variation in the transaction rate is due to the benchmark itself; the transaction rate is not expected to be uniform. Many different queries are executing in parallel, some hitting the RAM cache, some hitting storage.

N.B. The Postgres DB is totally untuned and contains purely default settings. The aim of the experiment was to see how data locality might affect a running database workload, not to generate the maximum TPS.

Benchmarking with Postgres, Part 2

In this example we run pgbench with a scale factor of 1000, which equates to a database size of around 15 GB. The Linux VM has 32 GB of RAM, so we don't expect to see many reads.
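For reference, a sketch of how such a database can be created and driven with pgbench (the database name and client counts are illustrative):

    # Create the pgbench tables at scale factor 1000 (~15 GB of data)
    pgbench -i -s 1000 pgbench
    # Run the default TPC-B-like workload for 10 minutes with 16 clients
    pgbench -c 16 -j 4 -T 600 pgbench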

Using Prometheus with the Linux node exporter, we can see the disk IO pattern from pgbench. As expected, the write pattern to the log disk (sda) is quite constant, while the write pattern to the database files (sdb) is bursty.
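A sketch of the kind of Prometheus query behind those graphs, assuming a recent node_exporter (older releases expose node_disk_bytes_written rather than node_disk_written_bytes_total) and a Prometheus server listening on localhost:9090:

    # Per-device write throughput over the last minute for sda (log) and sdb (data)
    curl -s 'http://localhost:9090/api/v1/query' \
      --data-urlencode 'query=rate(node_disk_written_bytes_total{device=~"sd[ab]"}[1m])'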

pgbench with DB size 50% of Linux buffer cache.

I had to tune the parameter checkpoint_completion_target from 0.5 to 0.9; otherwise the SCSI stack became overwhelmed during checkpoints, causing log writes to stall.
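The change itself is a one-liner via psql (checkpoint_completion_target can be changed with a reload, no restart required; superuser privileges assumed):

    # Spread checkpoint writes over 90% of the checkpoint interval instead of 50%
    psql -c "ALTER SYSTEM SET checkpoint_completion_target = 0.9;"
    psql -c "SELECT pg_reload_conf();"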

Default pgbench: notice the sharp drop in log writes before tuning.