New (Old) Paper.

A 2007 paper that still has a lot to say on the subject of benchmarking storage and filesystems. It is aimed primarily at researchers and developers, but is relevant to anyone about to embark on a benchmarking effort.

  • Use a mix of macro and micro benchmarks
  • Understand what you are testing; cached results are fine, as long as that is what you intended.

The authors are clear on why benchmarks remain important:

“Ideally, users could test performance in their own settings using real workloads. This transfers the responsibility of benchmarking from author to user. However, this is usually impractical because testing multiple systems is time consuming, especially in that exposing the system to real workloads implies learning how to configure the system properly, possibly migrating data and other settings to the new systems, as well as dealing with their respective bugs.”

We cannot expect end-users to be experts in benchmarking. It is our duty as experts to provide the tools (benchmarks) that enable users to make purchasing decisions without requiring years of benchmarking expertise.

Performance gains for Postgres on Linux with hugepages

For this experiment I am using Postgres v11 on a Linux 3.10 kernel. The goal was to see what gains can be made from using hugepages. I use the built-in benchmark pgbench to run a simple set of queries.

Since I am interested only in the gains from hugepages, I chose to use the “-S” parameter to pgbench, which runs only SELECT statements. Obviously this masks any costs that might be seen when dirtying hugepages, but it keeps the experiment from having to be concerned with writing to the filesystem.

Experiment

The workstation has 32GB of memory.
Postgres is given 16GB of memory via its shared-memory setting, as sketched below.
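A minimal sketch of the relevant postgresql.conf entries, assuming the 16GB is allocated through shared_buffers and that hugepage use is requested with the huge_pages setting (the exact values used in the original run are not shown):

    # postgresql.conf -- assumed settings for this experiment
    shared_buffers = 16GB     # give Postgres 16GB of shared memory
    huge_pages = on           # refuse to start if hugepages are unavailable (default is 'try')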


pgbench creates a ~7.4GB database using a scale factor of 500.
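For reference, a sketch of the initialization step (the database name pgbench is a placeholder):

    # create and populate the test database at scale factor 500 (~7.4GB)
    pgbench -i -s 500 pgbench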

Run the experiment like this
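A plausible invocation, assuming 10 clients, 10 threads, and a 120-second run (these parameters are illustrative assumptions, not the original command):

    # read-only (SELECT-only) run against the scale-500 database
    pgbench -S -c 10 -j 10 -T 120 pgbench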

Result

Default (no hugepages):
tps = 62190.452850 (excluding connections establishing)

2MB hugepages:
tps = 66864.410968 (excluding connections establishing)
+7.5% over default

1GB hugepages:
tps = 69702.358303 (excluding connections establishing)
+12% over default

Enabling hugepages

Getting the default (2MB) hugepages is as easy as entering a value into /etc/sysctl.conf. To allow for 16GB of hugepages I used a value of 8400, followed by “sysctl -p”.
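The sysctl key involved is vm.nr_hugepages; a sketch of the entry (8400 pages of 2MB each reserves a little over 16GB, leaving some headroom):

    # /etc/sysctl.conf -- reserve 8400 x 2MB hugepages (~16.4GB)
    vm.nr_hugepages = 8400

    # apply without rebooting
    sysctl -p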

To get 1GB hugepages, the kernel has to have them configured at boot time via the kernel command line, e.g.
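A sketch of the boot parameters on a RHEL/CentOS 7 style system (which is where a 3.10 kernel is typically found); the page count of 17 is an assumption chosen to cover the 16GB allocation with a little headroom:

    # /etc/default/grub -- append to GRUB_CMDLINE_LINUX
    default_hugepagesz=1G hugepagesz=1G hugepages=17

    # regenerate the grub configuration
    grub2-mkconfig -o /boot/grub2/grub.cfg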

Then reboot for the new kernel command line to take effect.
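After the reboot, the reservation can be confirmed from /proc/meminfo:

    # HugePages_Total and Hugepagesize should reflect the new settings
    grep Huge /proc/meminfo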

I used these excellent resources:
  • How to modify the kernel command line
  • How to enable hugepages
  • and this great video on Linux virtual memory



Database compression on Nutanix

Compressibility

What space savings should you expect when running databases with default compression on a Nutanix cluster? When we ran the TPCx-HCI benchmark on our cluster we saw roughly a 2:1 saving from compression alone. The TPCx-HCI benchmark mimics a database consolidation setup, meaning there are many databases per host. The uncompressed data size was about 45TB.

Compression+Encryption

Additionally, we configured data-at-rest encryption (DARE). Using the cluster features allows us to both compress and encrypt (data is compressed first, then encrypted). If the database engine itself handled encryption, the storage layer would see effectively random data, which compresses poorly.

Data generation

Like ZFS, the Nutanix filesystem uses LZ4 compression, and 2:1 is about in line with expectations for a realistic dataset. The TPCx-HCI benchmark uses the E-Gen data generation tool to populate the databases. E-Gen was developed for the TPC-E benchmark and uses sources such as census data and NYSE stock listings to generate realistic data rather than machine-generated strings.

TPCx-HCI Data