Beware of tiny working set sizes when testing storage performance.

I was recently asked to investigate why Nutanix storage was not as fast as a competing solution in a PoC environment. When I looked at the output from diskspd, the data didn’t quite make sense.

One thing that seemed really odd was that the working set sizes for the tests were on the order of 8-64MB, which is strange because the target workload was databases. Bear in mind that in 2022 modern CPUs have tens of MB of on-die cache, so using a working set size (WSS) of the same order of magnitude for a disk test seemed unusual.

I was able to replicate the performance delta in my lab using fio, so there was no problem with the tool or with the way the experiment was conducted in the PoC environment.
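If you want to reproduce this kind of test, the sketch below shows roughly how to drive fio to sweep the working set size while holding everything else constant. It uses the 64K block size and queue depth of 64 from the results further down; the target file path, the random-read pattern and the libaio engine are assumptions for illustration only, so adjust them for your own environment.

```bash
#!/usr/bin/env bash
# Sweep the working set size (fio's --size) while holding block size and
# queue depth constant; a host-side cache cliff shows up as a step change
# in latency once the working set no longer fits in the cache.
#
# Assumptions for illustration: the target path, the random-read pattern
# and the libaio engine are not from the original post.
TARGET=/mnt/testvol/fio.dat

for WSS in 64M 256M 512M 1G 2G 4G 8G; do
    fio --name="wss_${WSS}" \
        --filename="${TARGET}" \
        --size="${WSS}" \
        --rw=randread \
        --bs=64k \
        --iodepth=64 \
        --ioengine=libaio \
        --direct=1 \
        --time_based --runtime=120 \
        --group_reporting
done
```

Watch the completion latency reported for each run; in my case the step change appeared once the working set grew past roughly the 512MB mark described below.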

It turns out that our competitor has a really fast, but really small, cache, possibly built into the hypervisor. In my lab at least, this host-side cache appears to be around 512MB in size. Once that tiny cache is exhausted the performance delta shrinks rapidly, and on the same hardware (identical CPU, memory and SSD) the Nutanix solution was faster once the WSS was above roughly 1GB.

The moral of the story is that using realistic working set sizes really matters when testing storage. Even though it can take a bit more time to create the test data, it is worth it, because tiny but fast caches can dramatically skew results.
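On the point about taking time to create the test data: fio lays out the target file before the measured run, but when sweeping several working set sizes it can help to pre-create the largest file once and reuse it for every run. A minimal sketch follows; the path and sizes are again assumptions for illustration.

```bash
# Pre-create an 8G test file once, without running any measured I/O,
# so later runs with smaller --size values reuse the same file.
fio --name=prefill \
    --filename=/mnt/testvol/fio.dat \
    --size=8G \
    --rw=write \
    --bs=1M \
    --create_only=1
```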

The table below shows the results from fio. In this test the block size is 64K, the queue depth is 64, and the working set size varies as shown below. Response times are in microseconds. For comparison, I added results from a consumer Samsung NVMe drive on Haswell-generation bare metal running Ubuntu Linux in my home lab.

Nutanix vs. other HCI, with local NVMe for comparison.
