If you clone a Cassandra VM with the goal of creating a Cassandra cluster, you may find that every Cassandra node has the same hostID.
Using rwmixread and rate_iops in fio
Creating a mixed read/write workload with fio can be a bit confusing. Assume we want to create a fixed rate workload of 100 IOPS split 70:30 between reads and writes.

TL;DR
Specify the rate directly with rate_iops=<read-rate>,<write-rate>; do not try to combine rwmixread with rate_iops. For the example above, use:
rate_iops=70,30
Additionally, older versions of fio exhibit problems when using rate_poisson with rate_iops. The fio version 3.7 that I was using did not exhibit the problem.
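As a rough illustration, here is a minimal fio job along those lines; the target file, block size, and run time are placeholders you would adjust for your own environment:

[mixed-70-30]
filename=/path/to/testfile   # placeholder target file or device
ioengine=libaio
direct=1                     # bypass the page cache
rw=randrw                    # mixed random reads and writes
bs=8k
time_based=1
runtime=300
rate_iops=70,30              # cap reads at 70 IOPS and writes at 30 IOPS, 100 IOPS total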
Understanding fio norandommap and randrepeat parameters
The parameters norandommap and randrepeat significantly change the way repeated random IO workloads are executed, and they can meaningfully change the results of an experiment because of how caching works on most storage systems.
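For reference, a sketch of how the two options appear in a job file; the file name and size are placeholders, and the comments summarise the documented behaviour:

[randread-example]
filename=/path/to/testfile   # placeholder
rw=randread
bs=8k
size=10g
norandommap=1                # do not track visited blocks, so some blocks may be hit more than once per pass
randrepeat=0                 # use a different random sequence each run instead of the repeatable default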
How to drop tables for HammerDB TPC-C on SQL Server
From the SQL window of SQL Server, issue these commands to drop the tables and stored procedures created by HammerDB. This will allow you, for instance, to re-create the database, or to create a new database with more warehouses (a larger size) while retaining the same name/DB layout.
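The exact statements are in the full post. As a hedged sketch, assuming the default HammerDB TPC-C object names (the database name tpcc is also an assumption), they look something like this:

USE tpcc;  -- assumed database name
DROP TABLE customer, district, history, item, new_order, orders, order_line, stock, warehouse;
DROP PROCEDURE neword, payment, delivery, slev, ostat;  -- stored procedures created by HammerDB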
Understanding Concurrency Parameters in pgbench
A Generalized workload generator for storage IO

With help from the Nutanix X-Ray team I have created an IO “benchmark” which simulates a “General Server Virtualization” workload. I call it the “Mixed Workload Simulator”.
Advanced X-Ray: reducing runtime by re-using VMs.

How to speed up your X-ray benchmark development cycle by re-using/recycling benchmark VMs and, more importantly, data sets.
Cross rack network latency in AWS
I have VMs running on bare-metal instances. Each bare-metal instance is in a separate rack by design (for fault tolerance). The bandwidth is 25GbE; however, the response time between the hosts is high enough that I need multiple streams to consume that bandwidth.
Compared to my local on-prem lab, I need many more streams to get the observed throughput close to the theoretical bandwidth of 25GbE.
# iperf Streams | AWS Throughput | On-Prem Throughput |
1 | 4.8 Gbit | 21.4 Gbit |
2 | 9 Gbit | 22 Gbit |
4 | 18 Gbit | 22.5 Gbit |
8 | 23 Gbit | 23 Gbit |
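For reference, the extra streams in the table above correspond to iperf's -P (parallel streams) flag; a hedged example invocation, with the server address as a placeholder:

iperf3 -s                           # on the receiving host
iperf3 -c <server-ip> -P 8 -t 30    # on the sending host: 8 parallel streams for 30 seconds

The older iperf (version 2) client takes the same -P and -t flags.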
How to performance test Nutanix on AWS with X-ray
Postgres pgbench scale-factors and WSS
Nutanix X-Ray video Series
A series of videos showing how to install, run, modify, and analyze HCI clusters with the Nutanix X-ray tool.
How to download and Install Nutanix X-ray on an AHV cluster
Identifying Optane drives in Linux

How to identify Optane drives in the Linux OS using lspci.
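As a rough sketch (whether the device string actually says “Optane” depends on how current the local pci.ids database is):

lspci | grep -i "non-volatile"   # list all NVMe controllers in the system
lspci | grep -i optane           # narrow to devices the PCI ID database identifies as Optane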
How to drop tables for HammerDB TPC-H on SQL Server
Use the following SQL to drop the tables and indexes in the HammerDB TPC-H schema, so that you can re-load it.
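As a hedged sketch, assuming the default HammerDB TPC-H table names and a database called tpch, the statements are along these lines (indexes on a table are removed automatically when the table is dropped):

USE tpch;  -- assumed database name
DROP TABLE customer, lineitem, nation, orders, part, partsupp, region, supplier;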
Microsoft diskspd Part 3. Oddities and FAQ

Tips and tricks for using diskspd, especially useful for those familiar with tools like fio.
Microsoft diskspd. Part 2 How to bypass NTFS Cache.

How to ensure performance testing with diskspd is stressing the underlying storage devices, not the OS filesystem.
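As a hedged example, -Sh is the relevant switch; the target path, file size, and IO geometry below are placeholders:

diskspd.exe -c10G -d60 -r -b8K -t4 -o8 -Sh D:\testfile.dat

-Sh disables software caching and requests write-through, so IOs go to the storage device rather than being absorbed by the NTFS cache.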
Microsoft diskspd. Part 1 Preparing to test.

How to install and set up diskspd before starting your first performance tests, and how to avoid wrong results due to null-byte issues.
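As a rough illustration of the null-byte issue: a file diskspd has just created reads back as zeros, which compressing or deduplicating storage serves unrealistically fast, so one approach is a preparatory 100% write pass with a random-content source buffer before measuring reads. The path, size, and duration below are placeholders:

diskspd.exe -c50G -w100 -Z1M -b1M -t1 -d600 -Sh D:\testfile.dat

-c50G creates the target file, -w100 makes the pass all writes, -Z1M fills the write source buffer with random data, and the single sequential thread should be left running long enough to overwrite the whole file.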
How to measure database scaling & density on Nutanix HCI platform.
How can database density be measured?
- How does database performance behave as more DBs are consolidated?
- What impact does running the CVM have on available host resources?
tl;dr
- The cluster was able to achieve ~90% of the theoretical maximum.
- CVM overhead was 5% for this workload.
How to run vdbench benchmark on any HCI with X-Ray
Many storage performance testers are familiar with vdbench and wish to use it to test hyper-converged infrastructure (HCI) performance. To accurately performance test HCI, you need to deploy workloads on all HCI nodes. However, deploying multiple VMs and coordinating vdbench can be tricky, so with X-Ray we provide an easy way to run vdbench at scale. Here’s how to do it.
How to identify NVME drive types and test throughput
