I have VMs running on bare-metal instances. Each bare-metal instance is in a separate rack by design (for fault tolerance). The link speed is 25 GbE; however, the latency between the hosts is high enough that I need multiple streams to consume that bandwidth. Compared to my local on-prem lab I need many more streams to […]
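The effect described above follows from the bandwidth-delay product: a single stream can move at most one window of data per round trip. A minimal sketch, with an assumed 256 KiB per-stream window and 1 ms RTT (illustrative numbers, not measurements from the cluster in the post):

```python
# Bandwidth-delay product sketch: each TCP stream is limited to roughly
# window / RTT, so filling a fat link over a high-latency path takes
# multiple streams. Window size and RTT below are assumed values.
import math

def streams_needed(link_gbits, rtt_ms, window_bytes):
    per_stream_bps = window_bytes * 8 / (rtt_ms / 1000)  # bits/sec one stream can carry
    return math.ceil(link_gbits * 1e9 / per_stream_bps)

# 25 GbE link, 1 ms RTT, 256 KiB window per stream:
print(streams_needed(25, 1.0, 256 * 1024))  # -> 12
```

Halving the RTT halves the stream count, which is why the low-latency on-prem lab needs far fewer streams than the cross-rack cloud instances.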
End to End Creation of a Nutanix Cluster on AWS and Running X-Ray
Scale factor to working-set size lookup for tiny databases
A series of videos showing how to install, run, modify and analyze HCI clusters with the Nutanix X-Ray tool
How to identify Optane drives in Linux using lspci.
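NVMe devices show up in lspci under the "Non-Volatile memory controller" class, and Intel Optane parts carry an "Optane" model string. A sketch of the filter, using an illustrative sample line rather than output from a real host:

```shell
# Hypothetical lspci output line for an Intel Optane NVMe controller
# (bus address and model are examples only):
sample='5e:00.0 Non-Volatile memory controller: Intel Corporation Optane SSD 900P Series'

# Filter for the Optane model string, case-insensitively:
echo "$sample" | grep -i 'optane'

# On a live system, the equivalent starting point is:
#   lspci | grep -i 'Non-Volatile memory controller'
```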
Use the following SQL to drop the tables and indexes in the HammerDB TPC-H schema, so that you can re-load it.
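The TPC-H schema consists of eight standard tables, so a drop script only needs to cover those. A sketch (table names come from the TPC-H specification; schema or owner prefixes may differ depending on your HammerDB build options, and dropping a table also drops its indexes):

```sql
-- Drop the eight standard TPC-H tables so the schema can be re-loaded.
DROP TABLE customer;
DROP TABLE lineitem;
DROP TABLE nation;
DROP TABLE orders;
DROP TABLE part;
DROP TABLE partsupp;
DROP TABLE region;
DROP TABLE supplier;
```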
Tips and tricks for using diskspd, especially useful for those familiar with tools like fio.
How to ensure performance testing with diskspd is stressing the underlying storage devices, not the OS filesystem.
How to install and set up diskspd before starting your first performance tests, and how to avoid misleading results caused by null-byte (all-zero) write patterns.
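The reason all-zero write buffers skew results: storage that compresses or deduplicates inline handles zero-filled data almost for free, so the benchmark no longer exercises the real data path. A small illustration using zlib as a stand-in for the array's compression engine:

```python
# All-zero buffers (what a naively configured test writes) collapse to
# almost nothing under compression; random data does not. Storage with
# inline compression therefore "absorbs" a zero-filled benchmark.
import os
import zlib

zeros = bytes(1024 * 1024)            # 1 MiB of null bytes
random_data = os.urandom(1024 * 1024)  # 1 MiB of incompressible data

print(len(zlib.compress(zeros)))        # shrinks to roughly a kilobyte
print(len(zlib.compress(random_data)))  # stays at roughly 1 MiB
```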
How can database density be measured? How does database performance behave as more DBs are consolidated? What impact does running the CVM have on available host resources? tl;dr The cluster was able to achieve ~90% of the theoretical maximum. CVM overhead was 5% for this workload.
Many storage performance testers are familiar with vdbench and wish to use it to test Hyper-Converged Infrastructure (HCI) performance. To accurately test HCI performance you need to deploy workloads on all HCI nodes. However, deploying multiple VMs and coordinating vdbench can be tricky, so with X-Ray we provide an easy way to run vdbench at scale. […]
First things first: why do we tend to use 1 MB IO sizes for throughput benchmarking? To achieve the maximum throughput on a storage device, we will usually use a large IO size to maximize the amount of data transferred per IO request. The idea is to make the ratio of data-transfers to IO requests […]
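The ratio argument is easy to see with arithmetic: for a fixed throughput target, the request rate (and hence per-request overhead) shrinks in proportion to the IO size. A sketch using an illustrative 2 GiB/s target:

```python
# IOPS required to sustain a fixed throughput at different IO sizes.
# Larger IOs amortize the per-request overhead over more data.
def iops_for_throughput(throughput_bytes_per_s, io_size_bytes):
    return throughput_bytes_per_s // io_size_bytes

target = 2 * 1024**3  # 2 GiB/s, an illustrative target
for io_size in (8 * 1024, 64 * 1024, 1024 * 1024):
    print(f"{io_size // 1024:>5} KiB IOs -> {iops_for_throughput(target, io_size):>7} IOPS")
```

At 1 MiB per IO only a couple of thousand requests per second are needed, versus over a quarter of a million at 8 KiB.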
The real-world achievable SSD performance will vary depending on factors like IO size, queue depth and even CPU clock speed. It’s useful to know what the SSD is capable of delivering in the actual environment in which it’s used. I always start by looking at the performance claimed by the manufacturer. I use these figures […]
How to install Prometheus on OS-X. Install Prometheus: $ cd /Users/gary.little/Downloads/prometheus-2.16.0-rc.0.darwin-amd64 then $ ./prometheus. Add a collector/scraper to monitor the OS. Prometheus itself does not do much apart from monitor itself; to do anything useful we have to add a scraper/exporter module. The easiest thing to do is add a scraper to monitor OS-X itself. As in […]
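Adding a scraper comes down to a `scrape_configs` entry in `prometheus.yml`. A minimal sketch, assuming an OS exporter (such as node_exporter) is already listening on its default port 9100 on the same machine:

```yaml
# Minimal prometheus.yml sketch: Prometheus scrapes itself plus an
# assumed OS exporter on localhost:9100.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
```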
Some versions of HammerDB (e.g. 3.2) may induce imbalanced NUMA utilization with SQL Server. This can easily be observed with Resource Monitor: when NUMA imbalance occurs, one NUMA node shows much higher utilization than the other. The cause and fix are well documented on this blog. In short, HammerDB issues a […]
How to avoid bottlenecks in the client generator when measuring database performance with HammerDB
An X-Ray workload for measuring application density
The Vertica vioperf tool is used to determine whether the storage you are planning to use is fast enough to feed the Vertica database. When I initially ran the tool, the IO performance it reported, confirmed by iostat, was much lower than I expected for the storage device (a 6 Gbit SATA device […]