This unhelpful error is returned when using the Postman API collection. The problem is that, for example, when using a simple “List VMs” call, the URL that is sent contains placeholder values, which cause the PC endpoint to return a 500 and the PLAT-10006 response. Check the Postman console to see what it is […]
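For reference, a minimal sketch of the same “List VMs” call with the placeholders resolved; the host name and credentials below are hypothetical, and the v3 endpoint shown is the one the Postman collection wraps:

    # Hypothetical values: replace pc.example.local and the credentials
    # with your own Prism Central details.
    import requests

    resp = requests.post(
        "https://pc.example.local:9440/api/nutanix/v3/vms/list",
        json={"kind": "vm", "length": 20},  # no unresolved {{placeholders}}
        auth=("admin", "secret"),
        verify=False,  # lab clusters often use self-signed certificates
    )
    resp.raise_for_status()
    for vm in resp.json()["entities"]:
        print(vm["spec"]["name"])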
Using a small Python script we can liberate data from the “Analysis” page of Prism Element and send it to Prometheus, where we can combine cluster metrics with other data and view them all on some nice Grafana dashboards.
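A minimal sketch of the idea (not the post’s exact script): the Prism Element endpoint and stat name are assumptions based on the v1 REST API, and the push-gateway address is a placeholder.

    # Pull one cluster stat from Prism Element and re-expose it in
    # Prometheus text format via the push-gateway.
    import requests

    PE = "https://cvm.example.local:9440"
    resp = requests.get(f"{PE}/PrismGateway/services/rest/v1/cluster/",
                        auth=("admin", "secret"), verify=False)
    stats = resp.json()["stats"]

    body = f'nutanix_hypervisor_cpu_usage_ppm {stats["hypervisor_cpu_usage_ppm"]}\n'
    requests.post("http://pushgateway.example.local:9091/metrics/job/nutanix",
                  data=body, headers={"Content-Type": "text/plain"})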
Overview For a fun afternoon project, how about a retro Prometheus exporter using Apache/nginx, cgi-bin, and bash!? About the Prometheus format A Prometheus exporter simply has to return a page with metric names and metric values in a particular format, like the example below. When you configure Prometheus via prometheus.yml, you’re telling Prometheus to visit a particular IP:Port […]
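The exposition format itself is just plain text, one metric per line, optionally preceded by HELP/TYPE comment lines; the metric name below is illustrative:

    # HELP node_temperature_celsius Current temperature.
    # TYPE node_temperature_celsius gauge
    node_temperature_celsius{sensor="cpu"} 42.5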
Using the Nutanix API with the Prometheus push-gateway. Many customers would like to view their cluster metrics alongside existing performance data using Prometheus/Grafana. Currently Nutanix does not provide a native exporter for Prometheus to use as a data source. However, we can use the Prometheus push-gateway and a simple script that pulls from the native APIs to get […]
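A sketch of the push step using the prometheus_client library; the gateway address, job name, and metric are illustrative, not taken from the original post.

    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    registry = CollectorRegistry()
    g = Gauge("nutanix_cluster_iops", "Cluster-wide IOPS from the Nutanix API",
              registry=registry)
    g.set(12345)  # in the real script this value comes from the cluster API

    # Prometheus then scrapes the push-gateway on its normal schedule.
    push_to_gateway("pushgateway.example.local:9091", job="nutanix",
                    registry=registry)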
VM CPU Topology The topology (layout) in which AHV presents virtual sockets/CPUs to the guest operating system will usually differ from the physical topology. This is expected, because we typically present a subset of all cores to the guest VMs. Usually it is the total number of vCPUs given to the VM that matters, not […]
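A quick way to see the topology the guest actually received is a sketch like the one below, which reads the standard Linux sysfs interface from inside the guest (lscpu shows the same information).

    # Count vCPUs and the distinct socket IDs the guest kernel sees.
    from pathlib import Path

    cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"))
    sockets = {(p / "topology/physical_package_id").read_text().strip()
               for p in cpus}
    print(f"{len(cpus)} vCPUs across {len(sockets)} socket(s)")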
TL;DR Using cloud-init with AHV is conceptually identical to using it with KVM/QEMU; we just need a few different tools with AHV. You will need a Linux image that is configured to use cloud-init; a good source is cloud-images.ubuntu.com. We will create a cloud-init text file and create a mountable version using the cloud-localds tool on […]
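A sketch of the seed-image step: write a minimal user-data file and wrap it with cloud-localds. The user-data content here is illustrative, not the post’s exact configuration.

    import subprocess

    user_data = """\
    #cloud-config
    hostname: ahv-demo
    ssh_pwauth: true
    password: changeme
    chpasswd: {expire: false}
    """
    with open("user-data", "w") as f:
        f.write(user_data)

    # Produces seed.img, which can be uploaded to AHV and attached as a CD-ROM.
    subprocess.run(["cloud-localds", "seed.img", "user-data"], check=True)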
tl;dr In a recent experiment we compared an Amazon RDS instance with a VM running in an on-prem Nutanix cluster, both using Skylake-class processors with similar clock speeds and vCPU counts. The SQL Server database on Nutanix delivered almost 2X the transaction rate of the same workload running on Amazon RDS. It turns out that migrating an […]
tl;dr A Nutanix cluster can persist a replicated write across two nodes in around 250 µs, which is critical for single-threaded DB write workloads. The performance compares very well with hosted cloud database instances using the same class of processor (db.r5.4xlarge in the figure below). The metrics below are for SQL insert transactions, not the […]
Often in my lab I want to shut down a large number of VMs quickly. In the example below I submit the power-off command for a maximum of 50 VMs in parallel. Be aware that we’re using the command line, and in line with true Unix philosophy the OS will assume we know what we are […]
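The post itself does this with a shell one-liner; as an equivalent sketch, the same idea in Python might look like the following, where the acli vm.off call and the VM names are assumptions (run from a CVM, and note that, as the post warns, no confirmation is asked).

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    vms = ["vm-001", "vm-002", "vm-003"]  # replace with your own VM list

    def power_off(name):
        # Hypothetical power-off command; assumes acli is on the PATH.
        subprocess.run(["acli", "vm.off", name], check=False)

    # Submit at most 50 power-off commands at a time.
    with ThreadPoolExecutor(max_workers=50) as pool:
        list(pool.map(power_off, vms))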
AOS 6.1 greatly improved database performance on Nutanix, especially when the guest VM uses just a single disk for all the database files. The underlying change is known as vdisk sharding: it allows the Nutanix CVM to scale up the number of threads used to service a single virtual disk under heavy load.
In this example we use the KVM cloud image from the Canonical Ubuntu image repository. More information on Ubuntu cloud images is on the Canonical cloud image page, and more detail on the cloud image boot process and cloud-init is at Ubuntu UEC/Images. We can use the Ubuntu cloud image catalog, and specifically use one that has […]
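Fetching a current image is a one-liner; the release and file names below follow the cloud-images.ubuntu.com layout and may change over time.

    import urllib.request

    url = ("https://cloud-images.ubuntu.com/jammy/current/"
           "jammy-server-cloudimg-amd64.img")
    urllib.request.urlretrieve(url, "jammy-server-cloudimg-amd64.img")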
We’ve come a long way, baby. Full disclosure: I have worked for Nutanix in the performance engineering group since 2013. My opinions are likely biased, but that also gives me a decent amount of context when it comes to the performance of Nutanix storage over time. We already have a lot of customers running database […]
With help from the Nutanix X-Ray team I have created an IO “benchmark” which simulates a “General Server Virtualization” workload. I call it the “Mixed Workload Simulator”.
How can database density be measured? How does database performance behave as more DBs are consolidated? What impact does running the CVM have on available host resources? tl;dr The cluster was able to achieve ~90% of the theoretical maximum. CVM overhead was 5% for this workload.
Many storage performance testers are familiar with vdbench and wish to use it to test hyper-converged infrastructure (HCI) performance. To accurately performance-test HCI you need to deploy workloads on all HCI nodes. However, deploying multiple VMs and coordinating vdbench can be tricky, so with X-Ray we provide an easy way to run vdbench at scale. […]