When it comes to assessing fitness for purpose, even audited benchmarks are of little use unless they incorporate failure testing alongside the load test. I helped develop the TPCx-HCI benchmark, which mandates the simulation of a node failure.
We have started seeing misaligned partitions on Linux guests running certain HDFS distributions. How these partitions became misaligned is a bit of a mystery, because the only way I know of to do this on Linux is to create a partition using the old DOS format (using -c=dos and -u=cylinders), as sketched below. Continue reading
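For illustration only, here is a minimal sketch of how fdisk's DOS-compatibility mode can produce such a partition; the device name /dev/sdb is hypothetical and not from the original post:

```
# DOS compatibility mode with cylinder units starts the first partition
# at sector 63, which does not align to a 4 KiB (or 1 MiB) boundary.
fdisk -c=dos -u=cylinders /dev/sdb
#   then inside fdisk: n, p, 1, accept the default cylinders, w

# Verify: a start sector of 63 indicates a misaligned partition.
fdisk -l /dev/sdb
```

By contrast, a modern fdisk defaults to -c=nondos with sector units, which starts the first partition at sector 2048 (1 MiB aligned).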
Often we are presented with a vCenter screenshot and an observation that there are “high latency spikes”. In the example, the response time is indeed quite high – around 80 ms. Continue reading
One way of categorizing hyperconverged filesystems (or any filesystem, really) is by how data is distributed across the nodes, and by the method used to track and retrieve that data. The table below is based on knowledge of the internals of Nutanix and on publicly available information for the other systems.
| Category | Data distribution & lookup | Example |
|---|---|---|
| Distributed | Distributed data & metadata | Nutanix |
| Hash | Random data distribution, hash lookup (object store) | VSAN |
| Dedupe | Data stored in HA pairs, lookup by fingerprint | SimpliVity |
| Dedupe | Random data distribution, lookup by fingerprint | Springpath/HyperFlex |
| Pseudo-distributed | Data stored in HA pairs, unified namespace via redirection | NetApp C-Mode |
Today I used fio to create some compressible data to test on my Nutanix nodes. I ended up using the following fio params to get what I wanted.
buffer_compress_percentage=50 refill_buffers buffer_pattern=0xdeadbeef
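Put together, a complete invocation might look like the sketch below; the file name, size, and block size are my own placeholders, not values from the original test:

```
# Write 1 GiB of ~50% compressible data. Per the fio README,
# refill_buffers regenerates buffer contents on every submit
# (defeating dedupe), and buffer_pattern fills the compressible
# portion with 0xdeadbeef instead of zeroes.
fio --name=compressible-fill \
    --filename=/mnt/test/fio.dat \
    --size=1g --rw=write --bs=1m --direct=1 \
    --buffer_compress_percentage=50 \
    --refill_buffers \
    --buffer_pattern=0xdeadbeef
```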
Much of this is well explained in the README for the latest version of fio.
Also note: older versions of fio do not support many of the fancy data-creation flags, but will not alert you that they are being ignored. I spent quite a bit of time wondering why my data was not compressed until I downloaded and compiled the latest fio.
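Two quick sanity checks worth doing (my own suggestion, using the hypothetical test file from the sketch above): confirm the fio build is recent, and compress a sample of the output to verify the ratio is near the requested 50%.

```
# Confirm the fio build is recent enough to honor these flags.
fio --version

# Compress a 100 MiB sample of the generated file; the gzip output
# should be roughly half the sample size if the data is ~50% compressible.
head -c 100M /mnt/test/fio.dat | gzip -c | wc -c
```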