When it comes to assessing fitness for purpose, even audited benchmarks are of little use unless they incorporate failure testing alongside the load test. I helped develop the TPCx-HCI benchmark, which mandates the simulation of a node failure.
We have started seeing misaligned partitions on Linux guests running certain HDFS distributions. How these partitions became misaligned is a bit of a mystery, because the only way I know of to produce them on Linux is to create a partition using the old DOS format, e.g. by running fdisk with -c=dos and -u=cylinders.
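A quick way to spot such partitions is to check whether the start offset falls on a 1 MiB boundary, the modern default; DOS/cylinder-based layouts typically start at sector 63, which does not. A minimal sketch (the 512-byte sector size is an assumption; on a live system the start sector can be read from `/sys/block/<dev>/<part>/start`):

```python
SECTOR_SIZE = 512          # bytes per logical sector (assumed)
ALIGNMENT = 1024 * 1024    # 1 MiB alignment target

def is_aligned(start_sector: int, sector_size: int = SECTOR_SIZE) -> bool:
    """Return True if the partition's start offset falls on a 1 MiB boundary."""
    return (start_sector * sector_size) % ALIGNMENT == 0

print(is_aligned(2048))  # modern default start sector -> True
print(is_aligned(63))    # classic DOS/cylinder start sector -> False
```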
One way of categorizing Hyperconverged filesystems (or any filesystem really) is by how data is distributed across the nodes, and the method used to track/retrieve that data. The following is based on knowledge of the internals of Nutanix and publicly available information for the other systems.
| Method | Data layout & lookup | Example system |
|---|---|---|
| Distributed | Distributed data & metadata | Nutanix |
| Hash | Random data distribution, hash lookup (object store) | VSAN |
| Dedupe | Data stored in HA pairs, lookup by fingerprint | Simplivity |
| Dedupe | Random data distribution, lookup by fingerprint | Springpath/Hyperflex |
| Pseudo-distributed | Data stored in HA pairs, unified namespace via redirection | NetApp C-Mode |
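To make the hash-based category concrete, here is a minimal sketch of hash-lookup placement: any node can recompute where a block lives from its ID alone, with no central metadata lookup. This is an illustration of the general technique, not any vendor's actual algorithm; the node names and block IDs are hypothetical.

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical cluster

def place(block_id: str, nodes=NODES) -> str:
    """Map a block to a node by hashing its ID (simple modulo placement)."""
    digest = hashlib.sha1(block_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Lookup is deterministic: every node computes the same answer independently.
for b in ["blk-1", "blk-2", "blk-3"]:
    print(b, "->", place(b))
```

Real systems refine this with consistent hashing so that adding or removing a node remaps only a fraction of the blocks, rather than nearly all of them as plain modulo placement does.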