Here be Zeroes


Many storage devices and filesystems treat blocks containing nothing but zeros in a special way, often short-circuiting reads from the back-end.  This is normally a good thing, but it can cause odd results when benchmarking.  It typically comes up when testing against storage using raw devices that have been thin provisioned.

In this example, I have several disks attached to my Linux virtual machine.  Some of these disks contain data, but some of them have never been written to.

When I run an fio test against the disks, we can clearly see that the response time is better for some disks than for others.  Here’s the fio output…

fio believes that it is issuing the same workload to all disks.
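
For reference, a read test along these lines will reproduce the effect; the parameters and device name here are illustrative rather than the exact job I ran:

# random 8k reads direct to the raw device, bypassing the page cache
fio --name=randread --filename=/dev/sdf --rw=randread --bs=8k \
    --direct=1 --ioengine=libaio --iodepth=8 --runtime=60 --time_based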

and here is the output of iostat -x

[Screenshot: iostat -x output]
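
(The numbers above come from something like the following; the interval is arbitrary, and the await / r_await columns show the per-device read response time.)

# extended device statistics every 5 seconds
iostat -x 5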

The devices sdf, sdg and sdh are thin-provisioned disks that have never been written to.  Their read response times are much lower, even though the actual storage is identical.

There are a few ways to detect that the data being read is all zeros.

First, use a simple tool like the Unix “od” or “hd” command to dump out a small section of the disk device and see what it contains.  In the example below I just take the first 1000 bytes and check whether there is any data.

[Screenshot: od output of the first 1000 bytes of the device]
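
If you want to reproduce that check, something like the following works; /dev/sdf is just an example device:

# dump the first 1000 bytes of the device; a never-written thin-provisioned
# disk shows nothing but \0, which od collapses into a single line and a "*"
dd if=/dev/sdf bs=1000 count=1 2>/dev/null | od -c | head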

Second, see if your storage/filesystem has a way to show that it is reading zeros from the device.  NDFS has a couple of ways of doing that.  The easiest is to look at the 2009:/latency page and look for the stage “FoundZeroes”.
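
For example, assuming the page is reachable over HTTP on port 2009 of the CVM (the address below is a placeholder), you can grep for the stage directly rather than reading the page in a browser:

# look for the FoundZeroes stage on the latency page
curl -s http://<cvm-ip>:2009/latency | grep -i foundzeroes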


If your storage is returning zeros and making your benchmarking problematic, you will need to get some data onto the disks!  Normally I just do a large sequential write with whatever benchmarking tool I am using.  Both IOmeter and fio will write non-zero “junk” to disks when writing.
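
As a sketch, a fill pass with fio looks something like this; the device name is a placeholder, and by default fio fills its write buffers with non-zero data (just don’t set the zero_buffers option here):

# large sequential write to put real (non-zero) data on the device
fio --name=fill --filename=/dev/sdf --rw=write --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=8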


Designing a scale-out storage platform.

I was speaking to one of our developers the other day, and he pointed me to the following paper:  SEDA: An Architecture for Well-Conditioned, Scalable Internet Services as an example of the general philosophy behind the design of the Nutanix Distributed File System (NDFS).

Although the paper uses a webserver and a Gnutella client as its examples, the philosophies are relevant to a large-scale distributed filesystem.  In the case of NDFS we are serving disk blocks to clients that happen to be virtual machines.  One trade-off that holds in both cases is that single-stream latency is traded away for scalability.  At load, however, the response time is generally better than that of a system designed for low latency first and then scaled up to achieve high throughput.

At Nutanix we often talk about web-scale architectures, and this paper gives a pretty solid idea of what that might mean in concrete terms.

FWIW, according to Google Scholar, the paper has been cited 937 times, including by the Cassandra paper; Cassandra is how we store filesystem metadata in a distributed fashion.