Many storage devices/filesystems treat blocks containing nothing but zeros in a special way, often short-circuiting reads from the back-end. This is normally a good thing, but it can cause odd results when benchmarking. The issue typically comes up when testing against raw devices that have been thin provisioned.
In this example, I have several disks attached to my Linux virtual machine. Some of these disks contain data, but some of them have never been written to.
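A read test of this sort can be described with an fio job file like the one below. This is only a sketch; the device name and all parameters are assumptions, so adjust them for your own system.

```ini
; randread.fio - hypothetical job file; point filename at your own device
[randread]
filename=/dev/sdb
rw=randread
bs=8k
direct=1
ioengine=libaio
iodepth=8
runtime=30
time_based
```

Run it with "fio randread.fio" against each disk in turn and compare the reported latencies.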
When I run an fio test against the disks, the response time is clearly better for some than for others. Here’s the fio output…
and here is the output of iostat -x
The devices sdf, sdg and sdh are thin-provisioned disks that have never been written to. Their read response times are much lower, even though the underlying storage is identical.
There are a few ways to detect that the data being read is all zeros.
Firstly, use a simple tool like the Unix “od” or “hd” commands to dump out a small section of the disk device and see what it contains. In the example below I just read the first 1000 bytes and check whether they contain any data.
Secondly, see if your storage/filesystem has a way to show that it read zeros from the device. NDFS has a couple of ways of doing that; the easiest is to look at the 2009:/latency page and check for the stage “FoundZeroes”.
If your storage is returning zeros and skewing your benchmark results, you will need to get some data onto the disks first! Normally I just do a large sequential write with whatever benchmarking tool I am using; both IOmeter and fio write “junk” (non-zero data) to disks when writing.
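A quick stand-in demo of the fill-then-verify step, again using a file in place of a real device. The fio command in the comment is a hypothetical example of the equivalent whole-device fill.

```shell
# On a real disk the fill would be something like (hypothetical parameters):
#   fio --name=fill --filename=/dev/sdf --rw=write --bs=1M --direct=1
# Here we write pseudo-random data to a file and re-run the od check to
# confirm the region no longer reads back as zeros.
dd if=/dev/urandom of=filled.img bs=1M count=1 2>/dev/null
dd if=filled.img bs=1000 count=1 2>/dev/null | od -A d -x | head -4
```

After the fill, od prints many distinct lines of hex data instead of the three-line all-zero pattern, and read benchmarks against the region will hit real data on the back-end.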