SQL Server on Nutanix. Force backups to HDD.

As an experiment, I wanted to (a) create an HDD-only container, and (b) measure the bandwidth I could achieve when backing up the SQL Server database. This was performed on a standard hybrid platform with only 4 HDDs in the node.

First create a container, but add the special options “sequential-io-priority-order=DAS-SATA random-io-priority-order=DAS-SATA”, which mean that all IO will be directed to HDD only. This also means that data on this container will never be migrated up to the SSD tier. That is just fine for a backup that will hopefully never be read, and if it is, only once, sequentially.

ncli> ctr create name=cold-only sequential-io-priority-order=DAS-SATA random-io-priority-order=DAS-SATA sp-name=all
ncli> datastore create name=cold ctr-name=cold-only

Next create a vDisk in that container. This disk will contain the SQL Server backup data.

Add vdisk to the cold-only container.

Format and initialize the drive.

Format the drive to hold SQL backup.

Add backup targets to the drive. Adding multiple targets increases throughput because SQL Server will generate 1-2 outstanding IOs per target. I created 16 targets in total (these are just files).
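For reference, a striped backup of this kind can also be expressed in T-SQL rather than through the GUI. This is just a rough sketch: the database name and paths are hypothetical, and only 4 of the 16 targets are shown for brevity.

# Run from a PowerShell prompt on the SQL Server VM; WITH INIT overwrites the previous backup sets in those files.
sqlcmd -S localhost -E -Q "BACKUP DATABASE [TestDB] TO DISK='E:\SQLBackup\TestDB_01.bak', DISK='E:\SQLBackup\TestDB_02.bak', DISK='E:\SQLBackup\TestDB_03.bak', DISK='E:\SQLBackup\TestDB_04.bak' WITH INIT"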

SQL Backup targets

The first backup is a little slow (~64MB/s) because we’re creating the files. Second (and subsequent) backups go faster, around 120MB/s, writing directly to the HDD spindles on a single node with 4 HDDs.

Overwrite old backups

This backup stream drives around 25MB/s per HDD spindle on the Nutanix node. On a larger platform with more spindles (e.g. 20 spindles at 25MB/s each) we could easily drive 500MB/s, and still skip SSD by writing directly to HDD.

25MB/s per spindle

120 MB/s Each way
Backup just started. About 115MB/s read, 115MB/s write on same node.

Completed backup:

Backup complete

Things to know when using vdbench.

Recently I found that vdbench was not giving me the amount of outstanding IO that I had intended to configure using the “threads=N” parameter. It turned out that on Linux, most of the common filesystems (ext2, ext3 and ext4) do not support concurrent directIO, although they do support directIO. This was a bit of a shock coming from Solaris, which has had concurrent directIO since 2001.

All the Linux filesystems I tested allow multiple outstanding IOs if the IO is submitted using asynchronous IO (a.k.a. asyncIO or AIO), but not when using multiple writer threads (except XFS). Unfortunately vdbench does not support AIO since it tries to be platform agnostic.

fio, however, allows either threads or AIO to be used, so that’s what I used in the experiments below.

In the table below, the “fio QD” column is the amount of outstanding IO, or queue depth, that is intended to be passed to the storage device. The “iostat QD” column is the actual queue depth seen by the device. Even in the best cases the iostat QD is not quite 8, because the response time is so low that fio cannot issue IOs quickly enough to maintain the intended queue depth.
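The last two columns were gathered roughly as follows (device name and interval are just examples from my setup):

# Actual queue depth seen by the device: watch the avgqu-sz (newer sysstat: aqu-sz) column
iostat -x sdb 3

# Count of fio processes/threads, as shown in the last column of the table
ps -efT | grep fio | wc -l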

Device                 fio QD   fio QD Type   direct   iostat QD   ps -efT | grep fio | wc -l
/dev/sd                8        libaio        Yes      7           5
/dev/sd                8        Threads       Yes      7           12
ext2 fs (mke2fs)       8        Threads       Yes      1           12
ext2 fs (mke2fs)       8        libaio        Yes      7           5
ext3 (mkfs -t ext3)    8        Threads       Yes      1           12
ext3 (mkfs -t ext3)    8        libaio        Yes      7           5
ext4 (mkfs -t ext4)    8        Threads       Yes      1           12
ext4 (mkfs -t ext4)    8        libaio        Yes      7           5
xfs (mkfs -t xfs)      8        Threads       Yes      7           12
xfs (mkfs -t xfs)      8        libaio        Yes      7           5

At any rate, all is not lost: using raw devices (/dev/sdX) will give concurrent directIO, as will XFS. These issues are well known to Linux database folks, and I found interesting articles from Percona and Kevin Closson after I finally figured out what was going on with vdbench.
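If a filesystem is needed, XFS is the simple way out. A minimal sketch, assuming /dev/sdb is a scratch device you can destroy and /a is the mount point used by the job files below:

mkfs -t xfs /dev/sdb
mkdir -p /a
mount /dev/sdb /a
# now /a/file1 (the filename in the fio jobs below) sits on a filesystem that allows concurrent directIO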

fio “scripts”

For the “threads” case.

[global]
bs=8k
# synchronous engine: iodepth has no effect here, concurrency comes from numjobs below
ioengine=sync
iodepth=8
direct=1
time_based
runtime=60
# 8 concurrent writers provide the outstanding IO
numjobs=8
size=1800m

[randwrite-threads]
rw=randwrite
filename=/a/file1

For the “aio” case.

[global]
bs=8k
# libaio: asynchronous submission, so a single job keeps iodepth=8 IOs in flight
ioengine=libaio
iodepth=8
direct=1
time_based
runtime=60
size=1800m


[randwrite-aio]
rw=randwrite
filename=/a/file1
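Assuming the two job definitions above are saved as randwrite-threads.fio and randwrite-aio.fio (the file names are mine), each run is simply:

fio randwrite-threads.fio
fio randwrite-aio.fio
# watch the achieved queue depth in another terminal with: iostat -x sdb 3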

Workaround for bios.hddOrder when creating an OVF/OVA template.

When changing SCSI devices in an ESX-based VM, it’s easy to screw up the ability to boot. The simple fix is to add

bios.hddOrder = "scsi0:0"

to the end of the .vmx file. This has always worked for me. The problem with this solution is that any OVF/OVA created from the VM will not include the .vmx file hack, and of course VMs created from the template will not boot until their .vmx file is hand-edited.

The solution that worked for me was simply to make the “boot drive” the first .vmdk file listed in the .vmx file. In my case, the Linux OS is stored on the VMDK named “disk.vmdk”.

In the before case, this disk is listed last (even though it has SCSI ID 0:0:0) and the VM does not boot.

I simply change the first-listed entry’s filename from disk_6.vmdk to disk.vmdk (and change the last entry from disk.vmdk to disk_6.vmdk).

The beauty of this method is that the ordering is maintained when creating an OVF/OVA.

When the VM boots, the /dev/sd device names may change since the VMDKs are now attached to different SCSI devices, so mounting by UUID in Linux helps keep things sane.
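A minimal sketch of UUID-based mounting (device name and mount point are examples):

# find the filesystem UUID
blkid /dev/sdb1
# then reference the UUID instead of the /dev/sdX name in /etc/fstab, e.g.:
# UUID=<uuid-from-blkid>   /data   ext4   defaults   0 2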

Before

floppy0.fileName = "Floppy 0"
ide1:0.startConnected = "FALSE"
ide1:0.deviceType = "atapi-cdrom"
ide1:0.clientDevice = "TRUE"
ide1:0.fileName = "CD/DVD drive 0"
ide1:0.present = "TRUE"
scsi3:0.deviceType = "scsi-hardDisk"
scsi3:0.fileName = "disk_6.vmdk"
scsi3:0.present = "TRUE"
scsi3:1.deviceType = "scsi-hardDisk"
scsi3:1.fileName = "disk_1.vmdk"
scsi3:1.present = "TRUE"
scsi2:0.deviceType = "scsi-hardDisk"
scsi2:0.fileName = "disk_2.vmdk"
scsi2:0.present = "TRUE"
scsi2:1.deviceType = "scsi-hardDisk"
scsi2:1.fileName = "disk_3.vmdk"
scsi2:1.present = "TRUE"
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "disk_4.vmdk"
scsi1:0.present = "TRUE"
scsi1:1.deviceType = "scsi-hardDisk"
scsi1:1.fileName = "disk_5.vmdk"
scsi1:1.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "disk.vmdk"
scsi0:0.present = "TRUE"
vmci0.pciSlotNumber = "32"
ethernet0.virtualDev = "vmxnet3"

After

floppy0.fileName = "Floppy 0"
ide1:0.startConnected = "FALSE"
ide1:0.deviceType = "atapi-cdrom"
ide1:0.clientDevice = "TRUE"
ide1:0.fileName = "CD/DVD drive 0"
ide1:0.present = "TRUE"
scsi3:0.deviceType = "scsi-hardDisk"
scsi3:0.fileName = "disk.vmdk"
scsi3:0.present = "TRUE"
scsi3:1.deviceType = "scsi-hardDisk"
scsi3:1.fileName = "disk_1.vmdk"
scsi3:1.present = "TRUE"
scsi2:0.deviceType = "scsi-hardDisk"
scsi2:0.fileName = "disk_2.vmdk"
scsi2:0.present = "TRUE"
scsi2:1.deviceType = "scsi-hardDisk"
scsi2:1.fileName = "disk_3.vmdk"
scsi2:1.present = "TRUE"
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "disk_4.vmdk"
scsi1:0.present = "TRUE"
scsi1:1.deviceType = "scsi-hardDisk"
scsi1:1.fileName = "disk_5.vmdk"
scsi1:1.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "disk_6.vmdk"
scsi0:0.present = "TRUE"
vmci0.pciSlotNumber = "32"
ethernet0.virtualDev = "vmxnet3"

Note: I tried editing the .ovf file to add a key:value pair, regenerating the SHA1 and stashing it in the .mf file. The process worked, but the VM still did not boot, and the bios.hddOrder parameter was not in the .vmx file of the VM created from the template.

SATA on Nutanix. Some experimental data.

The question of why Nutanix uses SATA drives comes up sometimes, especially from customers who have experienced very poor performance using SATA on traditional arrays.

I can understand this anxiety. In my time at NetApp we exclusively used SAS or FC-AL drives in performance test work. At the time there was a huge difference in performance between SCSI and SATA. Even a few short years ago, FC typically spun at 15K RPM whereas SATA was stuck at about 5K RPM, so experienced roughly 3X the rotational delay.

These days SAS and SATA are both available in 7200 RPM configurations, and these are the type we use in standard Nutanix nodes. In fact the SATA drives that we use are marketed by Seagate as “Nearline SAS”, or NL-SAS, mainly to differentiate them from the consumer-grade SATA drives found in cheap laptops. There are hundreds of SAS vs. SATA articles on the web, so I won’t go over the theoretical/historical arguments.

SATA in Hybrid/Tiered Storage

In a Nutanix cluster the “heavy lifting” of IO is mainly done by the SSDs, leaving the SATA drives to service the few remaining IOs that miss the SSD tier. Under moderate load the SATA spindles do pretty well, and since the SATA $/GB is only 60% of SAS, SATA seems like a good choice for mostly-cold data.

Let’s Experiment.

From a performance perspective, I decided to run a few experiments to see just how well SATA performs. In the test, the SATA drives are Nutanix standard drives, “ST91000640NS” (Seagate, 7.2K RPM, priced around $150). The comparable SAS drives are the same form factor (2.5 inch), “AL13SEB900” (Toshiba, priced at about $250 USD), and spin at 10K RPM. Both drives hold around 1TB.

There are three experiments per drive type, to reveal the impact of seek times. This is achieved using the “filesize” parameter of fio, which determines the LBA range to read. One thing to note is that I use a queue depth of one, so IOPS can be calculated simply as 1/response-time (with the response time converted to seconds). For example, an 8 ms average response time corresponds to 1/0.008 = 125 IOPS.

[global]
bs=8k
[randread]
rw=randread
iodepth=1
ioengine=libaio
time_based
runtime=10
direct=1
filesize=1g
filename=/dev/sdf1
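The job above is the 1G working-set case. The 100G and 1000G runs (and the zipf run) presumably just vary filesize and the distribution; expressed as equivalent fio command lines (all flags are standard fio options, same device as above):

# 100G uniform-random working set
fio --name=randread --rw=randread --bs=8k --iodepth=1 --ioengine=libaio \
    --direct=1 --time_based --runtime=10 --filesize=100g --filename=/dev/sdf1

# Whole-drive (1000G) working set, with a zipf hotspot instead of uniform random
fio --name=randread --rw=randread --bs=8k --iodepth=1 --ioengine=libaio \
    --direct=1 --time_based --runtime=10 --filesize=1000g \
    --random_distribution=zipf --filename=/dev/sdf1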

Random Distribution. SATA Vs SAS

Working Set Size    7.2K RPM SATA Response Time (ms)    10K RPM SAS Response Time (ms)
1G                  5.5                                 4
100G                7.5                                 4.5
1000G               12.5                                7

Zipf Distribution SATA Only.

Working Set Size    Response Time (ms)
1000G               8.5

Somewhat intuitively, as the working set (and hence the seek distance) gets larger, the difference between “real SAS” and “NL-SAS/SATA” gets wider. This is intuitive because with a 1G working set the seek time is close to zero, so only the rotational delay (based on RPM) is a factor. In fact the difference in response time (5.5 ms vs 4 ms) closely matches the difference in rotational speed (7.2K vs 10K RPM), a ratio of about 1:1.4.

Also (just for fun) I used the “random_distribution=zipf” option in fio to test the response time when reading across the entire range of the disk, but with a “hotspot” (zipf) rather than the uniform random pattern, which is pretty unrealistic for real workloads.

In this more “realistic” case, reading across the entire disk, the SATA drives shipped with Nutanix nodes deliver an 8.5 ms response time, around 125 IOPS per spindle.

Conclusion

The performance difference between SAS and SATA is often overstated. At moderate loads SATA performs well enough for most use cases. Even when delivering fully random IO over the entirety of the disk, SATA can deliver an 8K read in less than 15 ms. Using a more realistic (not 100% uniformly random) access pattern, the response time is under 10 ms.

For a properly sized Nutanix implementation, the intent is to service most IO from flash. It’s OK to generate some work on HDD from time to time, even on SATA.

Impact of the Paravirtual SCSI driver vs. LSI emulation, with data.

TL;DR: A comparison of Paravirtual SCSI vs. emulated SCSI, with measurements. PVSCSI gives measurably better response times at high load.

During a performance debugging session, I noticed that the response time on two of the SCSI devices was much higher than the others (a Linux guest under VMware ESX). The difference was unexpected since all the devices were part of the same stripe doing a uniform synthetic workload.

iostat output from the system under investigation.

 

The immediate observation is that the queue length is higher, as is the wait time. All these devices reside on the same back-end storage, so I am looking for something else. When I traced back the devices, it turned out that the “slow” devices were attached to LSI-emulated controllers in ESX, whereas the “fast” devices were attached to paravirtual controllers.
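One way to trace a device back to its controller from inside a Linux guest is to look at which SCSI host it hangs off and which driver owns that host. A rough sketch (output formats vary between distros):

# SCSI hosts and the driver bound to each: vmw_pvscsi for PVSCSI, mptspi/mptsas for the LSI emulation
lsscsi -H
# map each /dev/sd device to its host:channel:target:lun to see which controller it lives on
lsscsi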

I was surprised to see how much difference using paravirtual (PV) SCSI drivers made to the guest response time once IOPS started to ramp up. In these plots the y-axis is the iostat “await” time; the x-axis is time (each point is a 3-second average).

PVSCSI = Grey Dots
LSI Emulated SCSI = Red Dots
Lower is better.

 

Each plot is from a workload which uses a different offered IO rate. The offered rates are 8,000, 9,000 and 10,000 IOPS; the storage is able to meet these rates, even though latency increases because there is a lot of outstanding IO. The workload is mixed read/write with bursts.

8K IOPS | 9K IOPS | 10K IOPS

 

After converting sdh and sdi to PV SCSI, the response time is again uniform across all devices.

10K IOPS PV