This time, to close all the gaps and answer all the questions, I’m going to evaluate the performance of NFS with Ubuntu Server 17.10 used as the server OS and an Ubuntu Server 17.10 VM as the client, running on top of a VMware ESXi host.

Take a look at the environment configuration:

Host 1

  • Intel Xeon E5-2670 v4
  • 128GB RAM Kingston
  • 1x HDD Seagate 1TB
  • 4x Intel DC S3610 Series 480GB
  • Mellanox ConnectX-3 network adapter 10GbE

Host 2

  • Intel Xeon E5-2670 v4
  • 128GB RAM Kingston
  • 1x HDD Seagate 1TB
  • Mellanox ConnectX-3 network adapter 10GbE

…and the setup scheme:

Testing tools

To evaluate the NFS performance, I’ve used the FIO v3.3 utility.

In order to get the maximum performance the SSDs are capable of, I’ve first determined the optimal load parameters for FIO (the number of threads and the outstanding I/O value).

The tests showed that with FIO, the maximum SSD performance is achieved with 8 threads and an outstanding I/O value of 4. Therefore, all further tests are run with these parameters.

I guess there’s no need to tell you about FIO since we’re already pretty familiar with this tool. I’ll just note that depending on the OS used, you have to choose the proper I/O engine:

  • For Windows: ioengine=windowsaio
  • For Linux: ioengine=libaio

Well, libaio it is.

For testing with FIO, I’ve used the following parameters:
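The original parameter screenshot isn’t reproduced here, so below is a minimal sketch of what such an FIO run looks like with the load described above (libaio, 8 threads, outstanding I/O of 4). The target path, test file size, and runtime are assumptions, and the pattern shown is 4k random read – swap --rw and --bs for the other patterns:

  # Sketch of an FIO run with the parameters described above (4k random read shown);
  # --filename, --size, and --runtime are placeholders.
  fio --name=4k-rand-read \
      --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k \
      --numjobs=8 --iodepth=4 \
      --filename=/path/to/testfile --size=10G \
      --runtime=60 --time_based --group_reporting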

Testing RAID performance

Prior to NFS performance testing, I’ve built a RAID-0 array on the server (Host 1) out of the 4x Intel DC S3610 Series 480GB SSDs and measured the initial performance of the underlying storage. Take a look at the results:

RAID-0-FIO-Ubuntu Server 17.10
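By the way, the article doesn’t show how the array itself was assembled. If you were to build a software RAID-0 out of the four SSDs, a typical mdadm call would look like the sketch below; the member device names are assumptions, and in this particular setup the array may well have come from a hardware/BIOS RAID instead, since it later shows up as /dev/sda:

  # Assemble a striped (RAID-0) array from four SSDs with mdadm;
  # the member device names are assumptions.
  sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # Check the array status
  cat /proc/mdstat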

I’ve performed all further tests inside the ESXi VM.

Here are the VM parameters:

  • 32 vCPUs, 1 core per socket;
  • 8GB memory;
  • 1 disk – 35GB.


Preparing the VM for NFS performance testing

In order to evaluate the NFS performance, I’ve deployed the NFS server on Host 1. The NFS server has been installed using the following command:

Preparing the VM for NFS performance testing - 1
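For those following along without the screenshots, installing the NFS server on Ubuntu typically boils down to:

  # Install the NFS server package on Ubuntu
  sudo apt update
  sudo apt install nfs-kernel-server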

I’ve created the nfs folder (/home/nfs) and set full permissions (chmod 777) with the following command:

Preparing the VM for NFS performance testing - 2
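In command form, that’s simply:

  # Create the shared folder and give it full permissions, as in the screenshot
  sudo mkdir /home/nfs
  sudo chmod 777 /home/nfs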

To display a list of available block devices, I’ve used the lsblk command.

Preparing the VM for NFS performance testing - 3

Next, I’ve formatted the disk array to ext4 using the mkfs.ext4 /dev/sda command.

To mount the sda device (RAID-0 out of 4x SSDs) with ext4 file system into the previously created nfs folder (/home/nfs), I’ve used the following command:

Preparing the VM for NFS performance testing - 4
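The mount command itself is a one-liner (add an /etc/fstab entry if you want the mount to survive reboots):

  # Mount the ext4-formatted RAID-0 device (formatted earlier with mkfs.ext4 /dev/sda)
  # into the NFS share folder
  sudo mount /dev/sda /home/nfs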

The exports file (/etc/exports) specifies the client’s IP address and the access parameters for the shared folder.

Preparing the VM for NFS performance testing - 5
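The screenshot isn’t reproduced here, but an /etc/exports entry of roughly this shape does the job; the client subnet and the export options below are assumptions:

  # Append an export for /home/nfs (the subnet and options are assumptions)
  echo '/home/nfs 172.16.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
  # Re-export and restart the NFS service to apply the change
  sudo exportfs -ra
  sudo systemctl restart nfs-kernel-server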

On Host 2 (ESXi host), I’ve created a new NFS Datastore backed by the previously created NFS share on Host 1:

creating a new NFS Datastore backed by the previously created NFS share on Host 1
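If you’d rather do this from the ESXi command line than from the vSphere client, the same datastore can be mounted with esxcli; the server IP and datastore name below are placeholders:

  # Mount the NFS export from Host 1 as a datastore (IP and name are placeholders)
  esxcli storage nfs add --host=172.16.0.1 --share=/home/nfs --volume-name=NFS-DS
  # Verify that the datastore is mounted
  esxcli storage nfs list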

Next, in the VM, I’ve added a new disk in NFS Datastore:

adding a new disk in NFS Datastore

When connecting the NFS disk to the VM, the only possible virtual disk provisioning policy you can choose is Thin Provision.

You can find more info on virtual disk provisioning policies on the VMware official website: https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc%2FGUID-4C0F4D73-82F2-4B81-8AA7-1DD752A8A5AC.html

So, after applying all the settings and powering the VM on, run the lsblk command. As you can see, the connected disk (sdb – 1.7T) has been successfully identified.

Preparing the VM for NFS performance testing - 6


Comparing VM and server CPU workload during NFS performance testing

To compare the CPU workload, I’ve tested NFS under 4k random read pattern using FIO utility. You can see the results in the image below.

Image 1 – CPU workload, FIO (4k random read)

Well, the VM’s CPU load was spread across the cores specified in the FIO configuration parameters. The server’s CPU load, on the contrary, couldn’t be distributed that way, since the testing was run inside the virtual machine.


The results of testing the VM’s disk subsystem performance:

OK, now it’s time to measure the NFS performance with Ubuntu Server 17.10 used as the server OS and an Ubuntu Server 17.10 VM as the client running on top of the ESXi host. Let’s name this scenario “Linux to Linux” to make it easier to compare the test results. You can see the outcome in the screenshot below:

NFS-FIO-Linux

In the previous article, I’ve also tested two other scenarios – NFS Windows vs Linux performance – where:

  1. ESXi client on Windows Server 2016 was connected to the server on Windows Server 2016. Let’s call this scenario “Windows to Windows” to make it sound good.
  2. ESXi client on Windows Server 2016 connected to the server on Linux Ubuntu Server 17.10. Simply “Windows to Linux”.

NFS-FIO-Linux-Server & NFS-FIO-Windows-Server


Summarizing the results

So we already know that NFS performs much better on Linux than on Windows. What about its performance in the Linux to Linux scenario? The numbers will tell us. First, under the 4k 100% random 100% read pattern, Linux to Linux performs 57.85% better than Windows to Windows and just 0.11% better than Windows to Linux. Next, under 4k 100% random 100% write, Linux to Linux performance is 87.05% higher than that of Windows to Windows but 8.07% lower than Windows to Linux.

Let’s move on to the 64k 100% seq 100% read pattern, shall we? Here, Linux to Linux performance is 23.20% lower than Windows to Windows and 26.72% lower than Windows to Linux. Under the 64k 100% seq 100% write pattern, NFS in the Linux to Linux scenario performs 22.08% better than Windows to Windows but is 59.95% lower than Windows to Linux.

And finally, 8k 50/50 Random/seq 70/30 Read/Write. Under this pattern, Linux to Linux performs 74.67% better than Windows to Windows and also beats Windows to Linux by 33.34%.

As you can see, NFS performance when an ESXi client on Windows Server 2016 is connected to a server on Ubuntu Server 17.10 is still higher than in the Linux to Linux scenario under almost all testing patterns (the exception being 8k 50/50 Random/seq 70/30 Read/Write, with 4k random read being practically a tie). Well, I hope you’ll find this article useful when deciding on NFS as storage for virtual machines on VMware ESXi. See you soon!