Hyperconvergence has dramatically transformed the data center landscape over the past few years. New technologies are being developed, and good old ones are being improved… We live in exciting times! And as data centers become more reliable and powerful, it is important to get more out of the hardware in use: nobody likes to leave money on the table! Intel, Mellanox, and StarWind have teamed up to build a highly available Hyper-V cluster that delivers great performance without compromising the manageability of the environment. This article briefly discusses the measurements, showcasing the recent results.
Setting up a failover cluster is a task most admins face sooner or later. To build such a cluster, you need to configure shared storage, and there are a lot of ways to do that. Today, I’d like to discuss how to build a Windows Failover Cluster using a virtual SAN solution (StarWind Virtual SAN) as the shared storage provider.
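To give a rough sketch of where this ends up: once the virtual SAN has presented shared storage to both nodes, forming the cluster itself takes only a few PowerShell commands from the FailoverClusters module. The node names, cluster name, and IP address below are illustrative placeholders, not values from a real deployment.

```powershell
# Sketch only: forming a two-node failover cluster after shared storage
# has been presented to both nodes. All names and the IP are placeholders.
Test-Cluster -Node "SW-NODE1","SW-NODE2"      # run cluster validation first

New-Cluster -Name "SW-CLUSTER" `
            -Node "SW-NODE1","SW-NODE2" `
            -StaticAddress "192.168.0.100"

# Convert the shared disk into a Cluster Shared Volume so both nodes
# can host VM files on it at the same time
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

Always review the `Test-Cluster` validation report before creating the cluster; a configuration that fails validation is not supported for production use.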
In the previous parts, I’ve shown you the process of configuring the NFS and iSCSI protocols between our servers. So now, we’ve got everything ready to run our performance tests and finally find out which network protocol is faster as storage for virtual machines on VMware ESXi: NFS or iSCSI.
So, to benchmark the iSCSI performance, I’ve created a StarWind device on the server and connected it to the ESXi host over the iSCSI protocol. As the guest OS for running further tests, I’ve used Windows Server 2016.
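For the measurements themselves, a synthetic I/O generator such as Microsoft’s DiskSpd is a common choice inside the guest. The run below is only an illustrative pattern; the drive letter, file size, and duration are assumptions, not the exact parameters of this test.

```powershell
# Illustrative DiskSpd run inside the Windows Server 2016 guest:
# 4 KB random reads, 4 threads, 8 outstanding I/Os per thread, 60 seconds,
# software and hardware caching disabled (-Sh), latency statistics (-L),
# against a 10 GB test file created on the disk under test (placeholder path).
.\diskspd.exe -b4K -r -t4 -o8 -d60 -Sh -L -c10G E:\iotest.dat
```

Running the same pattern against a virtual disk on the NFS datastore and one on the iSCSI datastore gives directly comparable IOPS and latency numbers.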
Cheers, friends! Not so long ago, we ran through the process of configuring an NFS disk and connecting it to the VMware host. What we’re gonna do next is measure and compare the performance of the NFS and iSCSI network protocols to see which one is more suitable for building a virtualized infrastructure. So, in this part, we’ll create an iSCSI device and connect it to the VMware ESXi host.
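On the ESXi side, attaching the host to the iSCSI target can be sketched with esxcli from the ESXi shell. The adapter name and portal address below are placeholders (list your actual software iSCSI adapter with `esxcli iscsi adapter list`).

```shell
# Sketch: connecting the ESXi software iSCSI initiator to the StarWind target.
# The adapter name (vmhba65) and portal address are placeholders.
esxcli iscsi software set --enabled=true          # enable the software initiator
esxcli iscsi adapter discovery sendtarget add \
      --adapter=vmhba65 --address=192.168.10.10:3260
esxcli iscsi adapter discovery rediscover --adapter=vmhba65
esxcli storage core adapter rescan --adapter=vmhba65   # pick up the new LUN
```

The same steps can, of course, be done from the vSphere Client; esxcli just makes them easy to repeat and script.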
Hi there! There has been plenty of debate over which network protocol is better for building a virtualization infrastructure: NFS or iSCSI. Some experts argue that iSCSI gives better performance and reliability thanks to its block-based storage approach, while others favor NFS, citing management simplicity, large datastores, and the availability of cost-saving features like data deduplication on some NFS arrays.
Anyway, we’re not here for polemics but to see which protocol is better for your production environment; in other words, which one really provides higher performance for your mission-critical applications. That’s what we all want, right?
Just to make it clear, the whole project will be divided into three parts: configuring NFS, configuring iSCSI, and the testing itself.
So, first things first. In this first chapter, I’ll guide you through the process of configuring and preparing the NFS protocol for further testing.
So, as Michael Buffer likes to say: “Let’s get ready to rumble!”
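To give a taste of where this chapter is heading, here is what mounting the finished NFS export on the ESXi host boils down to with esxcli. The server address, export path, and datastore name are placeholders for illustration only.

```shell
# Sketch: mounting the prepared NFS export as an ESXi datastore.
# The server IP, export path, and datastore name are placeholders.
esxcli storage nfs add --host=192.168.10.20 \
      --share=/mnt/nfs_share --volume-name=NFS-DS
esxcli storage nfs list    # confirm the datastore is mounted and accessible
```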