Several tips and hints to full-throttle your Hyper-V performance

Getting more storage capacity for data or applications is not a problem. The real question is: how fast can your apps run? That's why performance is demand number one for most system administrators. Finding an article on improving Hyper-V and VM performance is no big deal; the challenge is getting up-to-date info and modern insights. That's why in this post, I'm gonna give some advice on boosting your Hyper-V infrastructure performance – from the host to the virtual machines to overall cluster optimization. This might come in handy when building a new Hyper-V-based environment or improving an existing one. Let's put the pedal to the metal!


Hot adding/removing memory in Hyper-V 2016: a closer look at the feature

Today, I'll talk about a thing that any sysadmin running Hyper-V VMs does (or still dreams about) while managing infrastructure resources: hot-modifying the amount of memory assigned to a VM. I'll discuss not only the feature itself but also how it works on different OSes and its impact on environment stability.

All of us keep an eye on resource consumption within our environments. If a VM needs extra RAM to get the job done, we provide it with some, right? And we usually run many VMs on our servers, each with its own purpose and configuration. That's, actually, why changing the amount of memory assigned to a VM without rebooting it may come in handy. Also, many guys run some parts of their environments on Windows while having other parts run on something from the Linux family. Looks pretty hectic in terms of management, doesn't it?
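To give you a feel for the feature, here's a minimal PowerShell sketch of hot-adding memory to a running Hyper-V 2016 VM that uses static (non-dynamic) memory. The VM name "SQL01" and the sizes are just examples, not from the article:

```powershell
# See what the VM currently has assigned
Get-VMMemory -VMName "SQL01"

# Raise the assigned memory to 6 GB without shutting the guest down
# (works on a running VM in Hyper-V 2016 if the guest OS supports hot-add)
Set-VMMemory -VMName "SQL01" -StartupBytes 6GB
```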


Performance or protection? How Microsoft patches against Meltdown and Spectre influence CPU, RAM, and disk performance

In today's topic, I'd like to talk about the Meltdown and Spectre vulnerabilities. Not about the harm they cause – that has been covered widely in numerous articles – but about whether (and how) the Microsoft patches intended to protect you from these vulnerabilities affect hardware performance. Before we take a deep dive into the tests and numbers, let me say a few words about Meltdown and Spectre and outline the testing scope to make sure we speak the same language.
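Before comparing before/after numbers, it helps to confirm the mitigations are actually active on the test machine. One way to do that (not necessarily the one used in the article) is Microsoft's SpeculationControl PowerShell module:

```powershell
# Install and load the module (run in an elevated PowerShell session)
Install-Module -Name SpeculationControl -Scope CurrentUser
Import-Module SpeculationControl

# Reports the status of branch target injection (Spectre variant 2)
# and kernel VA shadow (Meltdown) mitigations on this machine
Get-SpeculationControlSettings
```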


How to save disk space on Clustered File Servers in Windows Server 2016 using the Data Deduplication feature

So, we all know about the benefits you get with data deduplication technology. Long story short, it minimizes a server application's storage consumption by reducing the amount of redundant data stored on the disk. As a result, you should get more space for your VMs and applications. How does it work for a file server? Well, that's what I'm gonna test here.
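For reference, enabling the feature on a file server volume takes just a few PowerShell commands. A rough sketch (the D: drive letter is an example):

```powershell
# Install the deduplication feature on Windows Server 2016
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the volume; "Default" is the general-purpose
# file server usage profile
Enable-DedupVolume -Volume "D:" -UsageType Default

# Check the savings once the optimization jobs have run
Get-DedupStatus -Volume "D:" | Format-List SavedSpace, SavingsRate
```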


The idea behind Node Fairness in Hyper-V: how it works and why you need it

For quite a long time, System Center Virtual Machine Manager (SCVMM) has had a feature called Dynamic Optimization. Its main goal is to automatically rebalance VMs between the participating cluster nodes when the placement becomes unequal. Now, this feature has partially become available in Windows Server 2016 in the form of Node Fairness. It balances the workloads among the hosts in a Hyper-V Failover Cluster and automatically live migrates guests from an overloaded node to a less busy one with zero downtime.

Node Fairness comes embedded in Windows Server 2016 and is intended for deployments without SCVMM. SCVMM Dynamic Optimization delivers more versatile functionality than Node Fairness, so Dynamic Optimization is recommended for balancing workloads among cluster hosts. However, to use it, you need an additional license on top of the main operating system.

Now that we know what Node Fairness is, let’s take a look at how this service works.
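Under the hood, Node Fairness is driven by two cluster common properties, which you can inspect and tune with PowerShell on any node of the cluster. A minimal sketch, with the value meanings as documented by Microsoft:

```powershell
# When to balance: 0 = disabled, 1 = on node join only,
# 2 = on node join and every 30 minutes (the default)
(Get-Cluster).AutoBalancerMode = 2

# How aggressively: 1 = low (move VMs when a host exceeds 80% load),
# 2 = medium (70%), 3 = high (60%)
(Get-Cluster).AutoBalancerLevel = 1
```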


Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 4: testing NFS on Linux

In the previous article, I measured the performance of NFS vs iSCSI to find out which network protocol is faster as storage for virtual machines on VMware ESXi. Well, iSCSI beat NFS under all testing patterns. Additionally, I evaluated and compared the performance of an NFS client connected to Linux (the Ubuntu Server 17.10 distribution) and to Windows Server 2016. According to the results, NFS server performance on Linux was higher than on Windows.


Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 3: test results

In the previous parts, I showed you the process of configuring the NFS and iSCSI protocols between our servers. So now, we've got everything ready to run our performance tests and finally find out which network protocol is faster as storage for virtual machines on VMware ESXi: NFS or iSCSI.

So, to benchmark iSCSI performance, I created a StarWind device on the server and connected it to the ESXi host over the iSCSI protocol. As for the OS for running further tests, I used Windows Server 2016.
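The exact benchmark command isn't shown in this teaser, but for illustration, a typical DiskSpd run from the Windows Server 2016 guest against the iSCSI-backed disk might look like this. The tool choice, paths, and parameters are my assumptions, not the article's actual test spec:

```powershell
# Hypothetical DiskSpd run: 4K random reads for 60 seconds, 4 threads,
# 32 outstanding I/Os per thread, against a 10 GB test file created
# on the iSCSI-backed disk (the drive letter is an example)
.\diskspd.exe -b4K -d60 -t4 -o32 -r -w0 -c10G E:\testfile.dat
```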


How Can I Replace a Failed Physical Disk in Storage Spaces Direct on Windows Server 2016?

So, we all know about Microsoft's Storage Spaces Direct (S2D for short) by now. It's a feature introduced in Windows Server 2016 (Datacenter Edition) that pools together servers' local storage, allowing you to build... that's right: highly available and easily scalable software-defined storage systems. In this article, I'm gonna talk not so much about its fault-tolerance characteristics themselves as about some hands-on experience, namely: how to replace a failed disk.
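As a preview of where the article is heading, here's a minimal PowerShell sketch of the replacement flow, run on one of the cluster nodes. The disk selection is illustrative, and the pool name "S2D on Cluster1" is an assumption (check yours with Get-StoragePool):

```powershell
# 1. Find the failed disk in the pool
$bad = Get-PhysicalDisk | Where-Object HealthStatus -ne "Healthy"

# 2. Retire it so Storage Spaces stops allocating data to it
$bad | Set-PhysicalDisk -Usage Retired

# 3. Watch the repair jobs redistribute data off the retired disk
Get-StorageJob

# 4. Once the new disk is inserted and repairs are done,
#    remove the retired disk from the pool
Remove-PhysicalDisk -PhysicalDisks $bad -StoragePoolFriendlyName "S2D on Cluster1"
```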


Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 2: configuring iSCSI

Cheers, friends! Not so long ago, we ran through the process of configuring an NFS disk and connecting it to the VMware host. What we're gonna do is measure and compare the performance of the NFS and iSCSI network protocols to see which one is more suitable for building a virtualized infrastructure. So, in this part, we'll create an iSCSI device and connect it to the VMware ESXi host.
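As a preview of the ESXi side of that setup, here's a rough VMware PowerCLI sketch. The host name and target IP are assumptions, and the StarWind device itself is created separately in the StarWind Management Console:

```powershell
Connect-VIServer -Server "esxi-host.lab.local"
$vmhost = Get-VMHost "esxi-host.lab.local"

# Turn on the software iSCSI initiator on the host
$vmhost | Get-VMHostStorage | Set-VMHostStorage -SoftwareIScsiEnabled $true

# Point the software iSCSI HBA at the target server (dynamic discovery)
$hba = $vmhost | Get-VMHostHba -Type iScsi
New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.1.10" -Type Send

# Rescan so the new device becomes visible to the host
$vmhost | Get-VMHostStorage -RescanAllHba
```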


Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 1: configuring NFS

Hi there! There have been quite a few debates over which network protocol is better when building a virtualization infrastructure: NFS or iSCSI. Some experts argue that iSCSI gives better performance and reliability due to its block-based storage approach, while others go in favour of NFS, citing management simplicity, large datastores, and the availability of cost-saving features like data deduplication on some NFS arrays.

Anyway, we’re not here for polemics but to see which protocol is better for your production environment, meaning, which one really provides higher performance for your mission-critical applications. That’s what we all want, right?

Just to make it clear, the whole project will be divided into three parts: configuring NFS, configuring iSCSI, and the testing itself.

So, first things first. In this first chapter, I’ll guide you through the process of configuring and preparing the NFS protocol for further testing.
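As a teaser, the final step of that process – mounting the NFS export as an ESXi datastore – can be done with a couple of VMware PowerCLI commands. A hedged sketch, where the host names, NFS server IP, share path, and datastore name are all examples:

```powershell
Connect-VIServer -Server "esxi-host.lab.local"

# Mount the NFS export as a datastore on the ESXi host
New-Datastore -VMHost (Get-VMHost "esxi-host.lab.local") -Nfs `
    -NfsHost "192.168.1.20" -Path "/mnt/nfs_share" -Name "NFS-Test"
```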

So, as Michael Buffer likes to say: "Let's get ready to rumble!"