Hyperconvergence has dramatically transformed the data center landscape over the past few years. New technologies are being developed, good old ones are being improved… We live in exciting times! And as data centers become more reliable and powerful, it is important to get more out of the hardware in use: nobody likes to leave money on the table! Intel, Mellanox, and StarWind have teamed up to build a highly available Hyper-V cluster that delivers awesome performance without compromising the environment's manageability. This article briefly discusses the measurements and showcases the recent results.
Windows Admin Center (WAC) is a locally deployed, browser-based management tool that gives you full control over your Windows Server environment. The nice thing is that it does not push you toward Azure or any other cloud, so it works for you even if you are not that enthusiastic about the public cloud.
This is the fourth part of my NVMe-oF initiators' performance study. Previously, I tested the NVMe-oF initiators developed by Linux, Chelsio (LINK), and StarWind (LINK). Here, the battle ends: which NVMe-oF initiator delivers the highest performance, and which one should Windows admins use?
Finally, I got hands-on experience with StarWind NVMe-oF Initiator. I read that StarWind did a lot of work to bring NVMe-oF to Windows (it's basically the first solution of its kind), so it's quite interesting for me to see how their initiator performs! In today's post, I measure the performance of an NVMe drive presented over Linux SPDK NVMe-oF Target and accessed through StarWind NVMe-oF Initiator.
I have to stop blogging because my wife is due to give birth soon, and I want to spend more time with my family rather than at my office or lab. Maybe I'll start writing again someday, who knows.
While some operating systems built on the Linux kernel support NVMe-oF, Windows simply does not. No worries, there are ways to bring this protocol to a Windows environment! In this article, I investigate whether presenting an NVMe drive over RDMA with Linux SPDK NVMe-oF Target + Chelsio NVMe-oF Initiator delivers the performance that flash vendors list in their datasheets.
Re-Investigating the Performance of SQL Server Availability Groups on Storage Spaces: Why You Should Always Enable Read-Only Routing
In this post, I am going to take a closer look at the impact of read-only routing on SQL Server Availability Groups performance.
I have measured SQL Server Availability Groups (AG) performance before, and a reader on Reddit recommended enabling read-only routing to achieve higher performance.
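In case you want to try it in your own lab, here is a minimal sketch of what enabling read-only routing looks like. The availability group name (MyAG), replica names (SQL1, SQL2), and routing URL below are placeholders for illustration, not my actual lab configuration; the T-SQL is sent from PowerShell via the SqlServer module:

```powershell
# Hypothetical AG named MyAG with primary replica SQL1 and secondary SQL2.
# Requires the SqlServer PowerShell module for Invoke-Sqlcmd.
$tsql = @"
-- Allow read-intent connections on the secondary replica
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQL2' WITH
(SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- Tell clients where read-intent traffic should be routed
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQL2' WITH
(SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQL2.contoso.local:1433'));

-- Define the routing list used while SQL1 is the primary
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQL1' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('SQL2', 'SQL1')));
"@

Invoke-Sqlcmd -ServerInstance 'SQL1' -Query $tsql
```

Keep in mind that routing only kicks in for clients that connect to the AG listener with ApplicationIntent=ReadOnly in their connection string; without it, all reads still land on the primary.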
Judging by how often NVMe-related headlines pop up on the Internet, NVMe-oF is still a hot topic. That's why I decided to pitch in 🙂
Setting up a failover cluster is something every admin has to do sooner or later. To build such a cluster, you need to configure shared storage, and there are a lot of ways to do that. Today, I'd like to discuss how to build a Windows Failover Cluster using a virtual SAN solution (StarWind Virtual SAN) as the shared storage provider.
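To give you a feel for the cluster side of the process, here is a minimal PowerShell sketch. It assumes two nodes (SW-NODE1, SW-NODE2) that already see the StarWind Virtual SAN devices as shared disks; the node names, cluster name, and IP address are placeholders:

```powershell
# Install the Failover Clustering feature on each node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Validate the configuration before building the cluster
Test-Cluster -Node SW-NODE1, SW-NODE2

# Create the cluster itself
New-Cluster -Name SW-CLUSTER -Node SW-NODE1, SW-NODE2 -StaticAddress 192.168.0.100

# Add the StarWind-backed shared disks to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk
```

From there, one of the disks can be assigned as a disk witness (for example, with Set-ClusterQuorum -DiskWitness) to keep quorum healthy in a two-node setup.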
Can SQL Server Failover Cluster Instance run on S2D twice as fast as SQL Server Availability Groups on Storage Spaces? Summary
Now that I am done measuring the performance of SQL Server Basic Availability Groups (BAG) on Storage Spaces and SQL Server Failover Cluster Instances (FCI) on Storage Spaces Direct (S2D), I can write the most interesting part of this series: the performance comparison.
Subscribe to my posts
- Hyper-V Replica
- PowerShell wizard script: Configure Hyper-V Replica in different scenarios (domain, workgroups, and mixed option)
- Azure Site Recovery (ASR)
- Migrating to the cloud is easy. My experience of choosing P2V converters.
- Deploying a Windows Server 2019 S2D Cluster using Azure Resource Manager Templates