Can SQL Server Failover Cluster Instance run twice as fast as SQL Server Basic Availability Groups in a 2-node cluster? Part 3: Comparison time!
In this article, I’d like to compare the results of the previous two parts and find out whether SQL Server Failover Cluster Instance (FCI) can deliver twice the performance of SQL Server Basic Availability Groups (BAG).
Can SQL Server Failover Cluster Instance run twice as fast as SQL Server Basic Availability Groups in a 2-node cluster? Part 2: Studying FCI performance
In my previous article, I measured SQL Server Basic Availability Groups (BAG) performance. This one, as the title suggests, addresses SQL Server Failover Cluster Instance (FCI) performance. I expect SQL Server FCI to deliver twice the performance of BAG. Before I start, there is one important thing to mention about this measurement: the SQL Server FCI database resides on a StarWind virtual device. Why did I choose StarWind? Because I got their NFR license some time ago and decided to give this software-defined storage solution a shot. Let’s just hope it won’t limit SQL Server FCI performance.
Can SQL Server Failover Cluster Instance run twice as fast as SQL Server Basic Availability Groups in a 2-node cluster? Part 1: Studying BAG performance
I thought: “Hey, why not write an article about BAG performance?” Then I realized that performance numbers mean little unless you compare them to something else, right? So, I decided to add SQL Server Failover Cluster Instance (FCI) performance measurements. Maybe I’ll add some SQL Server Availability Groups (AG) measurements at the end; but let’s first see whether SQL Server FCI can run twice as fast as SQL Server BAG. In this part, I measure BAG performance alone. Now that we know the scope of the article, let’s move on!
This post addresses Hyper-V live migration, a topic every admin faces at some point. In my salad days as an admin, Hyper-V live migration was a saving grace, so I decided to write an article about it. In it, I cover the live migration and migration wizard settings that ensure maximum performance of the process.
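Just to give you a taste of the knobs the article plays with, here is a minimal PowerShell sketch of tuning live migration on a host. It assumes the Hyper-V module is available and that you have a dedicated migration network; the subnet 172.16.0.0/24 below is only a placeholder for yours:

```powershell
# Allow live migrations on this host and authenticate with Kerberos
# (Kerberos needs constrained delegation configured between the hosts)
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Pick a performance option (SMB or Compression) and allow a couple of
# simultaneous migrations; tune the number to what your NICs can handle
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB `
           -MaximumVirtualMachineMigrations 2

# Keep migration traffic on a dedicated network (placeholder subnet)
Add-VMMigrationNetwork 172.16.0.0/24
```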
Some time ago, I wrote an article about backup storage media. Today, I’d like to talk about secondary storage. Before I move on, I want to clarify what I mean by “secondary storage” here, just to make sure we are on the same page: it is the storage where actively used data resides. That can be local storage, such as a SAN or NAS, or a public cloud hot tier. Sure, you can use disk arrays too, but for today let’s think of them simply as NAS-like servers packed with many disks, ok? Which side you are on is entirely up to you, and there’s no one-size-fits-all solution: NAS, SAN, and public cloud storage each have their own pros and cons. I discuss them in this article.
Day in, day out, admins troubleshoot issues remotely. And pretty often, they cannot count on somebody at the remote site to enable Remote Desktop (RD) on the remote host for them. It may also happen that you lose RD access to another computer for some reason, and there’s no one in the remote office who can help you. Anyway, I hope you get the point. So what do you do? Sure, you could just ask a fellow admin to enable RD on the remote host and wait a bit, but what if the issue is really urgent and you have to fix it in the middle of the night? Let’s think through what you can do in that case.
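As a quick preview, here is a minimal sketch of one possible way out, assuming PowerShell remoting (WinRM) still works against the box; “SRV01” is just a placeholder name:

```powershell
# Flip the registry value that controls Remote Desktop and open the firewall.
# This only works if PowerShell remoting (WinRM) is already enabled remotely.
Invoke-Command -ComputerName SRV01 -ScriptBlock {
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
                     -Name 'fDenyTSConnections' -Value 0
    # Display group name applies to English builds of Windows
    Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'
}
```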
Sometimes, guys running home labs do not have licenses for Remote Desktop Services (RDS). Well, that’s not a big deal, you know, because Microsoft provides a 120-day grace period for the platform! However, one day the time runs out and the RDS server breaks all the client connections. On that day, admins have to choose between reinstalling the server and cheating a bit to reset the 120-day RDS grace period.
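For the impatient: the commonly described trick boils down to deleting the grace-period value from the registry. Below is a hedged PowerShell sketch of that idea; the key is protected, so you usually have to take ownership of it first, and this is strictly a home-lab move:

```powershell
# The grace-period timer is stored as a binary value under this key; the
# usual "reset" trick is to delete that value and reboot. The key is owned
# by the system, so take ownership of it (e.g. in regedit) before running this.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\RCM\GracePeriod'

# List whatever value(s) sit under the key, then remove them
Get-Item $key | Select-Object -ExpandProperty Property
Get-Item $key | Select-Object -ExpandProperty Property |
    ForEach-Object { Remove-ItemProperty -Path $key -Name $_ }
```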
In today’s post, I discuss why PowerShell sometimes behaves in unexpected ways. Specifically, I shed light on why you cannot run scripts or access a computer in a different domain. I’ll also take a closer look at how some cmdlets work.
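To set the stage, here is a small PowerShell sketch of the two usual suspects: the execution policy that blocks scripts, and the TrustedHosts list that blocks remoting to machines outside your domain (“SRV01.other.domain” is a placeholder):

```powershell
# Why scripts won't run: the execution policy is often Restricted by default
Get-ExecutionPolicy -List
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

# Why remoting to a machine outside your domain fails: WinRM only trusts
# domain members by default, so non-domain targets must be listed explicitly
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'SRV01.other.domain' -Force
Enter-PSSession -ComputerName SRV01.other.domain -Credential (Get-Credential)
```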
In fact, losing even a small portion of your data is not fun, especially if you did not back it up for some reason. So, in today’s post, I’d like to talk about backup storage media. I’ll review the most popular options and share some ideas on how to pick the right one for your backups. I know that the choice of media strongly depends on the volume of data to back up, the peculiarities of your environment, and the admin’s preferences. But while looking for a backup storage solution, we all also weigh some general things like price, resulting environment scalability, and media reliability. That’s what I highlight today: my ideas on how to pick a backup storage solution that may be a perfect fit for your environment.
Getting more storage capacity for data or applications is not a problem. The real question is how fast your apps can run. That’s why performance is the number one demand of most system administrators. Finding an article on improving Hyper-V and VM performance is not a big deal; the challenge is getting up-to-date info and modern insights. That’s why in this post I’m gonna give some advice on boosting your Hyper-V infrastructure performance – from the host to the virtual machines to overall cluster optimization. This might come in handy when building a new Hyper-V-based environment or improving an existing one. Let’s put the pedal to the metal!
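As a teaser, here is a small PowerShell sketch of a few typical tuning knobs; the VM name, sizes, and paths below are placeholders, so adjust them to your hardware:

```powershell
# Give the VM enough vCPUs (VM must be off to change this) and let it use
# Dynamic Memory within sane bounds
Set-VMProcessor -VMName 'SQL-VM01' -Count 4
Set-VMMemory    -VMName 'SQL-VM01' -DynamicMemoryEnabled $true `
                -MinimumBytes 4GB -StartupBytes 8GB -MaximumBytes 16GB

# Prefer a fixed-size VHDX on fast storage for write-heavy workloads
New-VHD -Path 'D:\VHDs\SQL-VM01-data.vhdx' -SizeBytes 200GB -Fixed

# Check that VMQ is enabled on the physical NICs backing your vSwitch
Get-NetAdapterVmq
```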
- How is NVMe-oF doing? Part 2: Chelsio NVMe-oF Initiator + Linux SPDK NVMe-oF Target
- Re-investigating performance of SQL Server Basic Availability Groups on Storage Spaces. Why You Should Always Enable Read-Only Routing
- How is NVMe-oF doing? Part 1: Linux NVMe-oF Initiator + Linux SPDK NVMe-oF Target
- Setting up a Windows Failover Cluster for a home lab
- Can SQL Server Failover Cluster Instances run on S2D twice as fast as SQL Server Basic Availability Groups on Storage Spaces? Summary