3 Example Configurations for CTFP within Blackbaud CRM SQL Servers

Feed Me, Seymour!


We’ve been discussing some of Blackbaud CRM’s monstrous appetites. Among them is its never-ending thirst for RAM, driven by the ad hoc workload it faces.

But if you really want to get a leash on things, there’s another habit you’re going to be fighting, time and time again, if you don’t configure SQL Server correctly: the precarious balancing act of concurrency, or how much work you’re doing at the same time.

Improperly configured, easy jobs get handled in a complex, resource-taxing way, while your truly difficult tasks wait in line behind all the easy work that naturally piles up. If that doesn’t sound very efficient, it’s because it isn’t.

Want to know more? Let’s explore!

The Core of the Issue


For this discussion, we are going to focus on processing at a particular end of your Blackbaud CRM deployment. If you truly want to get control over the performance of Blackbaud CRM, at its heart is the OLTP side of SQL Server, or Online Transaction Processing. Faster OLTP means a faster response for end users of Blackbaud CRM.

So the balancing act starts with keeping processing speed as high as possible, with the proper amount of cores to handle the workload when it can be split. This is where a lot of mistakes are made.

Everyone seems to know that a higher base clock for the processor equates to better performance. Where we see organizations sacrificing speed is in how many cores they provision. Processing speed takes a back seat to adding more “cylinders” to the engine, under the impression that a big cluster of cores will make up for the drop in GHz.

For the best end user experience for Blackbaud CRM, you want AT LEAST 3.2 GHz processing. If that’s hard to get from your current Public Cloud Provider, there are a few good reasons: they work with commodity hardware in bulk, so lots of cores from lots of machines at a slower speed is what they can offer.

Your Production OLTP SQL Server should have 8 to 16 cores. We’ve seen plenty of Production SQL Servers for Blackbaud CRM with more than this, sometimes up to 40 cores, and they all had a few things in common:

1. Generally, the processing speed takes a hit. These usually run around 1 GHz below what we recommend for Blackbaud CRM, typically because of the constraints of Public Cloud Providers, or as a compromise to get more cores.

2. In the compromises department, most were RAM starved.

3. In general, they all suffered from persistent, hard-to-pin-down performance issues, likely because of all the work it takes to delegate across that many cores.


Touching on that last note above, there is a cost to spreading work across cores. Once the workload is split up, the processor has to spend time stitching the results back together. Breaking it up makes each piece “easier” to digest, but putting the results back together again IS work.

Having some cores in reserve doesn’t hurt though, right?

This is another mistake waiting to be made: the more cores a system has, the more work it takes to spread the load across them. So those additional cores could actually be hurting things.

With all of this in mind, we want faster processing, across ENOUGH cores, for Blackbaud CRM’s OLTP SQL to run at its best, and we need that work split up efficiently.

How do we control when SQL Server goes parallel across the cores? There’s a setting that controls it, and by default, it’s not doing you any favors.

Paying the Toll: CTFP


When SQL Server handles a query, the optimizer assigns the query an estimated cost. Below this cost, it stays on a single core. Above this cost, it goes parallel across the cores, up to a maximum controlled by a different setting, MAXDOP (max degree of parallelism).

This is called Cost Threshold for Parallelism, or CTFP.

Understanding this setting and how it can impact Production SQL OLTP is key to getting a handle on your performance.
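
If you want to see where your server currently stands, both values can be read from `sys.configurations`. A quick sketch, to be run on the OLTP SQL Server in question:

```sql
-- Check the current parallelism settings on this instance
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'cost threshold for parallelism', N'max degree of parallelism');
```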

If a complex query comes along, and the work is spread out across multiple cores, this is good, because splitting up the work made it easier to tackle.

If a simple query is split up across the cores, this is bad, because there is processor time spent putting the work back together again.

Plus, it wasn’t necessary if the query could have been handled efficiently by a single core. We want simple queries to stay on a single core; they process more quickly that way.
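
For a one-off test, you can force a specific query to stay serial with a hint, regardless of the server-wide setting. A sketch against a hypothetical table (the table and column names here are illustrative, not from any particular Blackbaud CRM schema):

```sql
-- Hypothetical example: force this query onto a single core,
-- no matter where CTFP is set, via a MAXDOP query hint
SELECT CONSTITUENTID, NAME
FROM dbo.CONSTITUENT   -- illustrative table name
ORDER BY NAME
OPTION (MAXDOP 1);
```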

To more fully explore the topic, let’s look at some example configurations:

1. Default



Results: Nearly everything is spread across the cores.

This is the default setting for SQL Server (a cost threshold of 5), and it doesn’t bode well if left unchanged. Just about any query is going to be spread out across all the different cores, require time to get reassembled, and all of your cores are going to be busy with work, great or small, complex or simple.

The issues could manifest in a few different ways. You might see something consistent, like a slowdown during the busiest parts of the day, when you have the most end users. It could be more intermittent, like periodic slowdowns caused by a particularly chewy report taxing SQL Server’s resources.

The point being: the default sets a very low toll for access to additional cores.
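
In T-SQL terms, the out-of-the-box configuration amounts to the following (5 is SQL Server’s shipping default):

```sql
-- The out-of-the-box value: any query with an estimated cost
-- above 5 is eligible to go parallel
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 5;
RECONFIGURE;
```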

2. The Other Direction



Results: NOTHING goes parallel!

I’m not grabbing this number out of thin air. I wanted to bring it up because it’s a common enough misconfiguration that it’s worth mentioning. We’ve run across systems where CTFP was set this high, and when we asked about it, we were advised of something interesting.

ALLEGEDLY, it was set this way on recommendation from someone that ALLEGEDLY once worked for Blackbaud. Why not use a winning formula, right?

Looking at it from Blackbaud’s perspective, this would make sense: They are hosting TONS of clients on the same SQL Server, and this makes sure nobody is going to hog all the cores.

KEEP IN MIND: We have no insider knowledge to know, for a fact, that Blackbaud has CTFP set this high within their own hosted environment. This is just our thought process behind why they might, and why it wouldn’t apply to a system hosting just YOUR Blackbaud CRM.
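
To illustrate the idea (the exact value we encountered isn’t reproduced here): a value at or near the setting’s documented maximum of 32767 effectively guarantees no query’s estimated cost ever crosses the threshold, so nothing goes parallel.

```sql
-- Illustrative only: pushing CTFP to its documented maximum (32767)
-- means effectively no query will ever go parallel
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 32767;
RECONFIGURE;
```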

3. Striking a Balance



Results: Complex queries go parallel, simple ones do not.

This is our personal starting point for CTFP when it comes to the clients that we host. Simple, low-cost queries stay on a single core, and are thus completed more quickly, increasing concurrency. But when a large, complex query comes along, additional horsepower is available whenever its cost exceeds 50.

This isn’t a magic number! Every deployment is unique, and every organization is going to be dealing with differing workloads. In our experience, you shouldn’t go any lower than 25, with 100 being our absolute upper limit. To truly tune this setting, you will need to perform some testing, and adjust according to results.
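
As a sketch, our starting point of 50 translates to:

```sql
-- Starting point for a dedicated Blackbaud CRM OLTP server:
-- queries costing 50 or less stay serial; costlier ones may go parallel
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```

From there, adjust within the 25 to 100 range described above, based on your own testing.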

In Conclusion

So to recap: at the heart of your struggle for better performance is the pursuit of better OLTP SQL performance, as this is key to responsiveness for end users. OLTP will be happiest when it has a LOT of RAM, and when it’s tuned to spread its work across multiple cores only when necessary. Testing different CTFP configurations at the OLTP SQL Server is going to be your ticket, if that option is available to you.

Not every organization has the option to break up SQL duties the way we do at Concourse Hosting. Your SQL Server could very well be performing other SQL-related duties for your organization, and these configurations may or may not make sense given everything else your server has to do.

This level of separation and dedication is something that’s possible at Concourse because we focus exclusively on hosting Blackbaud CRM. Our clients have a machine dedicated to handling just the OLTP SQL duties for Blackbaud CRM, while your on-premise SQL Server can keep serving its current applications.