How to Benchmark C# Code Using BenchmarkDotNet | by Jamie Burns | Jul, 2022

This amazingly simple tool allows you to objectively measure and compare your code

A photo of different measuring devices
Photo by Dan-Cristian Pădureț on Unsplash

There will be times when the performance of your code really matters.

.NET has improved to such an extent that most of the time we don’t need to worry about it, and the code we write will already work well.

The problems come when you’re writing code that’s going to be run lots and lots of times — maybe thousands of times a second. When you’ve got code like this, then every millisecond counts, and it can be a little tricky finding out how to optimise it.

The first step in fixing a performance problem is diagnosing where the issue is, and for this, you need to be able to accurately measure the performance of your code.

Say hello to BenchmarkDotNet.

I recently came across this tool when writing up a prototype for a tree-traversal process, and I couldn’t decide which way of navigating the tree would be the quickest.

BenchmarkDotNet made it really easy to run a test for each approach I’d written, and objectively indicate which option I should pick.

Screenshot of a command line output from BenchmarkDotNet showing the results of a test run. It has columns for time, memory usage, etc., and rows for each individual test performed.
My final results for the different ways of processing this tree, calculated and displayed by BenchmarkDotNet

This is what I ended up with, and it let me choose my ‘DictionaryRecursive’ approach, since its response times were consistently low (less than 6ms), despite its memory usage being higher. For my prototype, this was all I needed, and I got these results in just a few minutes.

Let’s go through what we need to do to run tests like this.

These tests require some code to run. In my case, I had different implementations of a pure method that could be run with different parameters, and I wanted to test how each performed with a range of parameters.

I used an interface to allow me to swap the implementations around.

I then implemented this method in a few different ways, so that I could compare the results.
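The original code listing isn’t shown here, so the sketch below is illustrative: the interface name, the Calculate method, and the two implementations are my stand-ins for the ‘approach’ classes the article describes, not the article’s actual code.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical interface — lets each 'approach' be swapped in and out.
public interface INodeApproach
{
    // A pure method: same input always produces the same result.
    int Calculate(int nodeCount);
}

// A naive list-based implementation, as a placeholder for one approach.
public class BasicListApproach : INodeApproach
{
    public int Calculate(int nodeCount)
    {
        var nodes = new List<int>();
        for (int i = 0; i < nodeCount; i++) nodes.Add(i);

        var total = 0;
        foreach (var n in nodes) total += n;
        return total;
    }
}

// A dictionary-based implementation producing the same result,
// so the benchmark compares like for like.
public class DictionaryApproach : INodeApproach
{
    public int Calculate(int nodeCount)
    {
        var nodes = new Dictionary<int, int>();
        for (int i = 0; i < nodeCount; i++) nodes[i] = i;

        var total = 0;
        foreach (var kvp in nodes) total += kvp.Value;
        return total;
    }
}
```

The key point is that every implementation satisfies the same interface and returns the same result for the same input, so any difference the benchmark reports comes purely from how each approach does the work.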

Some implementations I knew would be slower than others, but I still wanted the actual figures to base my decision on. As it turned out, I was a little surprised by the results.

The benchmark project

Once you’ve got some code you want to test, create a new Console App, and install the BenchmarkDotNet NuGet package.

Next, create a new class (which I called Benchmark), and give it the [MemoryDiagnoser] attribute. I also added a [RankColumn] attribute, to include this in the results, but this is optional.

We add our individual benchmark tests within this class. These tests are simple methods with the [Benchmark] attribute applied.

Here I’ve added five benchmark tests. Each one instantiates a new ‘approach’ class and calls the Calculate method with some generated input.
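The article’s class isn’t reproduced here, so this is a sketch of its shape: the attributes are real BenchmarkDotNet attributes, but the approach classes and the input size are placeholders (and only two of the five tests are shown).

```csharp
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser] // reports allocated memory alongside timings
[RankColumn]      // optional: adds a Rank column to the results
public class Benchmark
{
    // One [Benchmark] method per approach — BenchmarkDotNet runs each
    // many times and reports the mean. Add one of these per approach.
    [Benchmark]
    public int BasicList() => new BasicListApproach().Calculate(1000);

    [Benchmark]
    public int Dictionary() => new DictionaryApproach().Calculate(1000);
}
```

Returning the result (rather than discarding it) matters: it stops the JIT from optimising the whole call away as dead code.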

Next, we need to update Program.cs to run the tests. This is done by calling BenchmarkRunner.Run&lt;Benchmark&gt;(), which executes the benchmark tests. I’m using .NET 6’s new top-level statements (which is why there’s no Main method).
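With top-level statements, the whole of Program.cs can be as short as this (the Benchmark class name matches the one created above):

```csharp
// Program.cs — top-level statements, so no Main method is needed.
using BenchmarkDotNet.Running;

BenchmarkRunner.Run<Benchmark>();
```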

And that’s it, that’s all the code you need to write.

You might be tempted to give this Console App a run right away. If you do, you might get this error.

A screenshot of the command line output from the benchmark test. The error says: Assembly Nodes.Benchmark which defines benchmarks references non-optimized Nodes.API
The error you see when running the benchmark in a Debug configuration

The full message is this:

// Validating benchmarks:
Assembly Nodes.Benchmark which defines benchmarks references non-optimized Nodes.API
If you own this dependency, please, build it in RELEASE.
If you don’t, you can disable this policy by using ‘config.WithOptions(ConfigOptions.DisableOptimizationsValidator)’.
Assembly Nodes.Benchmark which defines benchmarks is non-optimized
Benchmark was built without optimization enabled (most probably a DEBUG configuration). Please, build it in RELEASE.
If you want to debug the benchmarks, please see
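If you genuinely do need to run benchmarks against a non-optimised build (the results won’t be representative, so this is rarely a good idea), the error message itself points at the escape hatch. A sketch, using the option the message names:

```csharp
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Running;

// Disable the optimisations validator, as suggested by the error message.
// Only do this when you can't rebuild the dependency in Release.
var config = DefaultConfig.Instance
    .WithOptions(ConfigOptions.DisableOptimizationsValidator);

BenchmarkRunner.Run<Benchmark>(config);
```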

.NET applies a lot of performance optimisations when it builds something in a Release configuration, which aren’t applied in a Debug build. It’s always a good idea to do any performance testing against the optimised code, so you’ll need to change how you run this benchmark project.

If you’re using Visual Studio 2022, you can just select the Release configuration from the toolbar and select ‘Start without Debugging’ (Ctrl+F5).
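If you prefer the command line, the dotnet CLI does the same thing (run from the benchmark project’s folder):

```shell
dotnet run -c Release
```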

Screenshot of Visual Studio 2022 showing how to change the build configuration from Debug to Release
Select Release from the configuration options, and then ‘Start without Debugging’

Once set as Release, you can run your Console App. The benchmark tests will then begin, and you’ll start to see the results output. When the tests are complete, you get the final summary output which shows how long each approach took (on average) to run, along with memory usage.

Screenshot of the results of the benchmark tests. It shows the first 3 approaches (BasicListApproach, BasicListFilteredApproach and RecursiveLoopApproach) are much slower than the final 2 approaches (DictionaryApproach and DictionaryRecursiveApproach). However, the memory usage of the final 2 approaches is about double that of the first 3.
The results of the benchmark test

So, for my test we can clearly see that some of the approaches are quicker than others, but also some are more memory-intensive. This is all really useful data to be able to inform our decision as to which approach we should use, or whether a particular approach needs to be optimised (and benchmarked) further.

The one thing that surprised me about this is that my BasicListFilteredApproach was actually the slowest of all the approaches, even slower than the original BasicListApproach, which it was supposed to be improving upon.

That’s why this data is so important — without it, I would have been left with my assumptions and gut instinct, which, in this case, would have been wrong. Either way, the data shows that the dictionary-based approaches perform significantly better than the others, which is always good to know.

We’ve seen that we can easily benchmark C# code using BenchmarkDotNet. We can write up a range of different tests to be run and compare all the results to see how each performs. We’ve also seen that the benchmark tests need to be built under a Release configuration, otherwise your results will be incorrect.

As with any performance optimisation work, it’s likely to be an iterative process. My next step with this would be to look at the best-performing approaches and see if anything can be further improved until we get to a level of performance that we’re happy with.

Being able to get benchmark results out as quickly as this means that we’ll be able to measure future improvements, and know when we’ve hit this level.
