Imagine you have a solution to a problem or a task, and now you need to evaluate how optimal that solution is from a performance perspective. The most obvious way is to use Stopwatch, like this:
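The original listing is not reproduced here, but a minimal sketch of that approach looks like this (the measured operation, parsing a year out of a date string, is a hypothetical stand-in):

```csharp
using System;
using System.Diagnostics;

// Naive one-shot measurement with Stopwatch.
var stopwatch = Stopwatch.StartNew();

// Hypothetical code under test: extract the year from a date string.
int year = DateTime.Parse("2024-05-01").Year;

stopwatch.Stop();
Console.WriteLine($"Elapsed: {stopwatch.Elapsed.TotalMilliseconds} ms (year = {year})");
```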
However, there are several issues with this method:
- It is quite inaccurate: the code being evaluated is executed only once, and the execution time can be affected by side effects such as hard disk performance, a cold cache, processor context switching, and other running applications.
- It does not test the application in Release (production) mode. During Release compilation, a significant part of the code is optimized automatically, without our participation, which can seriously affect the final result.
- Your algorithm may perform well on a small dataset but underperform on a large one (or vice versa). Therefore, to test performance in different situations with different data sets, you will have to write new code for each scenario.
So what other options do we have? How can we evaluate the performance of our code properly? BenchmarkDotNet is the solution for this.
Benchmark setup
BenchmarkDotNet is a NuGet package that can be installed in any type of application to measure the speed of code execution. To do this, we only need two things: a class containing the code to benchmark and a runner to execute it.
Here’s what a basic benchmarking class looks like:
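The article's original listing is not shown here; based on the attributes discussed below, a minimal reconstruction looks roughly like this (the class name, input string, and method body are assumptions):

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;

[MemoryDiagnoser]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class YearParsingBenchmarks
{
    // Hypothetical input; the article's exact test data is not shown.
    private const string DateString = "2024-05-01T10:00:00";

    // Baseline: every other variant will be compared against this method.
    [Benchmark(Baseline = true)]
    public int GetYearFromDateTime() => DateTime.Parse(DateString).Year;
}
```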
Let’s break down this class, starting with the attributes.
The MemoryDiagnoser attribute collects information about the Garbage Collector’s operation and the memory allocated during code execution.
The Orderer attribute determines the order in which the final results are displayed in the table. In our case, it is set to FastestToSlowest, meaning the fastest code appears first and the slowest last.
The RankColumn attribute adds a column to the final report, numbering the results from 1 to X.
We have also added the Benchmark attribute to the method itself, marking it as one of the test cases. The Baseline=true parameter says that we will treat this method’s performance as 100% and evaluate the other algorithm variants relative to it.
To run the benchmark, we need the second piece of the puzzle: the runner. It is simple: we go to our Program.cs (in a console application) and add one line with BenchmarkRunner:
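With the class sketched above, that single line comes down to:

```csharp
using BenchmarkDotNet.Running;

// Program.cs: hand the benchmark class to the runner.
BenchmarkRunner.Run<YearParsingBenchmarks>();
```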
Then we build our application in Release mode (for example, dotnet run -c Release) and run it.
Analysis of results
If everything is set up correctly, then after running the application, we will see BenchmarkRunner execute our code multiple times and eventually produce the following report:
Important: any outlier executions (runs much faster or slower than the average) are excluded from the final report. The removed outliers are listed below the results table.
The report contains quite a lot of data about the performance of the code, including the version of the OS on which the test was run, the processor used, and the version of .NET. But the main information that interests us is in the last table, where we see:
- Mean – the average time it takes to execute our code;
- Error – half of the 99.9% confidence interval of the measurement;
- StdDev – the standard deviation of all measurements;
- Ratio – the ratio of this method’s mean time to that of the baseline method we marked as the starting point (remember Baseline=true above?);
- Rank – the method’s position in the results, from fastest to slowest;
- Allocated – the memory allocated during execution of our method.
Real test
To make the final results more interesting, let’s add a few more variants of our algorithm and see how the results change.
Now, the benchmark class will look like this:
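A plausible reconstruction of the expanded class follows; the method bodies are illustrative guesses at the named variants, not the author's exact code:

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;

[MemoryDiagnoser]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class YearParsingBenchmarks
{
    private const string DateString = "2024-05-01T10:00:00";

    [Benchmark(Baseline = true)]
    public int GetYearFromDateTime() => DateTime.Parse(DateString).Year;

    // Allocates an array of substrings just to read the first one.
    [Benchmark]
    public int GetYearFromSplit() => int.Parse(DateString.Split('-')[0]);

    // Allocates one intermediate string.
    [Benchmark]
    public int GetYearFromSubstring() => int.Parse(DateString.Substring(0, 4));

    // Reads the first four characters as digits without allocating.
    [Benchmark]
    public int GetYearFromSpanWithManualConversion()
    {
        ReadOnlySpan<char> span = DateString.AsSpan(0, 4);
        int year = 0;
        foreach (char c in span)
            year = year * 10 + (c - '0');
        return year;
    }
}
```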
Our focus now is on benchmarking. We will leave the evaluation of the algorithms themselves for the next article.
And here is the result of running this benchmark:
We see that GetYearFromDateTime, our starting point, is the slowest at about 218 nanoseconds, while the fastest option, GetYearFromSpanWithManualConversion, takes only 6.2 nanoseconds, roughly 35 times faster than the original method.
We can also see how much memory was allocated for the two methods GetYearFromSplit and GetYearFromSubstring, and how long it took the Garbage Collector to clean up this memory (which also reduces overall system performance).
Working with Various Inputs
Finally, let’s discuss how to evaluate the performance of our algorithm on both large and small data sets. BenchmarkDotNet provides two attributes for this: Params and GlobalSetup.
Here is the benchmark class using these two attributes:
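The original class is not reproduced here; what exactly its Span and NewArray methods did is not shown, so the workload below (summing the second half of an array, with and without an intermediate allocation) is a stand-in chosen to match the results described later:

```csharp
using System;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
public class ArrayBenchmarks
{
    // BenchmarkDotNet runs the full suite once per value listed here.
    [Params(10, 1000, 10000)]
    public int Size;

    private int[] _data = Array.Empty<int>();

    // Runs once per Size value, before any measurements.
    [GlobalSetup]
    public void Setup()
    {
        _data = new int[Size];
        var random = new Random(42); // fixed seed for reproducibility
        for (int i = 0; i < Size; i++)
            _data[i] = random.Next();
    }

    // Hypothetical workload: copy the second half into a new array, then sum it.
    [Benchmark(Baseline = true)]
    public int NewArray()
    {
        int[] half = new int[Size / 2];
        Array.Copy(_data, Size / 2, half, 0, Size / 2);
        int sum = 0;
        foreach (int value in half) sum += value;
        return sum;
    }

    // Same workload over a Span: no intermediate allocation.
    [Benchmark]
    public int Span()
    {
        int sum = 0;
        foreach (int value in _data.AsSpan(Size / 2)) sum += value;
        return sum;
    }
}
```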
In our case, the Size field is parameterized and affects the code that runs in GlobalSetup.
As a result of executing GlobalSetup, we generate an initial array of 10, 1000 and 10000 elements to run all test scenarios. As mentioned earlier, some algorithms perform effectively only with a large or small number of elements.
Let’s run this benchmark and look at the results:
Here, we can clearly see the performance of each method with 10, 1000 and 10000 elements: the Span method consistently leads regardless of the input data size, while the NewArray method performs progressively worse as the data size increases.
Graphs
The BenchmarkDotNet library allows you to analyze the collected data not only in text and tabular form but also visually, as graphs.
To demonstrate, we will create a benchmark class to measure the runtime of different sorting algorithms on .NET 8, configured to run with three input sizes: 1000, 5000, and 10000 elements (a sketch of such a class follows the list below). The sorting algorithms are:
- DefaultSort – the default sorting algorithm used in .NET 8
- InsertionSort – insertion sort
- MergeSort – merge sort
- QuickSort – quick sort
- SelectSort – selection sort
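A compressed sketch of what such a class could look like, abbreviated to two of the five methods (the implementations are standard textbook versions, not the author's code; RPlotExporter is the BenchmarkDotNet exporter that produces the plots and requires R to be installed):

```csharp
using System;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
[RPlotExporter] // emits *.png plots alongside the usual reports (requires R)
public class SortingBenchmarks
{
    [Params(1000, 5000, 10000)]
    public int Size;

    private int[] _source = Array.Empty<int>();

    [GlobalSetup]
    public void Setup()
    {
        var random = new Random(42);
        _source = new int[Size];
        for (int i = 0; i < Size; i++)
            _source[i] = random.Next();
    }

    [Benchmark(Baseline = true)] // assumed baseline for the Ratio column
    public int[] DefaultSort()
    {
        // Sort a fresh copy so every invocation starts from the same input.
        int[] data = (int[])_source.Clone();
        Array.Sort(data);
        return data;
    }

    [Benchmark]
    public int[] InsertionSort()
    {
        int[] data = (int[])_source.Clone();
        for (int i = 1; i < data.Length; i++)
        {
            int key = data[i];
            int j = i - 1;
            while (j >= 0 && data[j] > key)
            {
                data[j + 1] = data[j];
                j--;
            }
            data[j + 1] = key;
        }
        return data;
    }

    // MergeSort, QuickSort, and SelectSort follow the same pattern:
    // clone _source, sort the copy with the textbook algorithm, return it.
}
```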
The benchmark results include a summary in the form of a table and a graph:
BenchmarkDotNet also generated separate graphs for each benchmark (in our case, for each sorting algorithm) based on the number of sorted elements:
We have covered the basics of working with BenchmarkDotNet and how it helps us evaluate the results of our work, making informed decisions about which code to keep, rewrite, or delete.
This approach allows us to build the most productive systems, ultimately improving user experiences.
Author – Anton Vorotyncev