
Memory optimization #39 #40

Open · wants to merge 6 commits into master
Conversation

@kfrancis (Contributor) commented Oct 3, 2024

No description provided.

Instead of allocating new byte arrays each time you perform a cryptographic operation (such as signing, encryption, or converting data to/from Base64), you can use a buffer pool. A buffer pool maintains a set of preallocated buffers that can be reused, reducing the need for frequent allocations. In .NET, the ArrayPool<T> class is ideal for this purpose.
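As a minimal sketch of that rent/use/return pattern (the names here are illustrative, not taken from this PR's diff):

```csharp
using System;
using System.Buffers;

class BufferPoolSketch
{
    static void Main()
    {
        // Rent a reusable buffer instead of allocating a fresh byte[] per operation.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(16);
        try
        {
            // Use only the first 16 bytes: Rent may return a larger array.
            Span<byte> salt = buffer.AsSpan(0, 16);
            salt.Fill(0xAB); // placeholder for real work (e.g. filling a random salt)
        }
        finally
        {
            // Return the buffer so later operations can reuse it;
            // clearArray wipes sensitive data before pooling.
            ArrayPool<byte>.Shared.Return(buffer, clearArray: true);
        }
    }
}
```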
The code has been updated to use XmlReader and XmlWriter for efficient XML parsing and writing. This change improves the performance of loading and saving licenses by ignoring unnecessary whitespace during reading and by enabling indentation during writing.
Introduced a static XmlReaderSettings object with pre-configured settings to optimize XML parsing in the License class. This change reduces unnecessary processing by ignoring whitespace, comments, and processing instructions during XML reading. The new settings are now used when creating an XmlReader instance for loading licenses from a stream.
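A sketch of what such pre-configured, shared settings objects might look like (class and member names are illustrative, not the PR's actual identifiers):

```csharp
using System.Xml;

static class LicenseXmlSettings
{
    // Shared, pre-configured reader settings: skip nodes the license loader never uses.
    public static readonly XmlReaderSettings Reader = new XmlReaderSettings
    {
        IgnoreWhitespace = true,
        IgnoreComments = true,
        IgnoreProcessingInstructions = true
    };

    // Writer settings: indent output for readability when saving licenses.
    public static readonly XmlWriterSettings Writer = new XmlWriterSettings
    {
        Indent = true
    };
}

// Usage (sketch): using var reader = XmlReader.Create(stream, LicenseXmlSettings.Reader);
```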
@kfrancis (Contributor, Author) commented Oct 3, 2024

While the gains are small, this is still an improvement (#39):

[benchmark results screenshot]

Here’s a breakdown of what the benchmark tells us:

  1. Mean Execution Time
  • Old Method: 81.73 µs
  • New Method: 80.01 µs
  • The new method is about 2% faster (ratio of 0.98), though the absolute improvement is small (around 1.7 µs on average).
  2. Memory Allocations
  • Old Method: 68.07 KB
  • New Method: 67.28 KB
  • The new method allocates slightly less (about 0.8 KB less, or 0.99x the old method's allocations). A minor improvement, but it still suggests better resource management.
  3. Standard Deviation (StdDev)
  • The standard deviation is lower for the new method (1.322 µs vs. 2.511 µs), suggesting more consistent execution times.
  4. Gen0 Collections
  • Both methods trigger a similar number of Gen0 garbage collections per run (approximately 3.6), indicating that the memory optimizations don't drastically reduce collection frequency but do provide a small gain.
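Results in this shape (mean, StdDev, ratio, Gen0, allocated bytes) are typically produced with BenchmarkDotNet. A hypothetical sketch of such a harness, with illustrative type and method names, might be:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // adds the Gen0 and Allocated columns alongside timings
public class LicenseBenchmarks
{
    [Benchmark(Baseline = true)] // baseline yields the Ratio column
    public void OldMethod()
    {
        // old allocation-per-call path would go here
    }

    [Benchmark]
    public void NewMethod()
    {
        // ArrayPool / XmlReader-based path would go here
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<LicenseBenchmarks>();
}
```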

Conclusion

The improvements are minimal, but the fact that you’ve reduced both time and memory usage means the changes are effective, albeit small. It suggests that unless you need to further optimize for more significant performance gains, the new method is slightly more efficient, especially in scenarios where these small improvements can compound over many operations.

Also, no change in tests:
[test results screenshot]

Comment on lines 58 to 61
using (var rng = RandomNumberGenerator.Create())
{
rng.GetBytes(salt, 0, 16);
}

You could tighten this line up a bit and get rid of the using with a change to using the static Fill() method:
RandomNumberGenerator.Fill(salt.AsSpan(0,16));

@jshergal commented Nov 4, 2024

Hi @kfrancis,
Out of curiosity, can you share your benchmark code? I am assuming the benchmark was done over the ToEncryptedPrivateString() method. Also, what target did you benchmark (net60, net80)? Again, just out of personal curiosity.

@kfrancis (Contributor, Author) commented Nov 7, 2024

Unfortunately I don't think I kept the BenchmarkDotNet code, but I imagine I was targeting net48 and net80, as some of our code lives in both.

@jshergal commented Nov 7, 2024

Hi @kfrancis,
I was working on setting up a benchmark when I realized that there is a potential issue with the change being suggested here regarding the key encryption. The old code used a salt buffer of exactly 16 bytes. The new code uses a buffer that will contain 16 bytes of random data, potentially followed by other data.

The issue is with the ArrayPool. When you rent an array, the only guarantee is that the array is at least as long as you asked for, but could potentially be longer. So for example, we could ask for an array of length 16, but get back an array of length 128, or 1024. We are then passing that array down into the BouncyCastle code.
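A small snippet illustrating that guarantee (the returned length is only guaranteed to be at least the requested size):

```csharp
using System;
using System.Buffers;

class RentLengthDemo
{
    static void Main()
    {
        // We ask for 16 bytes, but the shared pool may hand back a larger array.
        byte[] rented = ArrayPool<byte>.Shared.Rent(16);
        Console.WriteLine(rented.Length >= 16); // always True; may exceed 16
        ArrayPool<byte>.Shared.Return(rented);
    }
}
```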

It's a subtle problem, and I missed it myself at first glance, but thinking about it further I realized that this creates a fundamental behavior change. I believe the idea is for the salt to be exactly 16 bytes long, so that change probably needs to be removed from this PR. I can't speak to the XML changes you made, and I have no connection with the project (other than an old fork from several years ago to support .NET 5.0).

The ideal solution for the key encryption would come if (and when) BouncyCastle adds more span support: on the .NET 6.0+ targets we could then use stackalloc for the salt buffer rather than allocating a new array, but for now that is not supported.
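For illustration, the span-based approach would look roughly like this on .NET 6.0+ (hypothetical, since BouncyCastle does not currently accept spans here):

```csharp
using System;
using System.Security.Cryptography;

class StackallocSaltSketch
{
    static void Main()
    {
        // Stack-allocated salt: no heap allocation, and exactly 16 bytes by construction.
        Span<byte> salt = stackalloc byte[16];
        RandomNumberGenerator.Fill(salt);
        Console.WriteLine(salt.Length); // 16
        // A span-accepting BouncyCastle API would consume `salt` here.
    }
}
```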

// Rent a buffer for the salt (16 bytes for the salt)
var salt = ArrayPool<byte>.Shared.Rent(16);
secureRandom.SetSeed(secureRandom.GenerateSeed(16)); //See Bug #135
secureRandom.NextBytes(salt);
Potential issue here as the salt array is no longer guaranteed to be 16 bytes long.
