Bincode 2 performance tracking #618
Bincode 1 essentially does … Bincode 2 aims to be usable in … Because of this, in bincode 2 we have to do

```rust
let mut vec = vec![0u8; len];
reader.read(&mut vec); // essentially `vec.copy_from_slice(&[u8]);`
```

where in bincode 1 we could do

```rust
let vec = reader.read_vec()?; // essentially `&[u8].to_vec()`
```

which is why bincode 1 is 20% faster in this specific case. Funnily enough, after #667, reading … One possible solution is to have the reader be able to optimize for …
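To make that last idea concrete, here is a minimal sketch, not bincode's actual API: the trait, the `SliceReader` type, and the `DecodeError` placeholder are made up for illustration. The point is a reader trait whose `Vec<u8>` path can be overridden, so an in-memory slice reader can skip the zero-filled buffer and copy with `to_vec()`, while io-backed readers keep the generic path.

```rust
// Sketch only: hypothetical reader trait with an overridable `read_vec`.

#[derive(Debug)]
struct DecodeError; // placeholder error type for this sketch

trait Reader {
    /// Generic path (works for io-backed readers): fill a caller-provided buffer.
    fn read(&mut self, buf: &mut [u8]) -> Result<(), DecodeError>;

    /// Default `Vec<u8>` path: zero-initialize, then `read` into it.
    /// This is the "bincode 2" shape from the snippet above.
    fn read_vec(&mut self, len: usize) -> Result<Vec<u8>, DecodeError> {
        let mut vec = vec![0u8; len];
        self.read(&mut vec)?;
        Ok(vec)
    }
}

struct SliceReader<'a> {
    bytes: &'a [u8],
}

impl<'a> Reader for SliceReader<'a> {
    fn read(&mut self, buf: &mut [u8]) -> Result<(), DecodeError> {
        if self.bytes.len() < buf.len() {
            return Err(DecodeError);
        }
        let (head, tail) = self.bytes.split_at(buf.len());
        buf.copy_from_slice(head);
        self.bytes = tail;
        Ok(())
    }

    /// Specialized path, "bincode 1" shape: no zero-fill, just one copy.
    fn read_vec(&mut self, len: usize) -> Result<Vec<u8>, DecodeError> {
        if self.bytes.len() < len {
            return Err(DecodeError);
        }
        let (head, tail) = self.bytes.split_at(len);
        self.bytes = tail;
        Ok(head.to_vec())
    }
}
```

With something like this, decoding a `Vec<u8>` from a borrowed slice only pays for one copy, and the zero-initialization cost is confined to readers that genuinely need a pre-filled buffer.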
I was curious whether v2 brings performance improvements, since it doesn't rely on serde. So I did a quick criterion benchmark and was kinda surprised when I saw the results.
As you can see in the screenshot below, encoding is around 100x slower than v1 and decoding around 5-8x slower. I personally use bincode over other formats because I don't need human-readable (de)serialization of my data; I just want it to be as fast as possible, since my entire application's performance depends on it. So it would be nice if v2 didn't have these performance drawbacks. Of course one could continue using the serde implementation (although I haven't tested the performance of serde in v2), but such a performance drawback in a crate like bincode should be considered a bug in my opinion, so I created this issue.
You can find the code here: https://github.com/JojiiOfficial/bincode_bench
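For anyone wanting to reproduce this locally, a minimal criterion harness along these lines might look like the sketch below. It is not the code from the linked repository: the `Data` struct is made up, bincode 1 is assumed to be pulled in under a Cargo rename (`bincode1 = { package = "bincode", version = "1" }`) next to bincode 2 as the regular `bincode` dependency with its `derive` feature, and the 2.x calls use the released `encode_to_vec`/`decode_from_slice` API, which may differ from the pre-release version being benchmarked here.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use serde::{Deserialize, Serialize};

// Hypothetical payload, not the struct from the linked repository.
#[derive(Serialize, Deserialize, bincode::Encode, bincode::Decode, Clone)]
struct Data {
    id: u64,
    name: String,
    values: Vec<u32>,
}

fn benches(c: &mut Criterion) {
    let data = Data {
        id: 42,
        name: "example".to_string(),
        values: (0..1024).collect(),
    };

    // bincode 1.x (serde-based), pulled in as `bincode1` via a Cargo rename.
    let v1_bytes = bincode1::serialize(&data).unwrap();
    c.bench_function("v1 encode", |b| {
        b.iter(|| bincode1::serialize(black_box(&data)).unwrap())
    });
    c.bench_function("v1 decode", |b| {
        b.iter(|| bincode1::deserialize::<Data>(black_box(&v1_bytes)).unwrap())
    });

    // bincode 2.x (Encode/Decode derive) as the regular `bincode` dependency.
    let config = bincode::config::standard();
    let v2_bytes = bincode::encode_to_vec(&data, config).unwrap();
    c.bench_function("v2 encode", |b| {
        b.iter(|| bincode::encode_to_vec(black_box(&data), config).unwrap())
    });
    c.bench_function("v2 decode", |b| {
        b.iter(|| bincode::decode_from_slice::<Data, _>(black_box(&v2_bytes), config).unwrap())
    });
}

criterion_group!(bincode_benches, benches);
criterion_main!(bincode_benches);
```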