Concurrent read/write support #38

Open
skiwi2 opened this issue May 31, 2016 · 1 comment

skiwi2 commented May 31, 2016

Hello, I've been using BitVec for both a single-threaded and a multi-threaded version of the Sieve of Eratosthenes algorithm. Please note that I'm a beginner in the Rust language, so I might be doing things wrong.

In the documentation I haven't found anything indicating that this structure supports concurrent access. Putting it in an Arc<Mutex<BitVec>> seems like a terrible idea when using multiple threads for computation, since serializing every access through a single lock makes it effectively the same as, or worse than, using a single thread.

Have I missed a part of the documentation where it says that concurrency is supported?
If not, are there plans to support concurrency?

To me it would seem most logical to put the BitBlock under a Mutex<BitBlock>, but that is just my beginner's view on the matter. On the other hand, I could also hand out slices of the full vector to each thread; in that case I wouldn't even need a mutex, conceptually, to access the values safely. I'm not sure how this is done in Rust, though (see the sketch below).
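
For what it's worth, here is a minimal sketch of that slicing idea, assuming the storage is exposed as a plain slice of u64 blocks (a hypothetical layout, not bit-vec's actual API) and using scoped threads (std::thread::scope, stable since Rust 1.63; crossbeam offers the same pattern):

```rust
use std::thread;

// Hypothetical sketch: treat the bit storage as a plain slice of 64-bit
// blocks (bit-vec's real internals may differ). `chunks_mut` hands each
// thread a disjoint `&mut [u64]`, so the borrow checker guarantees no two
// threads touch the same block and no Mutex is needed.
fn process_in_parallel(blocks: &mut [u64], num_threads: usize) {
    // Round up so every block lands in exactly one chunk.
    let chunk_len = ((blocks.len() + num_threads - 1) / num_threads).max(1);
    thread::scope(|s| {
        for chunk in blocks.chunks_mut(chunk_len) {
            s.spawn(move || {
                for block in chunk.iter_mut() {
                    *block |= 0b1010; // placeholder for real sieve marking
                }
            });
        }
    });
}
```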

pczarn (Contributor) commented Jun 19, 2019

Traditionally one would write an alternative implementation by the name ConcurrentBitVec. It's better to fork our code, especially given that concurrency is not the only tweakable aspect of storage; other possible tweaks include fixed-length arrays, small-vec optimizations, etc.

One option is to use AtomicU32 as the bit block. Then, with CAS loops or atomic read-modify-write operations, you can operate on individual bits (see the sketch below).
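
A minimal sketch of this option, with hypothetical names (AtomicBitVec is not part of bit-vec). For plain set/clear, the dedicated fetch_or/fetch_and operations stand in for an explicit CAS loop, since the hardware performs the read-modify-write atomically:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical concurrent bit vector built from AtomicU32 blocks.
struct AtomicBitVec {
    blocks: Vec<AtomicU32>,
}

impl AtomicBitVec {
    fn with_capacity(nbits: usize) -> Self {
        let nblocks = (nbits + 31) / 32;
        AtomicBitVec {
            blocks: (0..nblocks).map(|_| AtomicU32::new(0)).collect(),
        }
    }

    /// Atomically sets a bit and returns its previous value. `fetch_or`
    /// is a single atomic read-modify-write, so no explicit CAS loop is
    /// needed for this operation.
    fn set(&self, bit: usize) -> bool {
        let mask = 1u32 << (bit % 32);
        let prev = self.blocks[bit / 32].fetch_or(mask, Ordering::Relaxed);
        prev & mask != 0
    }

    fn get(&self, bit: usize) -> bool {
        let mask = 1u32 << (bit % 32);
        self.blocks[bit / 32].load(Ordering::Relaxed) & mask != 0
    }
}
```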

Another option is to put a Mutex on every N bit blocks, possibly using parking_lot. You should take false sharing into consideration, so you would most likely end up with one Mutex per 56, 60, or 63 bytes of blocks (an entire cache line minus a machine word or a byte, depending on the lock's size); see the sketch below.
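
A minimal sketch of the striped-lock layout (names hypothetical), using std::sync::Mutex and assuming 64-byte cache lines; with parking_lot::Mutex the lock shrinks to one byte, leaving room for 63 bytes of blocks per line:

```rust
use std::sync::Mutex;

const BLOCKS_PER_STRIPE: usize = 7; // 7 * 8 = 56 bytes of blocks + the lock

// Align each stripe to its own cache line so that threads contending on
// neighbouring stripes never invalidate each other's line (false sharing).
#[repr(align(64))]
struct Stripe {
    blocks: Mutex<[u64; BLOCKS_PER_STRIPE]>,
}

struct StripedBitVec {
    stripes: Vec<Stripe>,
}

impl StripedBitVec {
    fn with_capacity(nbits: usize) -> Self {
        let bits_per_stripe = BLOCKS_PER_STRIPE * 64;
        let n = (nbits + bits_per_stripe - 1) / bits_per_stripe;
        StripedBitVec {
            stripes: (0..n)
                .map(|_| Stripe { blocks: Mutex::new([0; BLOCKS_PER_STRIPE]) })
                .collect(),
        }
    }

    fn set(&self, bit: usize) {
        let stripe = bit / (BLOCKS_PER_STRIPE * 64);
        let offset = bit % (BLOCKS_PER_STRIPE * 64);
        // Only this stripe's cache line is locked; other stripes stay free.
        let mut blocks = self.stripes[stripe].blocks.lock().unwrap();
        blocks[offset / 64] |= 1 << (offset % 64);
    }
}
```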

I don't know enough about concurrency to describe the tradeoffs between these options.

@pczarn pczarn added the Important high value feature or optimization label Jul 1, 2024
@pczarn pczarn added this to the version 1.0 milestone Jul 3, 2024