From my profiling experience with `x.py` (as shown at the sprint at PyCon'16), a lot of time was spent locking the global dictionary due to contention.
Implementing an RW lock for an arbitrary dictionary is really hard, because the lock would somehow have to support recursion in any pattern (e.g. R+W+R, R+R+W, etc.). The reason is that the comparison operator for dictionary keys can execute just about anything - for example, an object with an overridden `__eq__` method that inserts something into the same dictionary it is being used as a key in (weird, but possible).
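To make the recursion problem concrete, here is a contrived sketch (names like `EvilKey` are mine, not from the issue) of a key whose `__eq__` writes into the very dict it is being looked up in - a read that triggers a nested write, which is what a naive RW lock cannot handle:

```python
# Contrived demonstration: a key whose equality check mutates the dict
# it is currently being looked up in. With a per-dict RW lock, the
# lookup (read) would re-enter the lock for a write: R+W recursion.

class EvilKey:
    def __init__(self, name):
        self.name = name

    def __hash__(self):
        return hash(self.name)

    def __eq__(self, other):
        # Side effect during comparison: insert into the shared dict
        # while a lookup on that same dict is in progress.
        shared[object()] = "inserted during __eq__"
        return isinstance(other, EvilKey) and self.name == other.name

shared = {}
shared[EvilKey("a")] = 1           # first insert: no collision, no __eq__
result = shared.get(EvilKey("a"))  # lookup calls __eq__, which writes
```

CPython tolerates this in the single-threaded case (it detects mutation during lookup and restarts if needed), but any per-dict locking scheme has to survive it too.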
But my profiling also suggests that most of the contention (at least in certain benchmark cases) is on one particular kind of dictionary: dicts whose keys are all strings.
Also, looking at the `dictobject.c` implementation, we can see that Python dicts already special-case exactly these dicts: their lookup does not call a custom compare, but instead compares the string memory directly.
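The shape of that special case can be sketched in Python (this is an illustration of the dispatch idea, not CPython's actual code; the class and method names are mine): a dict starts on a string-only lookup path and permanently downgrades to the generic path the first time a non-string key appears.

```python
# Illustrative model of dictobject.c's lookup dispatch: the table
# begins on a string-only fast path and irreversibly switches to the
# generic path once any non-str key is inserted.

class SpecializingDict:
    def __init__(self):
        self._data = {}
        self._all_str_keys = True  # analogous to using the unicode-only lookup

    def __setitem__(self, key, value):
        if not isinstance(key, str):
            self._all_str_keys = False  # one-way downgrade, as in CPython
        self._data[key] = value

    def __getitem__(self, key):
        return self._data[key]

    def lookup_kind(self):
        return "string-fast-path" if self._all_str_keys else "generic"

d = SpecializingDict()
d["a"] = 1
d[42] = 2   # non-string key forces the generic path from now on
```

The same one-way flag could tell the locking layer which locking strategy a given dict is safe to use.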
That said, I think it is possible to implement more efficient RW-lock-based locking for those string-keyed dicts, and keep locking all other dicts as before.
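As a rough sketch of what that could look like (this is an assumed design, not the prototype the issue promises): string-keyed dicts would take the read side for lookups, allowing them to proceed concurrently, while writes and all other dicts fall back to exclusive locking. A minimal write-preferring RW lock:

```python
# Minimal readers-writer lock sketch. Lookups on string-keyed dicts
# could take the read side concurrently; mutations take the write side.

import threading

class RWLock:
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0          # number of active readers
        self._writer = False       # whether a writer holds the lock

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Because string comparison in the fast path cannot run arbitrary Python code, a read can never recursively trigger a write, which is what makes the non-recursive lock above sufficient for that case.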
I'll try prototyping that.