Fix dictionary hashing with pydantic models as keys #404
base: main
Conversation
llama_deploy/services/workflow.py (Outdated)
for key in context_dict:
    if isinstance(key, BaseModel):
        context_dict[get_qualified_name(key)] = context_dict[key]
        del context_dict[key]
I'm not sure I like this change. It's not idempotent (it changes the context dict without the user knowing, and doesn't change it back). Tbh, probably better to just do `hash(str(context_dict) + hash_secret)`?
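For reference, a minimal sketch of that suggestion (the function name is hypothetical, and `hashlib` is used here instead of the built-in `hash()`, which is salted per interpreter run and so not stable across processes):

```python
import hashlib


def hash_context(context_dict: dict, hash_secret: str) -> str:
    # Hash the string form of the dict together with a secret,
    # without mutating context_dict itself.
    payload = (str(context_dict) + hash_secret).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()
```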
good point, updated and added a test
I pushed a change that I think fixed the tests, but I don't think I fixed the underlying issue: it seems like the hash is being computed by some other process that I can't find. The … but the tests fail if I allow that hash to be provided.
@masci I discovered the issue is that pydantic dict validation can make some small changes to the structure; specifically, it was changing a tuple to a sequence, which was breaking the hash checks. By making the hash a computed field on the pydantic model, we can ensure that the hash is always computed from the validated dict. I also changed the hashing to use …
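A minimal sketch of that idea, assuming pydantic v2; the model and field names below are hypothetical and not the actual llama_deploy classes:

```python
import hashlib

from pydantic import BaseModel, computed_field


class HashedState(BaseModel):
    # Hypothetical model, not the actual llama_deploy class.
    state: dict
    hash_secret: str = ""

    @computed_field
    @property
    def state_hash(self) -> str:
        # Computed after validation, so it reflects the validated structure
        # (e.g. tuples already coerced by pydantic) rather than the raw input,
        # keeping the stored hash consistent with later checks.
        digest_input = (str(self.state) + self.hash_secret).encode("utf-8")
        return hashlib.sha256(digest_input).hexdigest()
```

Because the hash is derived inside the model, it is always recomputed from whatever pydantic validation produced, so coercion differences between the raw input and the validated dict no longer cause a mismatch.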
PR to address issue #392