The new cache should support declarative configuration
Small side comment ... @jackgerrits Would it be possible to have a simpler interface for enabling caching in the chat completion client?

```python
cached_model_client = OpenAIChatCompletionClient(model="gpt-4o", cache=True)
```
This way, ChatCompletionClient caching can be reflected in the client's own config.
That would mean each model client needs to implement it. The point of the current design is separation of concerns and generalizability.
This is already the case: the declarative config will be the model client's config nested inside the cache client's config.
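For illustration, a rough sketch of what that nested declarative config could look like (the provider strings and key names here are assumptions, not the actual serialized format):

```python
# Hypothetical shape of the serialized component config: the cache wrapper's
# config contains the wrapped model client's config as a nested component.
cache_client_config = {
    "provider": "ChatCompletionCache",            # assumed provider name
    "config": {
        "cache_store": {
            "provider": "DiskCacheStore",         # assumed cache store component
            "config": {"directory": "/tmp/model_cache"},
        },
        "model_client": {
            "provider": "OpenAIChatCompletionClient",
            "config": {"model": "gpt-4o"},
        },
    },
}
```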
If we allow a default cache store, then the current usage can be as simple as:
```python
cached_model_client = ChatCompletionCache(OpenAIChatCompletionClient(model="gpt-4o"))
```
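For contrast, a sketch of the current usage with an explicit cache store (the exact import paths and the DiskCacheStore constructor are assumptions based on the autogen_ext extensions, not a definitive reference):

```python
# Sketch only: import paths and the cache store setup are assumed.
from diskcache import Cache

from autogen_ext.cache_store.diskcache import DiskCacheStore
from autogen_ext.models.cache import CHAT_CACHE_VALUE_TYPE, ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")
cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache("/tmp/model_cache"))
cached_model_client = ChatCompletionCache(model_client, cache_store)
```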