Improve health check to return more granular details #85
Conversation
The Dashboard AI team have requested that the plugin health check return more details about whether the OpenAI functionality is configured and whether that configuration is valid. This PR adds that, along with similar functionality for the vector service. Unfortunately it's backwards incompatible with past health checks, but the llmclient package includes a change which should be compatible with both the current and previous versions of the plugin. We're definitely due a refactor of the way we handle OpenAI config and requests, but I think that will be part of #80.
TODO:
The app is recreated by the plugin management system if settings change, so the cached value will be discarded when the user updates the settings to fix the health check error. I'm still unsure whether this is an ideal way of doing things, but it speeds up health checks significantly while saving on OpenAI costs.
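For context, a minimal sketch of how that caching might look in the Go backend, assuming the grafana-plugin-sdk-go backend package; the App fields and the runHealthChecks helper are hypothetical names, not the PR's actual code:

```go
package plugin

import (
	"context"
	"sync"

	"github.com/grafana/grafana-plugin-sdk-go/backend"
)

// App sketches the plugin's app instance. Because the plugin management system
// recreates the App whenever settings change, the cached result is discarded
// automatically once the user updates their configuration.
type App struct {
	healthMu     sync.Mutex
	cachedHealth *backend.CheckHealthResult // nil until the first check runs
}

// CheckHealth returns the cached result when present, avoiding repeated
// (paid) OpenAI calls on every health check.
func (a *App) CheckHealth(ctx context.Context, req *backend.CheckHealthRequest) (*backend.CheckHealthResult, error) {
	a.healthMu.Lock()
	defer a.healthMu.Unlock()
	if a.cachedHealth != nil {
		return a.cachedHealth, nil
	}
	res, err := a.runHealthChecks(ctx, req)
	if err != nil {
		return nil, err
	}
	a.cachedHealth = res
	return res, nil
}

// runHealthChecks is a hypothetical stand-in for the real OpenAI and vector
// service checks.
func (a *App) runHealthChecks(ctx context.Context, req *backend.CheckHealthRequest) (*backend.CheckHealthResult, error) {
	return &backend.CheckHealthResult{Status: backend.HealthStatusOk, Message: "OK"}, nil
}
```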
Error: "",
Models: map[string]openAIModelHealth{
	"gpt-3.5-turbo": {OK: true, Error: ""},
	"gpt-4": {OK: false, Error: `unexpected status code: 404: {"error": "model does not exist"}`},
},
If we return the errors like foo: bar: {error: baz}, it is hard to parse in the frontend. Could we just return an object? Is this some kind of convention?
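As an illustration of returning an object rather than a concatenated string, here is a sketch assuming the plugin SDK's CheckHealthResult and its JSONDetails field; the healthDetails struct and its field names are hypothetical, not the PR's actual type:

```go
package plugin

import (
	"encoding/json"

	"github.com/grafana/grafana-plugin-sdk-go/backend"
)

// healthDetails is an illustrative structured payload the frontend could parse
// field by field instead of splitting a "foo: bar: {error: baz}" string.
type healthDetails struct {
	OpenAIConfigured bool              `json:"openAIConfigured"`
	OpenAIError      string            `json:"openAIError,omitempty"`
	ModelErrors      map[string]string `json:"modelErrors,omitempty"` // model name -> error; empty means all OK
}

func healthResult(details healthDetails) (*backend.CheckHealthResult, error) {
	body, err := json.Marshal(details)
	if err != nil {
		return nil, err
	}
	return &backend.CheckHealthResult{
		Status:      backend.HealthStatusOk,
		Message:     "See JSONDetails for per-feature status",
		JSONDetails: body,
	}, nil
}
```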
I'd love to return as much information as possible from OpenAI to the frontend, since this is part of the configuration phase.
This is the best we can do when the request fails using the query:
Also disable the button if we're waiting for the health check.
✅ The experience configuring the plugin is great; this is the right place to have debug info.
✅ The feature still works as expected.
✅ The health check works in every scenario.
Fix expected field names in health check details type