Add LLMs module using grafana-llm-app
#72
Conversation
This commit adds functionality that can be used to make requests to LLMs via the grafana-llm-app plugin. The initial commit just adds support for OpenAI and doesn't make any attempt to abstract over more than one LLM provider. It includes a function which can be used to stream chat completion results back to the caller.

Very much experimental, especially the export structure, which I can't seem to figure out. I'd like it if users could do something like

import { openai } from '@grafana/experimental/llms';

but I'm not sure if that requires a change to rollup? Help wanted.

We may want to add more React-centric helpers here (some hooks, maybe?), but this forms the basic functionality at least.

Tagging the ML people in case they want to chime in on the APIs; I'll add a few comments about design decisions.

There's an example of this being used here. The design doc for this idea is here.

For now the aim is to make this (and the LLM plugin) available to participants of the Hackathon to make it easier for them to use LLMs in their projects. Hopefully there's not too much concern about putting this in the experimental package!
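As a rough usage sketch of what this PR adds (the `enabled` and `streamChatCompletions` names and the Observable-based streaming come from this module; the exact request fields and return types shown here are illustrative rather than definitive):

```ts
import { llms } from '@grafana/experimental';

// Inside a component or service:
const ok = await llms.openai.enabled();
if (ok) {
  llms.openai
    .streamChatCompletions({
      model: 'gpt-3.5-turbo',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Explain this dashboard to me.' },
      ],
    })
    .subscribe((chunk) => {
      // Each emission is a chat completions chunk streamed back via the plugin.
      console.log(chunk);
    });
}
```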
"@grafana/data": "^10.0.0", | ||
"@grafana/runtime": "^10.0.0", | ||
"@grafana/ui": "^10.0.0", |
These (and the devDependencies) should really be bumped in a separate PR; I'll move them over.
Done in #73.
@@ -1,3 +1,4 @@
export * as llms from './llms';
Not really what I want. With this, users have to use it like so:

import { llms } from '@grafana/experimental';

// In a component
const enabled = await llms.openai.enabled();

I'd like users to be able to go

import { enabled as openAIEnabled, streamChatCompletions } from '@grafana/experimental/llms/openai';

// In a component
const enabled = await openAIEnabled();

Not sure what's required to make that happen though.
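One possible route to subpath imports like that would be to build a second entry point; a minimal rollup sketch along those lines is below. The actual build setup for this package isn't shown in this PR, so the file names and options are assumptions, TypeScript/plugin wiring is omitted, and a matching `exports` map in package.json would likely also be needed.

```ts
// rollup.config.ts (hypothetical sketch): build a separate bundle so that
// '@grafana/experimental/llms/openai' resolves to its own file.
import type { RollupOptions } from 'rollup';

const config: RollupOptions = {
  input: {
    index: 'src/index.ts',
    'llms/openai': 'src/llms/openai.ts',
  },
  output: {
    dir: 'dist',
    format: 'esm',
    // '[name]' includes the nested path, producing dist/llms/openai.js.
    entryFileNames: '[name].js',
  },
};

export default config;
```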
enabled here will work almost like a feature toggle? Is that the idea?
Yep, that's the idea 👍
/**
 * The role of a message's author.
 */
export type Role = 'system' | 'user' | 'assistant' | 'function';
This is a closed type but might be expanded by OpenAI in the future. Perhaps we should make it open somehow so we don't have to keep it up to date.
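One option would be the usual TypeScript trick for keeping a string union "open" (a sketch, not something this PR does):

```ts
// Known roles keep editor autocomplete; the (string & {}) arm lets any future
// OpenAI-added role pass the type checker without a library update.
export type Role = 'system' | 'user' | 'assistant' | 'function' | (string & {});
```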
Throughout this module I've added types for the various request/response structures. I think it's unlikely that OpenAI will remove any existing parameters but they may always add more, which we'll need to keep up to date.
I've also just copied the docs from OpenAI's API docs, but they could also go out of date quite quickly...
I looooooove that you added all the documentation in the code!!!!
…h e.g. a NetworkError. This will help users debug connectivity or LLM-related issues.
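From the caller's side that would look roughly like this (a sketch; the concrete error classes the module raises aren't spelled out here):

```ts
import { llms } from '@grafana/experimental';

const stream = llms.openai.streamChatCompletions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }],
});

stream.subscribe({
  next: (chunk) => {
    // Streamed chat completion chunks arrive here.
  },
  error: (err) => {
    // Errors such as a NetworkError surface here instead of failing silently,
    // which makes connectivity and LLM issues easier to debug.
    console.error('chat completions stream failed', err);
  },
});
```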
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Great work, Ben! LGTM!
This commit enables support for handling function calling by having 'chatCompletions' and 'streamChatCompletions' default to returning the entire chat completions response in the Observable, so that users can extract the 'function_call' object if they're using function calls. It also improves the docs on a ton of interfaces and functions since they're now exposed to users.
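A rough illustration of what that enables (field names follow OpenAI's chat completions schema; the module's exact response types aren't reproduced here, so treat this as a sketch):

```ts
import { llms } from '@grafana/experimental';

// Because the full response object is emitted, callers can pull out the
// 'function_call' deltas when they pass function definitions to the model.
llms.openai
  .streamChatCompletions({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'What is the CPU usage right now?' }],
    functions: [
      {
        name: 'query_datasource', // hypothetical function definition
        description: 'Run a query against a datasource',
        parameters: { type: 'object', properties: { expr: { type: 'string' } } },
      },
    ],
  })
  .subscribe((response) => {
    const functionCall = response.choices[0]?.delta?.function_call;
    if (functionCall) {
      // Accumulate functionCall.name / functionCall.arguments across chunks.
    }
  });
```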
grafana-llm-app
This will match up with the latest version of the LLM app.
Looks good to me! Very exciting!