Add LLMs module using grafana-llm-app #72

Merged: 8 commits merged into main from llm-openai on Aug 3, 2023
Conversation

sd2k (Contributor) commented on Jul 27, 2023:

This commit adds functionality that can be used to make requests to LLMs via the grafana-llm-app plugin. The initial commit just adds support for OpenAI and doesn't make any attempt to abstract over more than one LLM provider. It includes a function which can be used to stream chat completion results back to the caller.

Very much experimental, especially the export structure, which I can't seem to figure out. I'd like it if users could do something like:

    import { openai } from '@grafana/experimental/llms';

but I'm not sure whether that requires a change to rollup. Help wanted!

We may want to add more React-centric helpers here (some hooks, maybe?) but this forms the basic functionality at least.

Tagging the ML people in case they want to chime in on the APIs; I'll add a few comments about design decisions.


There's an example of this being used here. The design doc for this idea is here.

For now, the aim is to make this (and the LLM plugin) available to participants of the Hackathon, to make it easier for them to use LLMs in their projects. Hopefully there's not too much concern about putting this in the experimental package!
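
To give a feel for the intended surface, here's a minimal sketch of how this would be used (the model name and message contents are illustrative, and the exact types may still change):

    import { llms } from '@grafana/experimental';

    // Check that the grafana-llm-app plugin is installed, enabled and configured.
    const enabled = await llms.openai.enabled();

    if (enabled) {
      // Stream a chat completion; the returned Observable emits partial results
      // as they arrive from the LLM.
      llms.openai
        .streamChatCompletions({
          model: 'gpt-3.5-turbo',
          messages: [
            { role: 'system', content: 'You are a helpful assistant.' },
            { role: 'user', content: 'Explain this dashboard.' },
          ],
        })
        .subscribe((chunk) => console.log(chunk));
    }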

sd2k added 2 commits July 26, 2023 14:22
This commit adds functionality that can be used to make requests
to LLMs via the grafana-llm-app plugin. The initial commit just
adds support for OpenAI and doesn't make any attempt to abstract
over more than one LLM provider. It includes a function which can
be used to stream chat completion results back to the caller.

Very much WIP, especially the export structure which I can't seem
to figure out. I'd like it if users could do something like...

    import { openai } from '@grafana/experimental/llms';

but I'm not sure if that requires a change to rollup?
Comment on lines +29 to +31
"@grafana/data": "^10.0.0",
"@grafana/runtime": "^10.0.0",
"@grafana/ui": "^10.0.0",
sd2k (Contributor, Author):

These (and the devDependencies) should really be bumped in a separate PR; I'll move them over.

sd2k (Contributor, Author):

Done in #73.

@@ -1,3 +1,4 @@
export * as llms from './llms';
sd2k (Contributor, Author):

Not really what I want. With this, users have to import it like so:

    import { llms } from '@grafana/experimental';

    // In a component
    const enabled = await llms.openai.enabled();

I'd like users to be able to go

import { openAIEnabled: enabled, streamChatCompletions } from '@grafana/experimental/llms/openai';

// in a component
const enabled = await openAIEnabled();

Not sure what's required to make that happen though.

A reviewer commented:

enabled here will work almost like a feature toggle? Is that the idea?

sd2k (Contributor, Author):

Yep, that's the idea 👍

/**
* The role of a message's author.
*/
export type Role = 'system' | 'user' | 'assistant' | 'function';
sd2k (Contributor, Author):

This is a closed type, but it might be expanded in the future by OpenAI. Perhaps we should make it open somehow so we don't have to keep it up to date.
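
One option (a sketch, not part of this PR): widen the union with string & {}, which keeps editor autocomplete for the known roles while still accepting any role string OpenAI adds later:

    // Known roles get autocompleted; unknown strings are still assignable.
    export type Role = 'system' | 'user' | 'assistant' | 'function' | (string & {});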

sd2k (Contributor, Author):

Throughout this module I've added types for the various request/response structures. I think it's unlikely that OpenAI will remove any existing parameters but they may always add more, which we'll need to keep up to date.

I've also just copied the docs from OpenAI's API docs, but they could also go out of date quite quickly...

A reviewer commented:

I looooooove that you added all the documentation in the code!!!!

sd2k marked this pull request as ready for review on July 27, 2023 at 18:02.
sd2k requested reviews from sunker, zoltanbedi, and jackw on July 28, 2023 at 08:03.
…h e.g. a NetworkError

This will help users debug connectivity or LLM related issues.
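
For callers this means connectivity problems surface in the subscriber's error handler rather than failing silently. A rough sketch, reusing llms, model and messages from the earlier example:

    llms.openai.streamChatCompletions({ model, messages }).subscribe({
      next: (chunk) => {
        // render partial content as it streams in
      },
      error: (err) => {
        // network or LLM errors now arrive here, e.g. a NetworkError
        console.error('chat completions stream failed', err);
      },
    });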
baldm0mma left a comment:

Great work, Ben! LGTM!

sd2k added the enhancement (New feature or request) label on Aug 1, 2023.
This commit enables support for handling function calling by having
'chatCompletions' and 'streamChatCompletions' default to returning the
entire chat completions response in the Observable, so that users can
extract the 'function_call' object if they're using function calls.

It also improves the docs on a ton of interfaces and functions since
they're now exposed to users.
sd2k changed the title on Aug 2, 2023.
This will match up with the latest version of the LLM app.
codeincarnate (Contributor):
Looks good to me! Very exciting!

codeincarnate merged commit e6ec78b into main on Aug 3, 2023 (1 check passed).
sd2k deleted the llm-openai branch on August 4, 2023 at 06:36.