Add support for Gemini #17
I'd love to take a stab at this, if you don't mind!
Hey @Milksheyke! Great to hear that you are interested in contributing to the project 🙂 At the moment I don't have any contribution guide, unfortunately. We can maybe discuss how to implement it here. I already added support for Gemini to LangChain.dart, so the integration should be straightforward. We may need to think about how we adapt the UX flow to support different models (I would also like to add support for Ollama to run local models). Do you have any ideas?
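For illustration, here is a rough sketch of how both providers could sit behind LangChain.dart's shared chat-model interface so the rest of the app stays provider-agnostic. The class names come from the langchain_openai and langchain_google packages; buildChatModel and the provider strings are hypothetical, and constructor signatures may differ between versions.

```dart
import 'package:langchain/langchain.dart';
import 'package:langchain_google/langchain_google.dart';
import 'package:langchain_openai/langchain_openai.dart';

/// Hypothetical factory: returns whichever provider the user picked,
/// typed against the shared BaseChatModel interface.
BaseChatModel buildChatModel({
  required String provider,
  required String apiKey,
}) {
  switch (provider) {
    case 'gemini':
      return ChatGoogleGenerativeAI(apiKey: apiKey);
    default:
      return ChatOpenAI(apiKey: apiKey);
  }
}
```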
Sounds good! By local models I mean models that run on the user's machine. There are a couple of multi-modal models that Ollama supports (bakllava and llava). The Ollama server runs locally and exposes an API to interact with the model. Regarding your second point, some users would like to be able to proxy the calls to OpenAI/Google (#2), so adding an option to customize the base URL/headers would be useful. We don't need to do it as part of this ticket, but it's good to have in mind while we redesign the UI.
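As a rough sketch of both ideas rather than a final design: a user-configured base URL could proxy the OpenAI calls, and an Ollama-backed model could point at the local server. The baseUrl/defaultOptions parameter names follow the LangChain.dart wrappers, and the key and URLs below are placeholders.

```dart
import 'package:langchain_ollama/langchain_ollama.dart';
import 'package:langchain_openai/langchain_openai.dart';

// Route OpenAI traffic through a user-configured proxy (see #2).
final proxiedOpenAI = ChatOpenAI(
  apiKey: 'sk-placeholder',
  baseUrl: 'https://my-proxy.example.com/v1', // hypothetical proxy endpoint
);

// Talk to a local Ollama server running a multi-modal model such as llava.
final localModel = ChatOllama(
  baseUrl: 'http://localhost:11434/api',
  defaultOptions: ChatOllamaOptions(model: 'llava'),
);
```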
Hey there, I'm catching up on this issue. I'd like to help add Gemini support to pixel2flutter. Have you made any progress on it?
Hey @tinoper, I didn't find time to work on this, unfortunately, so I'd be happy if you could take it over!
Ok, I'll give it a try!
Awesome! This invite link should work: https://discord.gg/x4qbhqecVR
Thank you, I'm in.
Hey @tinoper, indeed, those methods were missing. I'm not planning to release a new version of LangChain.dart in the coming days, as I'm doing some refactoring that I want to release all at once. But you can depend on the commit that contains the API key getters/setters (I've pushed a branch with the dependencies pointing to that commit, so you can work on top of it). One of the main points of LangChain is to provide a unified API for different LLM providers, which is useful for this project. The migration from the current Google client to the official package can be tackled separately.
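A minimal sketch of what those accessors enable, assuming the getter/setter land on the chat model classes as described (the key values are placeholders):

```dart
import 'package:langchain_openai/langchain_openai.dart';

void main() {
  // Create the model with whatever key is currently stored in settings.
  final chatModel = ChatOpenAI(apiKey: 'initial-key');

  // Later, e.g. when the user pastes a different key in the settings screen,
  // rotate it in place instead of rebuilding the whole model.
  chatModel.apiKey = 'new-key';
}
```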
The ChatGoogleGenerativeAI class doesn't support the streaming method (line 198). Not sure if it's worth it, considering they might switch to the official package soon.
Ahh, true, I forgot streaming was still pending implementation. I'll take over the ticket to migrate to the official package.
Hey @davidmigloz! Sorry, I have not prioritized this work; I have been very busy! I made an initial effort some time ago. In case it helps you, here is that attempt: luisredondo/langchain_dart#1. Let me know if I can help with something.
@tinoper
@luisredondo no worries! I imagined that. I've halted the migration, as the official client is still not as feature complete as the current one.
@davidmigloz Thank you for your quick response. Now I have the stream, but I'm encountering an issue when mapping it. It's not clear to me whether this is something we should address in the mapper or in the content we're sending. I'm trying to send it the same way we do with the OpenAI option, for consistency.
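Not necessarily the fix for the mapping issue, but as a hedged sketch of one way to consume the stream, following the LangChain.dart docs (generateCode is a hypothetical helper, and exact class and method names may vary between versions):

```dart
import 'package:langchain/langchain.dart';
import 'package:langchain_google/langchain_google.dart';

/// Hypothetical helper: pipes the chat model into a StringOutputParser so
/// each emitted chunk is already a String, then concatenates the chunks.
Future<String> generateCode(
  ChatGoogleGenerativeAI model,
  PromptValue prompt,
) async {
  final chain = model.pipe(StringOutputParser());
  final buffer = StringBuffer();
  await for (final chunk in chain.stream(prompt)) {
    buffer.write(chunk); // append each streamed fragment as it arrives
  }
  return buffer.toString();
}
```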
@tinoper step by step 😄 Google models don't support system messages (messages with the system role), so the system prompt has to go into the human message instead. Anyway, the way you prompt Gemini is normally slightly different from the way you prompt GPT-4: OpenAI models are quite good at following instructions, whereas Google models are better at following examples. So we'll have to experiment until we find a good prompt for Gemini.
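A hedged sketch of that workaround, assuming the two prompts are simply concatenated (the helper name and placeholder prompts are not existing pixel2flutter code, and the ChatMessage factory names vary slightly between LangChain.dart versions):

```dart
import 'package:langchain/langchain.dart';

/// Hypothetical helper: since Gemini rejects the system role, fold the
/// instructions into the first human message instead of sending
/// ChatMessage.system separately.
PromptValue buildGeminiPrompt(String systemPrompt, String userPrompt) {
  return PromptValue.chat([
    // No ChatMessage.system(...) here; prepend the instructions instead.
    ChatMessage.humanText('$systemPrompt\n\n$userPrompt'),
  ]);
}
```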
@tinoper the new release of LangChain.dart is out, so there's no need to override the dependencies anymore.
Hey @davidmigloz, big changes in LangChain.dart then! Before updating, though, I'd like to try getting something back by querying Gemini. It's not clear to me whether PromptValue can handle the query without ChatMessage.system(_systemPrompt). If that's the case, I could just remove that line and include a complete prompt in ChatMessage.human. Maybe, to avoid making too much noise here, I'll contact you via Discord if you agree. I've tried using the Google client outside of this project to see how it handles things; the prompt would have to change quite a bit, but it could serve as an alternative.
Hey @tinoper, yes, I've quickly integrated it. I've tested it by hardcoding my Google AI key, and it seems to be working fine. You can take it over from there 🙂 What's missing:
Thanks for your work, and feel free to contact me on Discord!
Wow
@davidmigloz, sorry for the hold-up. What do you think of a segmented button like this for entering the API key?
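For reference, a minimal Flutter sketch of the segmented-button idea using the built-in Material 3 SegmentedButton (the ModelProvider enum, labels, and widget names are placeholders, not the actual pixel2flutter widgets):

```dart
import 'package:flutter/material.dart';

enum ModelProvider { openai, gemini }

class ProviderSelector extends StatefulWidget {
  const ProviderSelector({super.key});

  @override
  State<ProviderSelector> createState() => _ProviderSelectorState();
}

class _ProviderSelectorState extends State<ProviderSelector> {
  ModelProvider _selected = ModelProvider.openai;

  @override
  Widget build(BuildContext context) {
    // One segment per provider; the API key field below it (not shown here)
    // would switch based on the selection.
    return SegmentedButton<ModelProvider>(
      segments: const [
        ButtonSegment(value: ModelProvider.openai, label: Text('OpenAI')),
        ButtonSegment(value: ModelProvider.gemini, label: Text('Gemini')),
      ],
      selected: {_selected},
      onSelectionChanged: (selection) =>
          setState(() => _selected = selection.first),
    );
  }
}
```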
@tinoper Looks good! Thanks for following up.
Hey @tinoper, did you manage to get it working? Otherwise, I can take it over from where you are. I want to add support for Anthropic as well.
I haven't gotten to it yet, but I'll make time to review it today and add it over the weekend. I'll let you know if I have any questions before then. Does that work for you?
Awesome! Yes, that's perfect.
https://langchaindart.com/#/modules/model_io/models/chat_models/integrations/googleai