forked from tmc/langchaingo
sync with upstream #12
Draft: Abirdcfly wants to merge 82 commits into kubeagi:dev from Abirdcfly:dev-update
Signed-off-by: Abirdcfly <[email protected]>
chore: Pinning chroma-go ahead of major new release
Add huggingface documentation
New functionality was introduced to allow setting a specified data format (currently Ollama only supports JSON). This is done via the `WithFormat` option. The change provides more flexibility and control over the format of the data processed by the client. The test `TestWithFormat` has been added to assert the proper functioning of this new feature. Using the JSON format makes it possible to simulate `Functions` via prompt injection, since it forces Ollama to respond in valid JSON.
Add an example showing how the Ollama `JSON` format in combination with prompt injection can be used to simulate Functions.
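As a minimal sketch of the dispatch side of this pattern: because the JSON format option guarantees the model's reply is valid JSON, an application can instruct the model to emit a function-call object and then decode it with the standard library. The `functionCall` shape, the `getWeather` name, and `parseFunctionCall` are illustrative assumptions, not part of the Ollama or langchaingo API; in a real call the prompt would be sent to a model configured with the `WithFormat` option described above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// functionCall is the shape we instruct the model to emit. The field
// names are illustrative, not part of any Ollama or langchaingo API.
type functionCall struct {
	Name      string            `json:"name"`
	Arguments map[string]string `json:"arguments"`
}

// parseFunctionCall decodes a model reply that the JSON format option
// guarantees is syntactically valid JSON.
func parseFunctionCall(reply string) (functionCall, error) {
	var call functionCall
	err := json.Unmarshal([]byte(reply), &call)
	return call, err
}

func main() {
	// Pretend this is the model's reply to a prompt such as:
	// `Respond ONLY with JSON of the form {"name": "...", "arguments": {...}}.`
	reply := `{"name": "getWeather", "arguments": {"city": "Berlin"}}`

	call, err := parseFunctionCall(reply)
	if err != nil {
		panic(err)
	}
	fmt.Println(call.Name, call.Arguments["city"])
}
```

Forcing JSON only guarantees well-formed output, not the expected schema, so the decode step should still handle errors and missing fields.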
* main:
  - chains: fix add ignore StreamingFunc (tmc#639)
  - Update huggingface.mdx
  - chore: Pinning chroma-go ahead of major new release
  - Clean up sequential_chain_example to make it a bit more readable (tmc#635)
  - Fix llm_math_chain example (tmc#634)
  - Update comments, bump example dependencies and clarify chain example (tmc#633)
  - chains: fix issue with overriding defaults in chainCallOption (tmc#632)
  - chains: add test with GoogleAI (tmc#628)
  - Revert "googleai: fix options need add default value" (tmc#627)
  - googleai: fix options need add default value
Fix agent stream callback
feat: add JSON format option to ollama
feat: run integration tests for vector databases using testcontainers-go modules
* Add bedrock embedding provider
* Add bedrock tests

---------

Co-authored-by: Travis Cline <[email protected]>
* Update `chroma-go` to the latest version
* Add error handling to NewOpenAIEmbeddingFunction
* Add a new property to the store (`openaiOrganization`) and pass it to `chroma-go`
vectorstores: Add support for OpenAI Organization ID header in Chroma
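The mechanics behind an organization header are simple enough to sketch independently of chroma-go: a wrapping `http.RoundTripper` sets the `OpenAI-Organization` header (a real OpenAI API header) on every outgoing request. The `orgTransport` type and `echoOrgHeader` helper below are our own illustrative names, not chroma-go or langchaingo API; a test server reflects the header back so the behavior is observable.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// orgTransport sets the OpenAI-Organization header on every outgoing
// request. The type and field names are ours, not chroma-go's.
type orgTransport struct {
	org  string
	next http.RoundTripper
}

func (t orgTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	req.Header.Set("OpenAI-Organization", t.org)
	return t.next.RoundTrip(req)
}

// echoOrgHeader starts a test server that reflects the header back,
// so we can verify the transport attached it.
func echoOrgHeader(org string) string {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, r.Header.Get("OpenAI-Organization"))
	}))
	defer srv.Close()

	client := &http.Client{Transport: orgTransport{org: org, next: http.DefaultTransport}}
	resp, err := client.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(echoOrgHeader("org-example"))
}
```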
change mrkl-prompt to python version
examples: Point to v0.1.5
1. add pgvector into the tests
2. add OPENAI_API_KEY and GENAI_API_KEY into the tests
3. deprecate the pgvector table-name Sanitize function
4. reset the pgvector Search SQL and make TestDeduplicater rerun
5. add TestWithAllOptions to test all options
6. because StuffDocuments.joinDocuments ignores document metadata, update some tests

Signed-off-by: Abirdcfly <[email protected]>
It seems like a bunch of links were broken and docs.langchain.com now redirects to the python docs.
* llms/anthropic: complete support for messages api
* llms/anthropic: fixed linting errors
* llms/anthropic: remove fatals
* llms/anthropic: fixed linting errors
* llms/anthropic: remove fatal from completions
* llms/anthropic: Default to use messages api, update example to use Opus

---------

Co-authored-by: Travis Cline <[email protected]>
…Content with images (tmc#713)

* llms/bedrock/internal/bedrockclient: Currently, antropicBinGenerationInputSource.Type is fixed to base64. According to the Claude3 API documentation, the image input format currently only accepts base64: https://docs.anthropic.com/claude/reference/messages_post. The existing implementation therefore produces the following error when making a request with an image:

  ```
  operation error Bedrock Runtime: InvokeModel, https response error StatusCode: 400, RequestID: 00000000-0000-0000-0000-0000000000000000, ValidationException: messages.0.content.0.image.source: Input tag 'image' found using 'type' does not match any of the expected tags: 'base64'
  exit status 1
  ```

  This commit corrects the above error and allows Claude3 to be called via Bedrock with image input.

* llms/bedrock/internal/bedrockclient: Consider multi-part MessageContent. The current implementation of llms/bedrock/internal/bedrockclient/provider_anthropic.processInputMessagesAnthropic does not account for a MessageContent containing multiple parts. Passing a MessageContent like the following results in an error:

  ```go
  []llms.MessageContent{
      {
          Role: schema.ChatMessageTypeHuman,
          Parts: []llms.ContentPart{
              llms.BinaryPart("image/png", image),
              llms.TextPart("Please text what is written on this image."),
          },
      },
  }
  ```

  ```
  operation error Bedrock Runtime: InvokeModel, https response error StatusCode: 400, RequestID: 00000000-0000-0000-0000-0000000000000000, ValidationException: messages: roles must alternate between "user" and "assistant", but found multiple "user" roles in a row
  ```

  This happens because each part of []llms.MessageContent is converted to its own []bedrockclient.Message entry. This commit fixes the above by modifying the processInputMessagesAnthropic code: it chunks the []bedrockclient.Message argument into groups with the same Role, then converts each chunk into an anthropicTextGenerationInputMessage.
* llms/bedrock/internal/bedrockclient: fix lint for "Consider pre-allocating `currentChunk` (prealloc)". golangci-lint error message:

  ```
  Error: Consider pre-allocating `currentChunk` (prealloc)
  ```

* llms/bedrock/internal/bedrockclient: fix goconst lint:

  ```
  string `text` has 3 occurrences, but such constant `AnthropicMessageTypeText` already exists (goconst)
  ```
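The chunking step described above (grouping consecutive messages that share a Role so the converted request satisfies Anthropic's role-alternation rule) can be sketched standalone. The `message` struct below is a stand-in for bedrockclient.Message, and `chunkByRole` is our own illustrative name, not the actual function in the commit:

```go
package main

import "fmt"

// message is a stand-in for bedrockclient.Message; only Role matters here.
type message struct {
	Role    string
	Content string
}

// chunkByRole groups consecutive messages that share a Role. Each chunk
// can then be converted into one input message for an API that requires
// roles to alternate between "user" and "assistant".
func chunkByRole(msgs []message) [][]message {
	var chunks [][]message
	for _, m := range msgs {
		if n := len(chunks); n > 0 && chunks[n-1][0].Role == m.Role {
			// Same role as the previous message: extend the current chunk.
			chunks[n-1] = append(chunks[n-1], m)
			continue
		}
		// Role changed (or first message): start a new chunk.
		chunks = append(chunks, []message{m})
	}
	return chunks
}

func main() {
	msgs := []message{
		{Role: "user", Content: "<image bytes>"},
		{Role: "user", Content: "Please transcribe this image."},
		{Role: "assistant", Content: "Sure."},
	}
	// The two consecutive "user" messages collapse into one chunk.
	fmt.Println(len(chunkByRole(msgs)))
}
```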
* Add Cloudflare Workers AI LLM
* lint fixes
* text generation: support streaming response
* add tests
* review fixes
* minor http client fix
…mc#715)

* examples: add sample code for OCR using Claude3's Vision feature with Bedrock
* llms: added mistral
…#722)

* go: Update to go 1.22, update golangci-lint config
* lint: Address various lint issues
* chains: fix lint complaint in TestApplyWithCanceledContext
* lint: Address additional lint issues
* lint: Address additional lint issues
* tools: update golangci-lint to 1.57
* examples: Add nvidia completion example
* examples: Tidy up examples
* examples: Point nvidia example to main
…pport (tmc#709)

* openai: Take steps to make tool calls over the older function calling API
* openai: Additional steps to evolve towards newer tool calling interface
* openai: Connect tool calling for openai backend
* openai: Fix up lint issue
* examples: pull httputil use
* tools: iterate on tools support
* openai: Fix up tool call response mapping
* llms: Cover additional message type in additional backends
* examples: temporarily point to branch
* openai: change type switch for ToolCallResponse
* examples: Clean up and refactor openai function calling example
* mistral: respect ChatMessageTypeTool
* feat: add Seed in mistral
* feat: add Seed in mistral, openai issue tmc#723
* googleai: combine options for googleai and vertex
* lint
* googleai: add safety/harm settings
* tests: make configuration options testable
feat: update image
feat: update postgres image
feat: update mysql image
* feat: update qdrant image
* feat: add opts in test, because the default embedding model does not work
* feat: add opts in test, because the default embedding model does not work
* chore: removed 5 lines for lint
examples: clarify openai-function-call-example; clean up the flow and don't use globals
update to tmc@8b67ef3 (v0.1.8 plus 11 commits)
Highlights: