Merge branch 'master' into tool_choice_option
keenborder786 authored Jan 25, 2025
2 parents 6f1e214 + dbb6b7b commit 793f435
Showing 210 changed files with 6,426 additions and 4,029 deletions.
5 changes: 2 additions & 3 deletions .github/scripts/check_diff.py
@@ -304,9 +304,8 @@ def _get_configs_for_multi_dirs(
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "cookbook/"]):
if file.startswith("docs/"):
docs_edited = True
elif file.startswith("docs/") or file in ["pyproject.toml", "poetry.lock"]: # docs or root poetry files
docs_edited = True
dirs_to_run["lint"].add(".")
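
The hunk above widens the docs branch so that edits to root Poetry files also trigger a docs/lint run. A toy sketch of that routing idea (a simplified, hypothetical helper — not the actual `check_diff.py` code):

```python
# Simplified sketch of the prefix-routing logic in the hunk above:
# docs/cookbook paths and root Poetry files trigger the docs/lint run.
DOCS_PREFIXES = ("docs/", "cookbook/")
ROOT_POETRY_FILES = {"pyproject.toml", "poetry.lock"}


def needs_docs_lint(file: str) -> bool:
    # A file qualifies if it lives under a docs prefix or is a root Poetry file
    return any(file.startswith(p) for p in DOCS_PREFIXES) or file in ROOT_POETRY_FILES


print(needs_docs_lint("docs/docs/how_to/index.mdx"))  # True
print(needs_docs_lint("libs/core/foo.py"))  # False
```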

dependents = dependents_graph()
92 changes: 50 additions & 42 deletions docs/docs/how_to/index.mdx
@@ -25,52 +25,10 @@ This highlights functionality that is core to using LangChain.
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: debug your LLM apps](/docs/how_to/debugging/)

## LangChain Expression Language (LCEL)

[LangChain Expression Language](/docs/concepts/lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html) protocol.

[**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives.

[**Migration guide**](/docs/versions/migrating_chains): For migrating legacy chain abstractions to LCEL.

- [How to: chain runnables](/docs/how_to/sequence)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: invoke runnables in parallel](/docs/how_to/parallel/)
- [How to: add default invocation args to runnables](/docs/how_to/binding/)
- [How to: turn any function into a runnable](/docs/how_to/functions)
- [How to: pass through inputs from one chain step to the next](/docs/how_to/passthrough)
- [How to: configure runnable behavior at runtime](/docs/how_to/configure)
- [How to: add message history (memory) to a chain](/docs/how_to/message_history)
- [How to: route between sub-chains](/docs/how_to/routing)
- [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
- [How to: pass runtime secrets to a runnable](/docs/how_to/runnable_runtime_secrets)

## Components

These are the core building blocks you can use when building applications.

### Prompt templates

[Prompt Templates](/docs/concepts/prompt_templates) are responsible for formatting user input into a format that can be passed to a language model.

- [How to: use few shot examples](/docs/how_to/few_shot_examples)
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)

### Example selectors

[Example Selectors](/docs/concepts/example_selectors) are responsible for selecting the correct few shot examples to pass to the prompt.

- [How to: use example selectors](/docs/how_to/example_selectors)
- [How to: select examples by length](/docs/how_to/example_selectors_length_based)
- [How to: select examples by semantic similarity](/docs/how_to/example_selectors_similarity)
- [How to: select examples by semantic ngram overlap](/docs/how_to/example_selectors_ngram)
- [How to: select examples by maximal marginal relevance](/docs/how_to/example_selectors_mmr)
- [How to: select examples from LangSmith few-shot datasets](/docs/how_to/example_selectors_langsmith/)

### Chat models

[Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
@@ -101,6 +59,26 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
- [How to: filter messages](/docs/how_to/filter_messages/)
- [How to: merge consecutive messages of the same type](/docs/how_to/merge_message_runs/)

### Prompt templates

[Prompt Templates](/docs/concepts/prompt_templates) are responsible for formatting user input into a format that can be passed to a language model.

- [How to: use few shot examples](/docs/how_to/few_shot_examples)
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)

### Example selectors

[Example Selectors](/docs/concepts/example_selectors) are responsible for selecting the correct few shot examples to pass to the prompt.

- [How to: use example selectors](/docs/how_to/example_selectors)
- [How to: select examples by length](/docs/how_to/example_selectors_length_based)
- [How to: select examples by semantic similarity](/docs/how_to/example_selectors_similarity)
- [How to: select examples by semantic ngram overlap](/docs/how_to/example_selectors_ngram)
- [How to: select examples by maximal marginal relevance](/docs/how_to/example_selectors_mmr)
- [How to: select examples from LangSmith few-shot datasets](/docs/how_to/example_selectors_langsmith/)

### LLMs

What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of language models that take a string in and output a string.
@@ -329,6 +307,36 @@ large volumes of text. For a high-level tutorial, check out [this guide](/docs/t
- [How to: summarize text through parallelization](/docs/how_to/summarize_map_reduce)
- [How to: summarize text through iterative refinement](/docs/how_to/summarize_refine)

## LangChain Expression Language (LCEL)

:::note Should I use LCEL?

LCEL is an orchestration solution. See our
[concepts page](/docs/concepts/lcel/#should-i-use-lcel) for recommendations on when to
use LCEL.

:::

[LangChain Expression Language](/docs/concepts/lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html) protocol.

[**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives.

[**Migration guide**](/docs/versions/migrating_chains): For migrating legacy chain abstractions to LCEL.

- [How to: chain runnables](/docs/how_to/sequence)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: invoke runnables in parallel](/docs/how_to/parallel/)
- [How to: add default invocation args to runnables](/docs/how_to/binding/)
- [How to: turn any function into a runnable](/docs/how_to/functions)
- [How to: pass through inputs from one chain step to the next](/docs/how_to/passthrough)
- [How to: configure runnable behavior at runtime](/docs/how_to/configure)
- [How to: add message history (memory) to a chain](/docs/how_to/message_history)
- [How to: route between sub-chains](/docs/how_to/routing)
- [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
- [How to: pass runtime secrets to a runnable](/docs/how_to/runnable_runtime_secrets)
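
The `|` composition these guides rely on can be illustrated with a toy model of the pipe idea behind the Runnable protocol (illustrative only; the real implementation lives in `langchain_core`):

```python
# Toy model of LCEL-style pipe composition: `a | b` builds a runnable that
# invokes a, then feeds its output to b. Not the real langchain_core code.
class ToyRunnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # Compose: run self first, then pass the result to `other`
        return ToyRunnable(lambda value: other.invoke(self.invoke(value)))


add_one = ToyRunnable(lambda x: x + 1)
double = ToyRunnable(lambda x: x * 2)
chain = add_one | double

print(chain.invoke(3))  # (3 + 1) * 2 = 8
```

Order matters: `double | add_one` would compute `3 * 2 + 1 = 7` instead.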

## [LangGraph](https://langchain-ai.github.io/langgraph)

LangGraph is an extension of LangChain aimed at
159 changes: 158 additions & 1 deletion docs/docs/integrations/chat/anthropic.ipynb
@@ -315,6 +315,163 @@
"ai_msg.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "301d372f-4dec-43e6-b58c-eee25633e1a6",
"metadata": {},
"source": [
"## Citations\n",
"\n",
"Anthropic supports a [citations](https://docs.anthropic.com/en/docs/build-with-claude/citations) feature that lets Claude attach context to its answers based on source documents supplied by the user. When [document content blocks](https://docs.anthropic.com/en/docs/build-with-claude/citations#document-types) with `\"citations\": {\"enabled\": True}` are included in a query, Claude may generate citations in its response.\n",
"\n",
"### Simple example\n",
"\n",
"In this example we pass a [plain text document](https://docs.anthropic.com/en/docs/build-with-claude/citations#plain-text-documents). In the background, Claude [automatically chunks](https://docs.anthropic.com/en/docs/build-with-claude/citations#plain-text-documents) the input text into sentences, which are used when generating citations."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e5370e6e-5a9a-4546-848b-5f5bf313c3e7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'text': 'Based on the document, ', 'type': 'text'},\n",
" {'text': 'the grass is green',\n",
" 'type': 'text',\n",
" 'citations': [{'type': 'char_location',\n",
" 'cited_text': 'The grass is green. ',\n",
" 'document_index': 0,\n",
" 'document_title': 'My Document',\n",
" 'start_char_index': 0,\n",
" 'end_char_index': 20}]},\n",
" {'text': ', and ', 'type': 'text'},\n",
" {'text': 'the sky is blue',\n",
" 'type': 'text',\n",
" 'citations': [{'type': 'char_location',\n",
" 'cited_text': 'The sky is blue.',\n",
" 'document_index': 0,\n",
" 'document_title': 'My Document',\n",
" 'start_char_index': 20,\n",
" 'end_char_index': 36}]},\n",
" {'text': '.', 'type': 'text'}]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-5-haiku-latest\")\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"document\",\n",
" \"source\": {\n",
" \"type\": \"text\",\n",
" \"media_type\": \"text/plain\",\n",
" \"data\": \"The grass is green. The sky is blue.\",\n",
" },\n",
" \"title\": \"My Document\",\n",
" \"context\": \"This is a trustworthy document.\",\n",
" \"citations\": {\"enabled\": True},\n",
" },\n",
" {\"type\": \"text\", \"text\": \"What color is the grass and sky?\"},\n",
" ],\n",
" }\n",
"]\n",
"response = llm.invoke(messages)\n",
"response.content"
]
},
{
"cell_type": "markdown",
"id": "69956596-0e6c-492b-934d-c08ed3c9de9a",
"metadata": {},
"source": [
"### Using with text splitters\n",
"\n",
    "Anthropic also lets you specify your own splits using [custom document](https://docs.anthropic.com/en/docs/build-with-claude/citations#custom-content-documents) types. LangChain [text splitters](/docs/concepts/text_splitters/) can be used to generate meaningful splits for this purpose. See the example below, where we split the LangChain README (a Markdown document) and pass it to Claude as context:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "04cc2841-7987-47a5-906c-09ea7fa28323",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'text': \"You can find LangChain's tutorials at https://python.langchain.com/docs/tutorials/\\n\\nThe tutorials section is recommended for those looking to build something specific or who prefer a hands-on learning approach. It's considered the best place to get started with LangChain.\",\n",
" 'type': 'text',\n",
" 'citations': [{'type': 'content_block_location',\n",
" 'cited_text': \"[Tutorials](https://python.langchain.com/docs/tutorials/):If you're looking to build something specific orare more of a hands-on learner, check out ourtutorials. This is the best place to get started.\",\n",
" 'document_index': 0,\n",
" 'document_title': None,\n",
" 'start_block_index': 243,\n",
" 'end_block_index': 248}]}]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import requests\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_text_splitters import MarkdownTextSplitter\n",
"\n",
"\n",
"def format_to_anthropic_documents(documents: list[str]):\n",
" return {\n",
" \"type\": \"document\",\n",
" \"source\": {\n",
" \"type\": \"content\",\n",
" \"content\": [{\"type\": \"text\", \"text\": document} for document in documents],\n",
" },\n",
" \"citations\": {\"enabled\": True},\n",
" }\n",
"\n",
"\n",
"# Pull readme\n",
"get_response = requests.get(\n",
" \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n",
")\n",
"readme = get_response.text\n",
"\n",
"# Split into chunks\n",
"splitter = MarkdownTextSplitter(\n",
" chunk_overlap=0,\n",
" chunk_size=50,\n",
")\n",
"documents = splitter.split_text(readme)\n",
"\n",
"# Construct message\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" format_to_anthropic_documents(documents),\n",
" {\"type\": \"text\", \"text\": \"Give me a link to LangChain's tutorials.\"},\n",
" ],\n",
"}\n",
"\n",
"# Query LLM\n",
"llm = ChatAnthropic(model=\"claude-3-5-haiku-latest\")\n",
"response = llm.invoke([message])\n",
"\n",
"response.content"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
@@ -342,7 +499,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.10.4"
}
},
"nbformat": 4,
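
The citation blocks in the notebook outputs above are plain dictionaries, so post-processing them needs no SDK. A hypothetical helper (the function name and sample data are illustrative, not part of the notebook) that collects every cited span from an Anthropic-style content list:

```python
# Hypothetical helper: gather the `cited_text` of every citation attached to
# a list of Anthropic-style content blocks, as shown in the outputs above.
def extract_cited_text(content_blocks):
    cited = []
    for block in content_blocks:
        # Blocks without citations simply contribute nothing
        for citation in block.get("citations", []):
            cited.append(citation["cited_text"])
    return cited


blocks = [
    {"text": "Based on the document, ", "type": "text"},
    {
        "text": "the grass is green",
        "type": "text",
        "citations": [{"type": "char_location", "cited_text": "The grass is green. "}],
    },
]
print(extract_cited_text(blocks))  # ['The grass is green. ']
```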
4 changes: 2 additions & 2 deletions docs/docs/integrations/chat/groq.ipynb
@@ -17,7 +17,7 @@
"source": [
"# ChatGroq\n",
"\n",
    "This will help you get started with Groq [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatGroq features and configurations, head to the [API reference](https://python.langchain.com/api_reference/groq/chat_models/langchain_groq.chat_models.ChatGroq.html). For a list of all Groq models, visit this [link](https://console.groq.com/docs/models).\n",
    "This will help you get started with Groq [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatGroq features and configurations, head to the [API reference](https://python.langchain.com/api_reference/groq/chat_models/langchain_groq.chat_models.ChatGroq.html). For a list of all Groq models, visit this [link](https://console.groq.com/docs/models?utm_source=langchain).\n",
"\n",
"## Overview\n",
"### Integration details\n",
@@ -37,7 +37,7 @@
"\n",
"### Credentials\n",
"\n",
Head to the [Groq console](https://console.groq.com/keys) to sign up for Groq and generate an API key. Once you've done this, set the GROQ_API_KEY environment variable:"
"Head to the [Groq console](https://console.groq.com/login?utm_source=langchain&utm_content=chat_page) to sign up for Groq and generate an API key. Once you've done this, set the GROQ_API_KEY environment variable:"
]
},
{
2 changes: 1 addition & 1 deletion docs/docs/integrations/chat/openai.ipynb
@@ -43,7 +43,7 @@
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| βœ… | βœ… | βœ… | βœ… | ❌ | ❌ | βœ… | βœ… | βœ… | βœ… | \n",
"| βœ… | βœ… | βœ… | βœ… | βœ… | ❌ | βœ… | βœ… | βœ… | βœ… | \n",
"\n",
"## Setup\n",
"\n",
