Merge branch 'langchain-ai:master' into master
quintinoandre authored Nov 14, 2024
2 parents 3315fe9 + 4b641f8 commit a49a781
Showing 95 changed files with 4,864 additions and 1,228 deletions.
1 change: 0 additions & 1 deletion .github/scripts/check_diff.py
@@ -37,7 +37,6 @@
PY_312_MAX_PACKAGES = [
f"libs/partners/{integration}"
for integration in [
"anthropic",
"chroma",
"couchbase",
"huggingface",
1 change: 1 addition & 0 deletions .github/workflows/_integration_test.yml
@@ -79,6 +79,7 @@ jobs:
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
+ XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
run: |
make integration_tests
1 change: 1 addition & 0 deletions .github/workflows/_release.yml
@@ -308,6 +308,7 @@ jobs:
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
+ XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}

6 changes: 3 additions & 3 deletions MIGRATE.md
@@ -1,11 +1,11 @@
# Migrating

- Please see the following guides for migratin LangChain code:
+ Please see the following guides for migrating LangChain code:

* Migrate to [LangChain v0.3](https://python.langchain.com/docs/versions/v0_3/)
* Migrate to [LangChain v0.2](https://python.langchain.com/docs/versions/v0_2/)
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions/migrating_chains/)
- * Upgrate to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
+ * Upgrade to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)

- The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help automatically upgrade your code to use non deprecated imports.
+ The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
This will be especially helpful if you're still on either version 0.0.x or 0.1.x of LangChain.
[4 new files, each adding a single line (@@ -0,0 +1 @@) of base64/zlib-encoded binary content; the payloads are not human-readable and are omitted here.]
2 changes: 1 addition & 1 deletion docs/docs/concepts/chat_history.mdx
@@ -17,7 +17,7 @@ Most conversations start with a **system message** that sets the context for the

The **assistant** may respond directly to the user or if configured with tools request that a [tool](/docs/concepts/tool_calling) be invoked to perform a specific task.

- So a full conversation often involves a combination of two patterns of alternating messages:
+ A full conversation often involves a combination of two patterns of alternating messages:

1. The **user** and the **assistant** representing a back-and-forth conversation.
2. The **assistant** and **tool messages** representing an ["agentic" workflow](/docs/concepts/agents) where the assistant is invoking tools to perform specific tasks.
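To make the two patterns concrete, here is a minimal sketch of each as a LangChain message list. It assumes `langchain-core` is installed; the `get_weather` tool and its result are hypothetical.

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    ToolMessage,
)

# Pattern 1: user/assistant back-and-forth conversation.
conversation = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("What is the capital of France?"),
    AIMessage("The capital of France is Paris."),
]

# Pattern 2: assistant/tool exchange in an "agentic" workflow.
# The assistant requests the (hypothetical) get_weather tool, and the
# tool's output comes back as a ToolMessage matched by tool_call_id.
agentic_turn = [
    HumanMessage("What's the weather in Paris?"),
    AIMessage(
        "",
        tool_calls=[{"name": "get_weather", "args": {"city": "Paris"}, "id": "call_1"}],
    ),
    ToolMessage("18°C and sunny.", tool_call_id="call_1"),
]
```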
10 changes: 5 additions & 5 deletions docs/docs/concepts/chat_models.mdx
@@ -2,7 +2,7 @@

## Overview

- Large Language Models (LLMs) are advanced machine learning models that excel in a wide range of language-related tasks such as text generation, translation, summarization, question answering, and more, without needing task-specific tuning for every scenario.
+ Large Language Models (LLMs) are advanced machine learning models that excel in a wide range of language-related tasks such as text generation, translation, summarization, question answering, and more, without needing task-specific fine tuning for every scenario.

Modern LLMs are typically accessed through a chat model interface that takes a list of [messages](/docs/concepts/messages) as input and returns a [message](/docs/concepts/messages) as output.
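As a minimal sketch of this interface (assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set; any chat model integration works the same way):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

# A list of messages goes in; a single AI message comes back.
response = model.invoke([HumanMessage("Translate 'good morning' to French.")])
print(response.content)
```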

@@ -85,7 +85,7 @@ Many chat models have standardized parameters that can be used to configure the
| Parameter | Description |
|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `model` | The name or identifier of the specific AI model you want to use (e.g., `"gpt-3.5-turbo"` or `"gpt-4"`). |
- | `temperature` | Controls the randomness of the model's output. A higher value (e.g., 1.0) makes responses more creative, while a lower value (e.g., 0.1) makes them more deterministic and focused. |
+ | `temperature` | Controls the randomness of the model's output. A higher value (e.g., 1.0) makes responses more creative, while a lower value (e.g., 0.0) makes them more deterministic and focused. |
| `timeout` | The maximum time (in seconds) to wait for a response from the model before canceling the request. Ensures the request doesn’t hang indefinitely. |
| `max_tokens` | Limits the total number of tokens (words and punctuation) in the response. This controls how long the output can be. |
| `stop` | Specifies stop sequences that indicate when the model should stop generating tokens. For example, you might use specific strings to signal the end of a response. |
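A sketch of how these standardized parameters might be passed to a chat model constructor (assumes `langchain-openai`; the values are illustrative only):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-4",    # name or identifier of the model
    temperature=0.0,  # deterministic, focused output
    timeout=30,       # seconds to wait before canceling the request
    max_tokens=256,   # cap on the length of the response
    stop=["\n\n"],    # stop generating at the first blank line
)
```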
@@ -97,9 +97,9 @@ Many chat models have standardized parameters that can be used to configure the
Some important things to note:

- Standard parameters only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so max_tokens can't be supported on these.
- - Standard params are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in ``langchain-community``.
+ - Standard parameters are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in `langchain-community`.

- ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel head to the [API reference](https://python.langchain.com/api_reference/) for that model.
+ Chat models also accept other parameters that are specific to that integration. To find all the parameters supported by a Chat model head to the their respective [API reference](https://python.langchain.com/api_reference/) for that model.

## Tool calling

@@ -150,7 +150,7 @@ An alternative approach is to use semantic caching, where you cache responses ba

A semantic cache introduces a dependency on another model on the critical path of your application (e.g., the semantic cache may rely on an [embedding model](/docs/concepts/embedding_models) to convert text to a vector representation), and it's not guaranteed to capture the meaning of the input accurately.

- However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider and improve response times.
+ However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider, costs, and improve response times.

Please see the [how to cache chat model responses](/docs/how_to/chat_model_caching/) guide for more details.
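As one way to realize this, a global in-memory cache can be enabled in a couple of lines (a sketch; `InMemoryCache` keys on the exact prompt, so only repeated identical requests hit the cache):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# After this call, identical chat model requests in this process are
# served from memory instead of hitting the model provider again.
set_llm_cache(InMemoryCache())
```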

2 changes: 1 addition & 1 deletion docs/docs/concepts/document_loaders.mdx
@@ -29,7 +29,7 @@ loader = CSVLoader(
data = loader.load()
```

- or if working with large datasets, you can use the `.lazy_load` method:
+ When working with large datasets, you can use the `.lazy_load` method:

```python
for document in loader.lazy_load():
    ...
```
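The hunk above shows only the first line of the lazy-loading loop; a fuller sketch of the pattern (with a hypothetical CSV path) might look like:

```python
from langchain_community.document_loaders import CSVLoader

loader = CSVLoader(file_path="./data/example.csv")  # hypothetical path

# Documents are yielded one at a time rather than
# materialized in memory all at once.
for document in loader.lazy_load():
    print(document.metadata)
```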
8 changes: 4 additions & 4 deletions docs/docs/concepts/lcel.mdx
@@ -6,7 +6,7 @@

The **L**ang**C**hain **E**xpression **L**anguage (LCEL) takes a [declarative](https://en.wikipedia.org/wiki/Declarative_programming) approach to building new [Runnables](/docs/concepts/runnables) from existing Runnables.

- This means that you describe what you want to happen, rather than how you want it to happen, allowing LangChain to optimize the run-time execution of the chains.
+ This means that you describe what *should* happen, rather than *how* it should happen, allowing LangChain to optimize the run-time execution of the chains.

We often refer to a `Runnable` created using LCEL as a "chain". It's important to remember that a "chain" is `Runnable` and it implements the full [Runnable Interface](/docs/concepts/runnables).
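As a minimal sketch of this declarative style (pure-Python Runnables, no model required):

```python
from langchain_core.runnables import RunnableLambda

# Composing two Runnables with `|` yields a new Runnable (a "chain");
# you declare the data flow and LangChain handles the execution.
chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 10)

print(chain.invoke(2))  # 30
```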

@@ -20,8 +20,8 @@ We often refer to a `Runnable` created using LCEL as a "chain". It's important t

LangChain optimizes the run-time execution of chains built with LCEL in a number of ways:

- - **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#runnableparallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
- - **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables/#asynchronous-support). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently.
+ - **Optimized parallel execution**: Run Runnables in parallel using [RunnableParallel](#runnableparallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
+ - **Guaranteed Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables/#asynchronous-support). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently.
- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token(time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out).
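A sketch of the first two optimizations above (parallel fan-out, the batch API, and the async API; pure-Python Runnables for illustration):

```python
import asyncio

from langchain_core.runnables import RunnableLambda, RunnableParallel

double = RunnableLambda(lambda x: x * 2)
square = RunnableLambda(lambda x: x * x)

# Fan one input out to several Runnables concurrently.
both = RunnableParallel(double=double, square=square)
print(both.invoke(3))  # {'double': 6, 'square': 9}

# Run many inputs through the same chain in parallel.
print(both.batch([1, 2, 3]))

# Any LCEL chain also exposes an async API.
print(asyncio.run(both.ainvoke(4)))
```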

Other benefits include:
@@ -38,7 +38,7 @@ LCEL is an [orchestration solution](https://en.wikipedia.org/wiki/Orchestration_

While we have seen users run chains with hundreds of steps in production, we generally recommend using LCEL for simpler orchestration tasks. When the application requires complex state management, branching, cycles or multiple agents, we recommend that users take advantage of [LangGraph](/docs/concepts/architecture#langgraph).

- In LangGraph, users define graphs that specify the flow of the application. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.
+ In LangGraph, users define graphs that specify the application's flow. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.

Here are some guidelines:
