docs: Add example using TypedDict in structured outputs how-to guide #27415

Conversation
…_output.ipynb` For me, the `Pydantic` example does not work (tested on various Python versions from 3.10 to 3.12, and `Pydantic` versions from 2.7 to the latest, 2.9). The `TypedDict` example (added in this PR) does.
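For context, the `TypedDict` variant of such a schema uses the `Annotated[type, default, description]` convention from the how-to guide, and `with_structured_output` then returns plain dicts rather than Pydantic instances. A minimal sketch (the guide recommends `typing_extensions.TypedDict`; plain `typing.TypedDict` is used here to keep the sketch stdlib-only, and the `example` values are made-up placeholders, not real model output):

```python
from typing import Annotated, Optional, TypedDict, Union


class Joke(TypedDict):
    """Joke to tell user."""

    setup: Annotated[str, ..., "The setup of the joke"]
    punchline: Annotated[str, ..., "The punchline to the joke"]
    rating: Annotated[Optional[int], None, "How funny the joke is, from 1 to 10"]


class ConversationalResponse(TypedDict):
    """Respond in a conversational manner. Be kind and helpful."""

    response: Annotated[str, ..., "A conversational response to the user's query"]


class FinalResponse(TypedDict):
    final_output: Union[Joke, ConversationalResponse]


# With TypedDict schemas, structured output arrives as a plain dict:
example: FinalResponse = {
    "final_output": {
        "setup": "Why don't cats play poker?",
        "punchline": "Too many cheetahs.",
        "rating": 7,
    }
}
print(example["final_output"]["punchline"])
```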
Please review this PR.

@barseghyanartur what do you mean by it not working?
It fails with an error. Would you like to see the traceback?

Yes, if you have it handy.
Two identical examples, one with OpenAI and one with Ollama.

This works (`simple_pipeline_structured_output/pipeline_openai_joke_multiple_schemas_pydantic.py`):

```python
from pprint import pprint
from typing import Any, Optional, Union

from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

__all__ = ("main",)

model = ChatOpenAI(model="gpt-4o-mini")


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        default=None, description="How funny the joke is, from 1 to 10"
    )


class ConversationalResponse(BaseModel):
    """Respond in a conversational manner. Be kind and helpful."""

    response: str = Field(description="A conversational response to the user's query")


class FinalResponse(BaseModel):
    final_output: Union[Joke, ConversationalResponse]


structured_llm = model.with_structured_output(FinalResponse)


def main() -> dict[str, Any]:
    """Entrypoint of the pipeline."""
    joke_result = structured_llm.invoke("Tell me a joke about cats")
    print("\n" + "*" * 26 + "\n")
    print("joke_result:")
    pprint(joke_result)

    conversational_result = structured_llm.invoke("How are you today?")
    print("\n" + "*" * 26 + "\n")
    print("conversational_result:")
    pprint(conversational_result)

    return {
        "joke_result": joke_result,
        "conversational_result": conversational_result,
    }


if __name__ == "__main__":
    main()
```

This fails (`simple_pipeline_structured_output/pipeline_ollama_joke_multiple_schemas_pydantic.py`), identical except for the model:

```python
from pprint import pprint
from typing import Any, Optional, Union

from pydantic import BaseModel, Field
from langchain_ollama import ChatOllama

__all__ = ("main",)

model = ChatOllama(model="llama3.1")


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        default=None, description="How funny the joke is, from 1 to 10"
    )


class ConversationalResponse(BaseModel):
    """Respond in a conversational manner. Be kind and helpful."""

    response: str = Field(description="A conversational response to the user's query")


class FinalResponse(BaseModel):
    final_output: Union[Joke, ConversationalResponse]


structured_llm = model.with_structured_output(FinalResponse)


def main() -> dict[str, Any]:
    """Entrypoint of the pipeline."""
    joke_result = structured_llm.invoke("Tell me a joke about cats")
    print("\n" + "*" * 26 + "\n")
    print("joke_result:")
    pprint(joke_result)

    conversational_result = structured_llm.invoke("How are you today?")
    print("\n" + "*" * 26 + "\n")
    print("conversational_result:")
    pprint(conversational_result)

    return {
        "joke_result": joke_result,
        "conversational_result": conversational_result,
    }


if __name__ == "__main__":
    main()
```

Trace:
P.S. Same with

Confirmed, and created an issue here: #28090
Updated language in the how-to guide.

Changed the title from "TypedDict in the structured_output.ipynb" to "TypedDict in structured outputs how-to guide".
@barseghyanartur this is likely llama-3.1 just not being good enough and not following the structured output schema. TODO item is to figure out how to improve the error messages so it's easier for users to understand that they're bumping against a model limitation.
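Until those error messages improve, one way to see what the model actually produced is to pass `include_raw=True` to `with_structured_output`, which makes the call return a dict with `raw`, `parsed`, and `parsing_error` keys instead of raising. A stdlib-only sketch of handling that result shape (`fake_result` and its contents are invented stand-ins for a real failed call, not actual llama3.1 output):

```python
def report(result: dict) -> str:
    """Summarize a with_structured_output(..., include_raw=True) result.

    Such a result is a dict with keys "raw" (the model's message),
    "parsed" (the schema instance, or None on failure), and
    "parsing_error" (the exception raised during parsing, or None).
    """
    if result["parsing_error"] is None:
        return "ok"
    return (
        f"model output did not match the schema: {result['parsing_error']}; "
        f"raw output was: {result['raw']}"
    )


# Hypothetical stand-in for a failed structured-output call:
fake_result = {
    "raw": "Sure! Why don't cats play poker? Too many cheetahs.",
    "parsed": None,
    "parsing_error": ValueError("output is not valid JSON"),
}
print(report(fake_result))
```

Surfacing the raw message this way makes it obvious when the failure is the model ignoring the schema rather than a bug in the calling code.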
For me, the Pydantic example does not work (tested on various Python versions from 3.10 to 3.12, and Pydantic versions from 2.7 to 2.9). The TypedDict example (added in this PR) does.

Additionally, fixed an error in the "Using PydanticOutputParser" example.

Was:

Corrected to: