
feat: Add Tool.from_component #159

Closed
wants to merge 20 commits into from

Conversation

@vblagoje (Member) commented Dec 18, 2024

Why:

Automates the conversion of Haystack components into LLM tools.

What:

  • Added component_schema.py, which converts component run parameters into a JSON tools schema format.
  • Modified tool.py to introduce a from_component method, enabling the creation of Tool instances from Haystack components. This makes tool creation and integration into pipelines more dynamic.
  • Added comprehensive unit tests in test_tool_component.py, validating the conversion and functionality of components as tools, including testing various data types and nested structures.
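As a rough illustration of the schema conversion (this is not the actual component_schema.py code), pydantic v2 can already derive a JSON schema for an annotated parameter type; the Document dataclass below is a stand-in for a component run parameter:

```python
from dataclasses import dataclass

from pydantic import TypeAdapter


# Stand-in for a component run parameter type; the PR inspects the
# component's run() signature to collect the real ones.
@dataclass
class Document:
    content: str


# json_schema() yields the JSON-schema fragment a tools schema needs
# for a `documents: list[Document]` parameter.
schema = TypeAdapter(list[Document]).json_schema()
print(schema["type"])  # "array"
```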

How can it be used:

  • Convert a component to a tool:
    tool = Tool.from_component(
        component=my_component,
        name="my_tool",
        description="A tool for processing data"
    )

and then use it as a Tool following the established patterns.

How did you test it:

  • Conducted unit tests covering various component types such as simple return types, data classes, and Pydantic models.
  • Performed integration tests with pipelines utilizing both OpenAI and Anthropic backend models, verifying tool correctness and error management.

Notes for the reviewer:

  • Pay attention to the handling of nested data structures and nullable types to ensure they are mapped and invoked correctly.

@coveralls commented Dec 18, 2024

Pull Request Test Coverage Report for Build 12653556844

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 2 unchanged lines in 1 file lost coverage.
  • Overall coverage increased (+0.5%) to 83.634%

Files with Coverage Reduction | New Missed Lines | %
dataclasses/tool.py | 2 | 98.26%

Totals Coverage Status:
  • Change from base Build 12624157864: +0.5%
  • Covered Lines: 2136
  • Relevant Lines: 2554

💛 - Coveralls

@anakin87 (Member) left a comment

Thanks for the effort you are putting into this PR.

I do have some questions and suggestions.

The requirement was to support components with basic str or native Python types as input. This implementation appears to go beyond that, which is great for flexibility but might become hard to maintain.

  • What's the advantage of supporting Pydantic models here? Maybe I am just missing some reasonable use cases...

As I commented in the PR, if I remember correctly, one of the initial requirements was to enable the deserialization of Tools from YAML, which would be feasible if Tools are treated as components. Is it possible? If yes, can we add some tests to cover this?

Given the main goal of this PR, would it make sense to involve someone from the DC team in the review?

haystack_experimental/dataclasses/tool.py (outdated, resolved)
Comment on lines 224 to 227
msg = (
    "Component has been added in a Pipeline and can't be used to create a Tool. "
    "Create Tool from a non-pipeline component instead."
)
Member

Can you please explain this?

If I remember correctly, one of the requirements was about deserializing Tools from YAML (which should be feasible if Tools are components). I'm not totally sure...

Member Author

Yes, I assumed we could have a component that is declared but not part of a pipeline. Maybe not; depending on that, we can remove this check.

Member

I still don't understand whether this is a self-imposed limitation (I don't think so) or whether there are strong reasons to avoid it. Could you please explain this point further?

@vblagoje (Member Author) commented Dec 19, 2024

> Thanks for the effort you are putting into this PR.
>
> I do have some questions and suggestions.
>
> The requirement was to support components with basic str or native Python types as input. This implementation appears to go beyond that, which is great for flexibility but might become hard to maintain.
>
>   • What's the advantage of supporting Pydantic models here? Maybe I am just missing some reasonable use cases...

Pydantic's TypeAdapter takes care of a wide range of JSON-to-object conversions, whether the target objects are strings, other native Python types, dataclasses, or Pydantic models.

It is in fact harder, and requires more code, to implement a version of Tool.from_component that supports only strings, native Python types, and lists of them, because we would need to detect the cases we don't support, raise errors, and so on.

TypeAdapter.validate_python handles the conversion for all of these. The unit and integration tests I included for dataclasses and Pydantic models show that this is possible with almost no code. We even support our own Document class, as shown in the test examples.

Given these benefits of TypeAdapter, which enables proper conversion support with about 10 lines of well-tested pydantic code, I went with that solution even though it went beyond the requirements.
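A minimal sketch of the conversion being described, assuming pydantic v2 (the City dataclass is just an example type, not from the PR):

```python
from dataclasses import dataclass

from pydantic import TypeAdapter


@dataclass
class City:
    name: str
    population: int


# validate_python turns a plain dict (e.g. the parsed JSON arguments of an
# LLM tool call) into the typed object, validating field types on the way.
city = TypeAdapter(City).validate_python({"name": "Berlin", "population": 3_600_000})
assert city == City(name="Berlin", population=3_600_000)
```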

> As I commented in the PR, if I remember correctly, one of the initial requirements was to enable the deserialization of Tools from YAML, which would be feasible if Tools are treated as components. Is it possible? If yes, can we add some tests to cover this?

Yes, I need to review this part as well.

> Given the main goal of this PR, would it make sense to involve someone from the DC team in the review?

Yes. @mathislucka is on PTO, and he has the most context. Let's wait for him.

@vblagoje vblagoje marked this pull request as ready for review December 20, 2024 14:07
@vblagoje vblagoje requested a review from a team as a code owner December 20, 2024 14:07
@vblagoje vblagoje requested review from anakin87 and removed request for a team December 20, 2024 14:07
@anakin87 (Member) left a comment

I like the simplification you have made.

I left other comments and asked Julian to take a look as well.

Comment on lines 5 to 7
from .component_schema import create_tool_parameters_schema

__all__ = ["create_tool_parameters_schema"]
Member

Member Author

Ok makes sense, will do 🙏

@julian-risch (Member) left a comment

The Tool.from_component functionality is great! I have some feedback about the tests, and I suggest we add defaults for the tool name and tool description so that from_component requires only a component as input in the standard case.

For the tests:

  • I suggest replacing gpt-4o with gpt-4o-mini if that works. Also consider replacing claude-3-5-sonnet-20240620 with a simpler model if that works.
  • I understand that long instructions like "Concatenate these documents: First one says 'Hello world' and second one says 'Goodbye world'. Set only content field of the document only. Do not set id, meta, score, embedding, sparse_embedding, dataframe, blob fields." are meant to test the implementation. Still, I suggest we also use Document instances as input directly. For example, have one e2e test with a retriever as a tool from a component, and another component that takes documents as input as a second tool.
  • As the tests for OpenAI and Anthropic are very similar, let's use parametrization, which will reduce lines of code. You might need to use pytest.skip() within the test to skip when API keys are not available.
  • Last but not least, we should have a test case that covers loading tools from components defined in a pipeline YAML. That should be a YAML where a ToolInvoker is defined and its init parameter for tools is a list of components. The ToolInvoker would then need to call from_component internally, as we discussed offline.
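The parametrization suggestion could look roughly like this (the test name, backend labels, and omitted pipeline setup are hypothetical):

```python
import os

import pytest

# One test body covers both LLM backends; a case is skipped at runtime
# when its API key is not available in the environment.
BACKENDS = [
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
]


@pytest.mark.parametrize("backend, key_env_var", BACKENDS)
def test_component_tool_e2e(backend, key_env_var):
    if not os.environ.get(key_env_var):
        pytest.skip(f"{key_env_var} not set, skipping {backend} e2e test")
    # ... build the pipeline with the chosen backend and assert on the tool calls ...
```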

Create a Tool instance from a Haystack component.

:param component: The Haystack component to be converted into a Tool.
:param name: Name for the tool.
Member

I don't want to make this PR more complex, but have we considered using defaults for name and description? For example, the component name as the default for name, and parts of the component docstring, extracted via docstring_parser, as the default for description.
That would also enable the ToolInvoker to call from_component internally if the input to its tools parameter is a list of components instead of tools, which I think is needed for defining tools as components in a YAML.
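A self-contained sketch of the suggested defaults (the helper name and example component are hypothetical; it uses inspect from the standard library where the comment above suggests docstring_parser):

```python
import inspect


def default_tool_metadata(component) -> tuple[str, str]:
    # Default name: the component's class name; default description: the
    # first line of its class docstring.
    name = type(component).__name__.lower()
    doc = inspect.getdoc(type(component)) or ""
    description = doc.splitlines()[0] if doc else ""
    return name, description


class ConcatenateDocuments:
    """Concatenate the content of a list of documents."""


name, description = default_tool_metadata(ConcatenateDocuments())
assert name == "concatenatedocuments"
assert description == "Concatenate the content of a list of documents."
```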

@vblagoje (Member Author) commented Jan 9, 2025

Abandoned and superseded by deepset-ai/haystack#8693

@vblagoje vblagoje closed this Jan 9, 2025
Successfully merging this pull request may close these issues.

Add Tool.from_component for components with str/native python types as input
4 participants