
Don't create useless FunctionGraph in linker tests #1022

Open
ricardoV94 opened this issue Oct 8, 2024 · 11 comments · May be fixed by #1107

Comments

@ricardoV94
Member

ricardoV94 commented Oct 8, 2024

Description

Our Numba/JAX/PyTorch backends create a FunctionGraph that is passed to compare_py_and_x, but this is unnecessary. We only use it to extract the inputs/outputs, so we could pass those directly.

The same goes for the set_test_value/get_test_value machinery. We are deprecating test values, so we should just pass the test values directly instead of setting/getting them on the variables.
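A minimal sketch of the direction I have in mind, using compare_numba_and_py as the example helper (the signature and argument order shown here are illustrative, not a final API):

import numpy as np
import pytensor.tensor as pt

x = pt.vector("x")
y = pt.vector("y")
out = x + y

# Instead of wrapping the graph in a FunctionGraph and attaching
# x.tag.test_value / y.tag.test_value, pass everything explicitly
# (hypothetical refactored helper signature):
compare_numba_and_py(
    [x, y],                             # graph inputs
    [out],                              # graph outputs
    [np.arange(3.0), np.arange(3.0)],   # test values, one per non-shared, non-constant input
)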

@AdvH039

AdvH039 commented Oct 29, 2024

I would like to work on this issue. Thanks.

@ricardoV94
Member Author

I would like to work on this issue. Thanks.

Feel free to open a PR

@AdvH039

AdvH039 commented Nov 1, 2024

How do I run the tests for the JAX and PyTorch backends? When I run the test suite in the tests directory, I don't see those tests being run. Thanks.

@ricardoV94
Member Author

How do I run the tests for the JAX and PyTorch backends? When I run the test suite in the tests directory, I don't see those tests being run. Thanks.

You need to install those packages, as they're optional dependencies; otherwise the tests are skipped automatically.

@AdvH039

AdvH039 commented Nov 4, 2024

I'm currently working on the issue and have noticed a few things. While the FunctionGraph's handling of inputs and outputs may seem unnecessary, it actually makes the tests easier to write: when initializing the FunctionGraph we can provide only the outputs, and the corresponding inputs are inferred implicitly. This is especially helpful when a graph has many inputs, saving time and effort. IMHO, unless removing the FunctionGraph significantly improves the performance or memory usage of the tests, we should keep it. Thank you.
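For illustration, a minimal sketch of the implicit-input behavior I mean (assuming FunctionGraph infers the inputs when only the outputs are given):

import pytensor.tensor as pt
from pytensor.graph.fg import FunctionGraph

x = pt.vector("x")
y = pt.vector("y")
out = x + y

# No inputs listed explicitly: FunctionGraph walks the graph from the
# outputs and collects the root variables itself.
fgraph = FunctionGraph(outputs=[out], clone=False)
print(fgraph.inputs)  # [x, y]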

@ricardoV94
Member Author

You need to specify test values for the inputs, so there's already a need to list the inputs manually anyway.

@AdvH039

AdvH039 commented Nov 22, 2024

If we are removing the set_test_value machinery, does that mean all the inputs we pass would be constants (not variables)?
Or is there a better way to initialise variables without using the set_test_value machinery?
And if we are just passing constants into the graph, we would not need to pass input_type and inputs into the modified compare_py_and_x function, right?
Thanks.

@ricardoV94
Member Author

ricardoV94 commented Nov 22, 2024

The variables can be initialized the very same way, just without the test values attached to them. They don't need to (and shouldn't) be replaced by constants.
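Something like this (just a sketch; the compare_numba_and_py call uses the hypothetical refactored signature):

import numpy as np
import pytensor.tensor as pt

# The variable is built exactly as before; we simply never touch x.tag.test_value.
x = pt.matrix("x")

# The test data lives in a plain array and is passed to the helper directly.
x_test = np.random.normal(size=(3, 4))

compare_numba_and_py([x], [pt.exp(x)], [x_test])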

@AdvH039

AdvH039 commented Nov 22, 2024

compare_numba_and_py(
    inputs,
    [g],
    [i.tag.test_value for i in inputs if not isinstance(i, SharedVariable | Constant)],
)

@ricardoV94 In a lot of places the input values are set through the test values of the variables (especially when we use @pytest.mark.parametrize). If the test values are not attached to the variables, how should we access them?
Thanks.

@ricardoV94
Member Author

ricardoV94 commented Nov 24, 2024

Change the parametrize to pass tuples of (variable, test_value) instead.
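For example (sketch only; compare_numba_and_py here is the hypothetical refactored helper taking inputs, outputs, and test values directly):

import numpy as np
import pytensor.tensor as pt
import pytest

@pytest.mark.parametrize(
    "x, x_test",
    [
        (pt.vector("x"), np.arange(3.0)),
        (pt.matrix("x"), np.arange(6.0).reshape(2, 3)),
    ],
)
def test_exp(x, x_test):
    out = pt.exp(x)
    compare_numba_and_py([x], [out], [x_test])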

@AdvH039 AdvH039 linked a pull request Nov 28, 2024 that will close this issue
@AdvH039

AdvH039 commented Nov 28, 2024

@ricardoV94 please let me know about further steps to be taken. Thanks.
