
Comparing different dependency-aware approaches when the size of x_explain is large #387

Closed
aliamini-uq opened this issue Apr 4, 2024 · 2 comments

@aliamini-uq

Dear @martinju,

Thanks again for introducing the shapr package.
Inspired by Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features, I want to compare the performance of different dependency-modeling approaches on the Abalone dataset. In that paper, the sizes of x_train and x_explain are 4077 and 100, respectively, but I want to increase the size of x_explain to 1200. Based on issue #370, it seems there is a limit of 200 on the size of x_explain. Could you please advise me on what I should do? My ultimate goal is to compare the performance of different approaches (e.g., independence, ctree, vaeac, and empirical) using plot_MSEv_eval_crit().
Thank you in advance for your valuable time.

Kind regards,
A

@aliamini-uq
Author

Dear @martinju,

I would be grateful if you could take a look at my problem.

Kind regards,
A

@martinju
Member

There is no limit on the size of x_explain. Just increase n_batches accordingly and it should be fine (in theory). If you want to be on the safe side (not risking losing anything), you can pass just a few rows of x_explain at a time.
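The chunking idea above could be sketched roughly as follows in R. This is only an illustration, not code from the shapr maintainers: the exact explain() argument names (e.g. n_batches, prediction_zero) vary between shapr versions, and the objects model, x_train, x_explain, and y_train are assumed to already exist from your own workflow (check ?explain for your installed version before using this).

```r
library(shapr)

# Split the 1200 explicands into chunks of e.g. 100 rows each,
# and call explain() once per chunk so that a failure part-way
# through only loses the current chunk.
chunk_size <- 100
row_chunks <- split(
  seq_len(nrow(x_explain)),
  ceiling(seq_len(nrow(x_explain)) / chunk_size)
)

results <- lapply(row_chunks, function(rows) {
  explain(
    model = model,
    x_explain = x_explain[rows, , drop = FALSE],
    x_train = x_train,
    approach = "ctree",  # or "independence", "empirical", "vaeac"
    prediction_zero = mean(y_train),  # assumed baseline; adjust to your setup
    n_batches = 10  # larger values reduce peak memory use
  )
})

# Optionally save each chunk's result to disk as you go,
# e.g. saveRDS(), so partial progress is never lost.
```

Repeating this loop once per approach and comparing the collected results is one way to get the inputs needed for an MSEv-style comparison across approaches.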
