Learner fallback wishlist #803

Open
jemus42 opened this issue Dec 4, 2023 · 0 comments

While setting up my current benchmark, I learned a few things that I wish I had read in the book beforehand:

  1. When doing nested resampling with an AutoTuner, the "inner" learner can have a fallback, which triggers on errors during the inner resampling loop.
    However, errors during the outer resampling loop are only caught if the AutoTuner itself also has a fallback; without one, a single error can crash the entire tuning process (see the first sketch after this list).

  2. When constructing a GraphLearner, the fallback should be added to the "finished" GraphLearner object. If the base learner gets a fallback and is then wrapped into a GraphLearner, the GraphLearner's $fallback will be NULL: errors will be silently ignored and will not show up in the errors column of the ResampleResult (second sketch below).
    This is the worst kind of failure: the silent one 🙃
    In my mind this feels like a potential use case for a note-box or something.
    Big ⚠️ and 🚨 and everything.
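
For reference, a minimal sketch of the two-level setup from point 1, assuming the mlr3/mlr3tuning API as of this issue's date (`$encapsulate` and `$fallback` as assignable fields); the concrete learner, task, tuner, and budget are arbitrary placeholders:

```r
library(mlr3)
library(mlr3tuning)

# Inner learner: encapsulation + fallback catch errors that occur
# during the inner resampling loop of the tuning.
learner = lrn("classif.rpart")
learner$encapsulate = c(train = "evaluate", predict = "evaluate")
learner$fallback = lrn("classif.featureless")

at = auto_tuner(
  tuner = tnr("random_search"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  term_evals = 10
)

# The AutoTuner itself also needs encapsulation + a fallback,
# otherwise an error in the outer resampling loop is fatal.
at$encapsulate = c(train = "evaluate", predict = "evaluate")
at$fallback = lrn("classif.featureless")

rr = resample(tsk("sonar"), at, rsmp("cv", folds = 3))
```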

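And a minimal sketch of the GraphLearner pitfall from point 2, under the same API assumptions; `po("scale")` and `classif.rpart` are just illustrative choices:

```r
library(mlr3)
library(mlr3pipelines)

# Pitfall: a fallback set on the base learner before wrapping is
# NOT propagated -- the GraphLearner's $fallback stays NULL and
# errors are swallowed silently.
base = lrn("classif.rpart")
base$fallback = lrn("classif.featureless")
glrn_bad = as_learner(po("scale") %>>% base)
glrn_bad$fallback  # NULL

# Correct: set encapsulation + fallback on the finished GraphLearner.
glrn = as_learner(po("scale") %>>% lrn("classif.rpart"))
glrn$encapsulate = c(train = "evaluate", predict = "evaluate")
glrn$fallback = lrn("classif.featureless")
```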