Assertion issue in Chapter 12 (hypertuning) #1150

Open
maltenform opened this issue Jan 17, 2025 · 3 comments
maltenform commented Jan 17, 2025

I'm running the hyperparameter tuning code from Chapter 12:

progressr::with_progress(expr = {
  rr_spcv_svm = mlr3::resample(task = task,
                               learner = at_ksvm,
                               # outer resampling (performance level)
                               resampling = perf_level,
                               store_models = FALSE,
                               encapsulate = "evaluate")
})

and I get

Error in assert_learner(fallback, task_type = self$task_type) :
Assertion on 'fallback' failed: Must inherit from class 'Learner', but has class 'NULL'.

I saw #1110 and tried downgrading my package versions (to 0.90 of mlr3extralearners and 21.1 of mlr3, as mlr3extralearners 0.90 said it needed mlr3 of at least 21.1), but with no success.

Is this a bug with a known fix?

If it helps, I'm using R 4.4.1 on a Mac running macOS 15.2.
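
For reference, one way such a downgrade can be attempted is with the remotes package. This is only a sketch: the exact version strings and the GitHub release tag name are assumptions based on the numbers above and may need adjusting.

# sketch only: pin older package versions
# mlr3 is on CRAN, so an archived version can be installed directly;
# mlr3extralearners is installed from GitHub, and the "v0.9.0" tag name is an assumption
remotes::install_version("mlr3", version = "0.21.1")
remotes::install_github("mlr-org/mlr3extralearners@v0.9.0")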

Nowosad (Member) commented Jan 23, 2025

@maltenform It seems to work when you remove encapsulate = "evaluate" (however, I cannot explain why this happens; maybe it relates to https://github.com/mlr-org/mlr3/pull/1109/files ?? @jannes-m)

progressr::with_progress(expr = {
  rr_spcv_svm = mlr3::resample(task = task,
                               learner = at_ksvm, 
                               # outer resampling (performance level)
                               resampling = perf_level,
                               store_models = FALSE)
})

maltenform (Author) commented

And so it does work! Have it running now. Thank you!

jannes-m (Collaborator) commented

Well, leaving out the encapsulation parameter means the default NA is used, so a single failed model will stop the entire resampling. Hence, this is not desirable; instead, we would like to have a fallback learner in these cases.
In any case, this seems to be an mlr3 bug, so I have opened a corresponding issue; see mlr-org/mlr3#1249.
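
Until this is fixed upstream, one possible workaround (a minimal, untested sketch, not the book's official code) is to set the encapsulation and a fallback learner directly on the learner before calling resample(), rather than via resample()'s encapsulate argument. This assumes mlr3 >= 0.21.0, where $encapsulate() is a method that also takes the fallback learner, and a classification task, so lrn("classif.featureless") is used here as a hypothetical fallback; at_ksvm, task and perf_level are the objects from Chapter 12.

library(mlr3)

# set encapsulation and a fallback learner on the (auto-tuned) learner itself;
# classif.featureless is a hypothetical fallback choice for this sketch
at_ksvm$encapsulate(method = "evaluate",
                    fallback = lrn("classif.featureless"))

progressr::with_progress(expr = {
  rr_spcv_svm = mlr3::resample(task = task,
                               learner = at_ksvm,
                               # outer resampling (performance level)
                               resampling = perf_level,
                               store_models = FALSE)
})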
