It appears that the `fit` method for ensembles is also where the estimators are instantiated. It would be convenient (for example, for fine-tuning pretrained ensembles) if instantiation and training happened in separate steps. Would it be possible to decouple the instantiation and training steps so that training can be continued? Or is this functionality already available in some other way?
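To make the request concrete, here is a rough sketch of the two-step usage this would enable. `init_estimators` is hypothetical (it does not exist in the library today), and `MLP` and `train_loader` stand in for any user-defined model and data loader:

```python
from torchensemble import VotingClassifier  # existing class; the two-step API below is hypothetical

ensemble = VotingClassifier(estimator=MLP, n_estimators=5)  # MLP: any nn.Module subclass

ensemble.init_estimators()  # hypothetical: build the estimator pool without training
# ... load pretrained weights into ensemble.estimators_ here ...
ensemble.fit(train_loader, epochs=5)  # continue training instead of re-instantiating
```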
It seems like this might be straightforward to implement for any class where all estimators are initialized at once (i.e., I think adversarial, bagging, fusion, gradient boosting, soft gradient boosting, and voting), since each of these builds the whole pool at the start of `fit`:
```python
# Instantiate a pool of base estimators, optimizers, and schedulers.
estimators = []
for _ in range(self.n_estimators):
    estimators.append(self._make_estimator())
```
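One way to decouple this is to move that loop into its own idempotent method that only fills the pool when it is empty. This is a minimal standalone sketch, not the library's actual API; the method name `init_estimators` and the idempotence check are assumptions:

```python
import torch.nn as nn

class EnsembleSketch(nn.Module):
    """Minimal sketch of decoupled instantiation and training."""

    def __init__(self, make_estimator, n_estimators):
        super().__init__()
        self._make_estimator = make_estimator  # callable returning an nn.Module
        self.n_estimators = n_estimators
        self.estimators_ = nn.ModuleList()

    def init_estimators(self):
        # Idempotent: only fill the pool when it is empty, so calling
        # fit() on a pretrained ensemble keeps the learned weights.
        while len(self.estimators_) < self.n_estimators:
            self.estimators_.append(self._make_estimator())

    def fit(self, train_loader, epochs=1):
        self.init_estimators()  # no-op when the pool already exists
        for estimator in self.estimators_:
            pass  # existing per-estimator training loop, unchanged
```

With this split, loading checkpoint weights into `estimators_` before calling `fit` continues training rather than starting from scratch.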
For fast geometric and snapshot ensembles, it seems like you could still manage a list and continue training from its last element (instantiating the first estimator into an otherwise empty list for a new ensemble); see the sketch below.
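For those two strategies, the continuation step might look like the following rough sketch (`next_snapshot` is a hypothetical helper, not part of the codebase):

```python
import copy

def next_snapshot(estimators, make_estimator):
    """Return the estimator to train in the next cycle.

    An empty list means a fresh ensemble, so a new estimator is
    instantiated; otherwise training continues from a copy of the
    most recent snapshot in the list.
    """
    if not estimators:
        return make_estimator()
    return copy.deepcopy(estimators[-1])

# Usage per training cycle:
#   estimator = next_snapshot(estimators, self._make_estimator)
#   ... train estimator for one cycle ...
#   estimators.append(estimator)
```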
Hi @jtpdowns, I think you are right. It would be convenient to decouple the training and model-initialization steps. A pull request would be very much appreciated.