First of all, thanks a lot for this clean piece of software 👍
I am using .itertrain() to train a deep autoencoder, supplying a training and a validation dataset (numpy.ndarray) while running various monitoring tasks.
The problem is the inefficient execution footprint: on every invocation of .itertrain(), the datasets are copied from host to GPU. Correct me if I am wrong :)
To overcome this bottleneck, I tried supplying the datasets to .itertrain() as theano.shared objects, which doesn't seem to work. According to the documentation of .itertrain(), a theano.shared object cannot be supplied directly, but a downhill.dataset.Dataset object can be, which in turn can be populated with a theano.shared object.
Unfortunately, this doesn't work either... What am I doing wrong?
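For reference, here is a minimal sketch of what I am attempting. `net` stands for my autoencoder model, and the array shapes are purely illustrative; I am not sure the downhill.Dataset wrapping step is the intended usage, which is exactly my question:

```python
import numpy as np
import theano
import downhill

# Illustrative data; real datasets are much larger.
train_data = np.random.randn(1000, 64).astype('float32')
valid_data = np.random.randn(200, 64).astype('float32')

# Step 1: push the data to the GPU once, via theano.shared,
# instead of letting each .itertrain() call copy it over.
train_shared = theano.shared(train_data, name='train')
valid_shared = theano.shared(valid_data, name='valid')

# Step 2: wrap the shared variables in downhill Dataset objects,
# as I understood the .itertrain() documentation to suggest.
train_set = downhill.Dataset(train_shared, name='train')
valid_set = downhill.Dataset(valid_shared, name='valid')

# Step 3: train with monitoring, reusing the GPU-resident data.
for train_mon, valid_mon in net.itertrain(train_set, valid_set):
    pass  # custom monitoring tasks go here
```

Passing plain numpy arrays in place of `train_set`/`valid_set` works fine; the shared-variable variants above are what fail for me.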