Add options dialog to control CPU/GPU mode, and graphics card selection #16

Open
ctrueden opened this issue Dec 11, 2018 · 0 comments

@frauzufall and I talked through how we could go about providing user-facing control over whether TensorFlow operates in CPU mode or GPU mode, as well as which graphics card is selected in GPU mode. I pushed the skeleton of an options plugin for ImageJ to expose those options (47f6d33 on the options-dialog branch), but there are several challenges:

  • For the mode, we need to set the system property org.tensorflow.NativeLibrary.MODE before the DefaultTensorFlowService class loads, i.e. before the SciJava Context is created. So, in effect, we would need a restart.

  • For graphics card selection, we need to set the CUDA_VISIBLE_DEVICES and/or CUDA_DEVICE_ORDER environment variables, again before TensorFlow initializes, i.e. before the SciJava Context is created. So again: a restart.
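To make the two constraints above concrete, here is a minimal sketch (not actual imagej-tensorflow code; the class and method names are made up for illustration). The mode is a system property, which the current JVM can set for itself as long as it does so before any TensorFlow class loads; the CUDA variables are environment variables, which a running JVM cannot change for itself, hence the restart via ProcessBuilder:

```java
// Hypothetical sketch: configure TensorFlow mode and GPU selection
// before org.tensorflow.NativeLibrary's static initializer runs.
class TensorFlowLaunchConfig {

    /** Selects "cpu" or "gpu" mode for the current JVM. Must be called
     *  before any TensorFlow class is loaded. */
    static void setMode(String mode) {
        System.setProperty("org.tensorflow.NativeLibrary.MODE", mode);
    }

    /** Builds a relaunch command whose environment selects a graphics
     *  card. Env vars of a running JVM cannot be changed in place, so
     *  GPU selection requires restarting into a child process. */
    static ProcessBuilder relaunchWithGpu(String deviceIndex, String... command) {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.environment().put("CUDA_DEVICE_ORDER", "PCI_BUS_ID");
        pb.environment().put("CUDA_VISIBLE_DEVICES", deviceIndex);
        return pb;
    }
}
```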

But even with a restart: how do we ensure that, when the JVM starts up, these system properties and environment variables are set early enough? What machinery could do this? We have nothing in place. We could have a file containing the desired key/value pairs, which the launcher's Java code reads and applies as early as possible, before instantiating the SciJava Context. But this would be a new feature. Perhaps it could be part of ImageJ.cfg?
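The launcher-side machinery could look roughly like this sketch (the class name and the idea of a separate key=value file are assumptions, not existing ImageJ code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Hypothetical launcher helper: read a key=value file (e.g. stored next
// to ImageJ.cfg) and apply each entry as a system property, before the
// SciJava Context is instantiated.
class EarlyProperties {

    /** Applies each key/value pair as a JVM system property. */
    static void apply(Properties props) {
        for (String key : props.stringPropertyNames())
            System.setProperty(key, props.getProperty(key));
    }

    /** Reads the config file and applies its entries. Must run before
     *  any code that could trigger TensorFlow's static initializers. */
    static void apply(Path cfgFile) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(cfgFile)) {
            props.load(in);
        }
        apply(props);
    }
}
```

Environment variables would still need the relaunch trick, since they cannot be set for the current process from Java.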

An alternative could be to ensure that the TensorFlowService does not reference any TensorFlow classes directly. E.g., similar to the LegacyService and IJ1Helper split of imagej-legacy, it might work to have something like tfService.actions().loadModel(...) as a level of indirection. It might be worth trying, but even if it works, it is quite ugly and unintuitive. Before we do that, let's verify whether creating a SciJava Context really initializes TensorFlow that early; maybe it doesn't.
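The indirection could be sketched as follows. This is hypothetical code, not the actual imagej-tensorflow or imagej-legacy implementation: the service API mentions no TensorFlow types, and the one class that does (stubbed here as DummyTensorFlowActions) is loaded reflectively on first use, so creating the service triggers no TensorFlow static initializers:

```java
// Hypothetical indirection layer, modeled on LegacyService/IJ1Helper.
interface TensorFlowActions {
    Object loadModel(String source, String modelName);
}

// Stand-in for the class that would actually import org.tensorflow.*.
class DummyTensorFlowActions implements TensorFlowActions {
    @Override
    public Object loadModel(String source, String modelName) {
        return "model:" + modelName; // real code would return a SavedModelBundle
    }
}

class TensorFlowServiceSketch {
    private final String implClassName;
    private TensorFlowActions actions;

    TensorFlowServiceSketch(String implClassName) {
        this.implClassName = implClassName;
    }

    /** Lazily instantiates the TF-referencing implementation by name, so
     *  this service class itself never links against TensorFlow. */
    synchronized TensorFlowActions actions() {
        if (actions == null) {
            try {
                actions = (TensorFlowActions) Class.forName(implClassName)
                    .getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("Cannot load " + implClassName, e);
            }
        }
        return actions;
    }
}
```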

We could submit a pull request to the TensorFlow project that defers the loading of the native library until the first time any TensorFlow operation is performed. I.e.: eliminate the static initializers, since they create the problems described above. But that would be a non-trivial change to TensorFlow, and potentially more difficult to maintain.
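In spirit, the upstream change amounts to replacing the eager static initializer with the initialization-on-demand holder idiom. A minimal sketch (the class name is invented and the actual System.loadLibrary call is elided; a loadCount field is added purely to demonstrate the laziness):

```java
// Hypothetical sketch of the proposed upstream change: the native
// library loads on the first TensorFlow call, not at class-load time.
class LazyNativeLibrary {
    static int loadCount = 0; // demo only: counts actual load attempts

    private LazyNativeLibrary() {}

    private static class Holder {
        // This initializer runs only when Holder is first referenced,
        // i.e. on the first call to ensureLoaded().
        static final boolean LOADED = load();
        private static boolean load() {
            loadCount++;
            // System.loadLibrary("tensorflow_jni"); // the real load, elided here
            return true;
        }
    }

    /** Would be called at the top of every public TensorFlow entry point. */
    static void ensureLoaded() {
        if (!Holder.LOADED)
            throw new UnsatisfiedLinkError("TensorFlow native library failed to load");
    }
}
```

The JVM guarantees the holder's initializer runs exactly once, and only on first use, which is precisely the deferred behavior we would want from upstream.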
