Error when preprocessing data #35
I have the same problem here:
Have you fixed it? |
Hi Mihaela, I personally haven't tried running the code on a cloud VM, but I wouldn't necessarily expect that type of setup to have the OpenGL support required for preprocessing, unfortunately. This error message is likely an indication that the OpenGL implementation provided by the VM is missing some required features. When you say you were running in headless mode, do you mean that the VM was headless, or that you ran the code in headless mode as described in the README (i.e. using 'export PANGOLIN_WINDOW_URI=headless://')? Alex, are you also running on a VM? |
@tschmidt23 I am running it locally. I noticed that the script actually outputs the .npz files regardless of the mentioned error message. |
I have also met the same problem: However, I could still get all of the processed .npz files. But when I tried to train the network, it failed with the error below: How can I fix it? |
I also hit the leaf variable issue when using PyTorch 1.2.0. |
My torch version is 1.3.1. If I use version 1.0.0, will it be incompatible with my CUDA version? My CUDA version is 10.2.89. |
@CuiLily that is a separate issue and is actually related to a bug in PyTorch, see https://discuss.pytorch.org/t/why-am-i-getting-this-error-about-a-leaf-variable/58468/9 and the related GitHub issue, pytorch/pytorch#28370. It is a known issue with version 1.3.0 and apparently happens on 1.3.1 and 1.2.0 as well. Solutions for now are to use an older version (as @zhujunli1993 did) or to disable the maximum norm constraint for latent codes. This can be done by removing "CodeBound" from the specs.json. |
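(For anyone hitting the same thing, a minimal sketch of the failure mode and workaround: the latent codes live in an nn.Embedding whose max_norm constraint is what "CodeBound" controls; the embedding sizes below are arbitrary.)

```python
import torch

# Minimal reproduction of the bug on the affected PyTorch versions
# (1.2.0, 1.3.0, 1.3.1): an Embedding with max_norm renorms its weight
# in place during the forward pass, which raises "a leaf Variable that
# requires grad is being used in an in-place operation"
lat_vecs = torch.nn.Embedding(100, 256, max_norm=1.0)
out = lat_vecs(torch.tensor([0, 1]))  # RuntimeError on affected versions

# Workaround: drop the norm constraint, which is what removing
# "CodeBound" from specs.json does
lat_vecs = torch.nn.Embedding(100, 256)
out = lat_vecs(torch.tensor([0, 1]))  # works on any version
```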
@tschmidt23 I followed your advice and removed "CodeBound". But when I was training, at epoch 5 I hit these errors: Process finished with exit code 1 |
I did.
However, I have also met the same problem: After this error happens, preprocess_data.py cannot make any further progress. @tschmidt23 |
I have the same error: Any solution from the authors? Could you let us know the build environment you used, e.g. the OS and the version of each package? Thank you! |
I have the same problem. However, it did still process the dataset on my dev PC. I need to wait ~1 min per object. It did nothing on my training machine, maybe because OpenGL wasn't set up correctly. |
I have a similar problem, "unable to read texture XXXX". However, the error message is:
I have no idea about Pangolin. Can you help me solve this problem? BTW, I was running it in nvidia-docker. |
Did you run "export PANGOLIN_WINDOW_URI=headless://" before preprocessing it? |
I commented out the line and used headless Pangolin, and it seems to be chugging out results slowly, ~1 min per object like #35 (comment) |
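(For reference, a typical headless preprocessing invocation following the README; the ShapeNet path and split file below are just examples:)

```sh
# render/sample off-screen via Pangolin's headless window
export PANGOLIN_WINDOW_URI=headless://

python preprocess_data.py --data_dir data \
    --source /path/to/ShapeNetCore.v2/ --name ShapeNetV2 \
    --split examples/splits/sv2_sofas_train.json --skip
```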
I have the same problem, but the script does not output the .npz files. What should I do? |
About this error:
I submitted an issue to Pangolin. But this error doesn't matter; the preprocessing still works even when it appears. |
I have this same error.
The .npz files are still produced, but the preprocessing runs unacceptably slowly, something like 2 minutes per mesh per thread. |
@nikwl were you able to make the preprocessing run faster? |
@danielegrattarola No, I wasn't; I just ended up reimplementing the preprocessing from scratch. This PyPI library was helpful: https://pypi.org/project/mesh-to-sdf/ |
@nikwl oh god, two days spent trying to get this to work and there was a python package all along 😵 Thanks |
@danielegrattarola If you only need the SDF values then that library should serve you pretty well. You'll just need to build the infrastructure to run it on ShapeNet if that's your goal. |
Hey, would you mind sharing your scripts using this PyPI library? |
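(A minimal sketch of that approach, using sample_sdf_near_surface from the mesh-to-sdf PyPI page; the mesh path, sample count, and output name are placeholders, and the pos/neg (N, 4) layout follows what DeepSDF's data loader reads from each .npz:)

```python
import numpy as np
import trimesh
from mesh_to_sdf import sample_sdf_near_surface  # pip install mesh-to-sdf

# Load a single mesh (placeholder path)
mesh = trimesh.load("model_normalized.obj")

# Sample signed distances near the surface; the library normalizes
# the mesh to a unit sphere internally
points, sdf = sample_sdf_near_surface(mesh, number_of_points=250000)

# Pack (x, y, z, sdf) rows and split by sign into the pos/neg arrays
samples = np.concatenate([points, sdf[:, None]], axis=1).astype(np.float32)
np.savez("shape.npz", pos=samples[sdf > 0], neg=samples[sdf <= 0])
```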
I wish I had come across this page earlier, could have saved me 2 days of hair pulling 🤷 |
I followed the instructions on how to set up the environment, and when I ran the preprocessing script I got many lines with the following two errors.
Unfortunately, nothing is generated in the output folder. I am using the latest versions of all the dependencies, and I am running the script on a cloud VM in headless mode. What could be the problem?