Questions about dynamic environment rendering #22
Hello, thank you for your comment.

With NVIDIA Warp as the chosen rendering framework, we need to compute the bounding volume hierarchy (BVH) for a given mesh so that the ray-casting operation returns correct results. This computation takes quite some time compared to the other computations involved. When the positions of the mesh vertices change, the BVH must be recomputed at every update of the environment, and doing so slows down the simulation a lot. It is nevertheless possible; we just have not implemented direct support for it in the codebase. To enable it yourself, you basically need to recompute (refit) the mesh's BVH after each vertex update. Do note, however, that while this will slow down the simulation, you will still be able to use all the sensors available with our Warp implementation. If you are using sensors that are available in BOTH the Warp-based implementation and the native Isaac Gym rendering framework, it is generally faster to use the latter.

To train the VAE, we use this codebase: https://github.com/ntnu-arl/sevae/blob/main/sevae/train_seVAE.py. Note that this particular code, based on this work, trains a VAE to reconstruct exactly the input depth image, with the addition of a binary segmentation mask over regions for which you would like higher reconstruction accuracy (such as thin obstacles in our case). For the datasets to train the networks in the linked repository, we collected a large number of samples from the Aerial Gym Simulator consisting of depth images and segmentation masks; the thin obstacles were specifically labeled with a particular value in the segmentation mask. The VAE code provided in the repository should be plug-and-play with the training script; however, the image-augmentation pipeline that we use to create collision images based on this work is not open-source yet.

I hope this answers your question. If you have any other questions, please feel free to ask them.

Best,
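A minimal sketch of what this per-step BVH update could look like, assuming a recent (1.x) release of NVIDIA Warp, where `wp.Mesh.refit()` refits the BVH after the vertex buffer changes. The triangle data, the motion, and the single-ray kernel below are placeholders for illustration, not anything from the Aerial Gym codebase:

```python
import numpy as np
import warp as wp

wp.init()
device = "cuda:0"

# Placeholder environment mesh: a single triangle facing the x-axis.
verts = np.array([[0.0, -1.0, -1.0],
                  [0.0,  1.0, -1.0],
                  [0.0,  0.0,  1.0]], dtype=np.float32)
faces = np.array([0, 1, 2], dtype=np.int32)

# The BVH is built once when the wp.Mesh is constructed.
mesh = wp.Mesh(points=wp.array(verts, dtype=wp.vec3, device=device),
               indices=wp.array(faces, dtype=wp.int32, device=device))

@wp.kernel
def cast_ray(mesh_id: wp.uint64, hit_t: wp.array(dtype=float)):
    # Cast one ray along +x from x = -2 (Warp 1.x query-object API).
    q = wp.mesh_query_ray(mesh_id, wp.vec3(-2.0, 0.0, 0.0),
                          wp.vec3(1.0, 0.0, 0.0), 100.0)
    if q.result:
        hit_t[0] = q.t

hit_t = wp.zeros(1, dtype=float, device=device)

for step in range(100):
    # Placeholder motion; in practice the new vertex positions would come
    # from the simulator state at every environment update.
    verts[:, 0] += 0.01
    wp.copy(mesh.points, wp.array(verts, dtype=wp.vec3, device=device))

    # Refit the BVH so ray casts against the moved mesh stay correct.
    # This is the extra per-update cost that slows the simulation down.
    mesh.refit()

    wp.launch(cast_ray, dim=1, inputs=[mesh.id, hit_t], device=device)
```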
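On the segmentation-mask idea, here is a minimal sketch of a mask-weighted VAE loss, assuming PyTorch; it is illustrative and not taken from the seVAE repository. Pixels inside the binary mask (e.g. thin obstacles) receive a larger reconstruction weight, so the decoder is pushed to preserve them; the function names and the weight `w_thin` are hypothetical:

```python
import torch
import torch.nn.functional as F

def weighted_recon_loss(depth_pred, depth_gt, thin_mask, w_thin=10.0):
    # Per-pixel reconstruction error, weighted so that masked regions
    # (e.g. thin obstacles) are penalized w_thin times more heavily.
    per_pixel = F.mse_loss(depth_pred, depth_gt, reduction="none")
    weights = 1.0 + (w_thin - 1.0) * thin_mask
    return (weights * per_pixel).mean()

def vae_loss(depth_pred, depth_gt, thin_mask, mu, logvar, beta=1.0):
    recon = weighted_recon_loss(depth_pred, depth_gt, thin_mask)
    # Standard KL divergence between q(z|x) = N(mu, sigma^2) and N(0, I).
    kl = -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```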
Hi,
Thanks for your excellent open-source work. I noticed that the documentation mentions: “If you are simulating dynamic environments, Isaac Gym sensors are required”. Does this mean that sensors implemented with Warp cannot render dynamic environments? What is the reason for this? Could you provide some more detail?
Additionally, could you share some details about your training process for the VAE? Thank you very much for your help!