Questions about dataset and results #9
Hi there,
Thank you for your time and reply! 2. The precision on the val set is fine, but the accuracy on the test set is not so good. I see that the code in src/data/nyu.py says (for NYUDepthV2, the crop size is fixed) height, width = (240, 320), crop_size = (228, 304), but the model details in the paper say the crop size is 512x340. Which one should I use as the patch width and height? And if the crop size in the code is fixed, is there no need to pass a crop size of 512x340 on the command line when training on the NYU dataset?
Regarding the second question, is it OK to first change the crop size from 228 to 240 and observe the results?
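To make the fixed crop concrete, here is a minimal sketch of how a (240, 320) input maps to a (228, 304) crop. The helper name and the center-crop policy are my assumptions for illustration, not code from the repo:

```python
def center_crop_box(height, width, crop_h, crop_w):
    """Return (top, left, bottom, right) of a center crop of (crop_h, crop_w)."""
    assert crop_h <= height and crop_w <= width
    top = (height - crop_h) // 2
    left = (width - crop_w) // 2
    return top, left, top + crop_h, left + crop_w

# NYUDepthV2 values quoted above: (240, 320) image, (228, 304) crop
box = center_crop_box(240, 320, 228, 304)
print(box)  # (6, 8, 234, 312)
```

If the crop is indeed hard-coded this way, any --patch_height/--patch_width passed on the command line would only matter where the dataset code actually reads them.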
I'm stuck building the mmcv environment. Could you please tell me the versions of your torch, torchvision, mmcv, mmdet, and mmdet3d? I really appreciate it.
I ran into the same situation as you. Could you please show me how you fixed it? Thanks!
Why does the loss suddenly become NaN halfway through the first epoch when training on NYUDepth? loss_sum turns NaN, which causes Loss to become NaN.
How do I train NYUDepth with a single GPU?
Hello! You say you have successfully trained on NYUv2. Could you tell me which versions of MMDetection3D, MMDetection, MMSegmentation, and mmcv-full your environment uses? I have been puzzled by this version problem for a long time and hope to receive a reply. Thanks.
@Huskie377 It was really tricky for me to install as well. The setup that worked for me in the end was Python 3.8. For the MM* libraries, since these are all older versions, I had to clone the repos, git checkout older branches, and install them myself; I couldn't use pip install or conda install directly. Be careful about Apex and installing it with CUDA: if there is a mismatch between the local CUDA and the CUDA that PyTorch was built with (even a minor-version mismatch, e.g. local CUDA 11.4 vs. PyTorch CUDA 11.2), Apex won't build with CUDA. Check out this issue: NVIDIA/apex#323 (comment). Installing is really tricky, but it works if you install everything carefully and make sure the versions between libraries are compatible. Good luck.
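A quick sketch of the version check described above, before attempting an Apex build with CUDA extensions. The helper is hypothetical; in practice you would compare the version printed by `nvcc --version` against `torch.version.cuda`:

```python
def cuda_versions_match(local_cuda: str, torch_cuda: str) -> bool:
    """True only when major AND minor CUDA versions agree, e.g. '11.4' vs '11.4'.

    A minor-version mismatch (11.4 vs 11.2) is exactly the case where Apex
    may refuse to build its CUDA extensions.
    """
    local_major, local_minor = local_cuda.split(".")[:2]
    torch_major, torch_minor = torch_cuda.split(".")[:2]
    return (local_major, local_minor) == (torch_major, torch_minor)

print(cuda_versions_match("11.4", "11.2"))  # False -> Apex CUDA build may fail
print(cuda_versions_match("11.2", "11.2"))  # True
```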
Hi, I ran into the same situation as the second problem you mentioned. Have you solved it? If possible, could you please share your commands for training on the NYU dataset?
Hi there! Thanks for the great work!
I have some questions about the datasets and training results.
The first is about the KITTI dataset. I see in the README that you use the raw portion of KITTI. When I went to download it, I found that the raw data is split into many per-date archives on the official website, and it is very troublesome to click and download them one by one. Could you provide a script to support one-click download? I spent a long time downloading all the dates and found the total is about 180 GB. I would also like to know which parts of the raw data you actually used; can I download just those?
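For anyone else stuck here, a small sketch of building download URLs per drive instead of clicking each link. The S3 URL pattern is an assumption based on the official download links (verify it against the KITTI site), and the drive ID is just an example:

```python
# Assumed KITTI raw-data mirror; confirm against the official download page.
BASE = "https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data"

def sync_zip_url(drive_id: str) -> str:
    """URL of the synced+rectified zip for one raw drive,
    e.g. drive_id = '2011_09_26_drive_0001'."""
    return f"{BASE}/{drive_id}/{drive_id}_sync.zip"

# Feed a list of drive IDs to wget/curl instead of clicking one by one
print(sync_zip_url("2011_09_26_drive_0001"))
```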
The second problem is that I have successfully trained on the NYUv2 dataset, but the final result is not so good: at epoch 20, test_RMSE is about 0.50. I want to know whether I made a mistake in the command. Could you provide the commands for the NYU dataset?
(--patch_height 340 --patch_width 512 --loss 1.0*L1+1.0*L2+1.0*DDIM --epochs 30 --batch_size 16 --max_depth 10.0 --save NAME_TO_SAVE --model_name Diffusion_DCbase_ --backbone_module swin --backbone_name swin_large_naive_l4w722422k --head_specify DDIMDepthEstimate_Swin_ADDHAHI)
Thanks again for your open source sharing!