ns-train in2n error; loaded pre-trained model gives bad results #92

Open
chkmook opened this issue Feb 21, 2024 · 6 comments


chkmook commented Feb 21, 2024

It seems that the Instruct-NeRF2NeRF (in2n) pipeline is not properly loading the pre-trained nerfacto model.

The first image is the result of ns-viewer with the pre-trained nerfacto model.
The second image is the image rendered by "ns-train in2n" before training starts, with the same pre-trained nerfacto model loaded.

A few weeks ago I confirmed that this was working properly with the same script file, but after downloading the repo again I no longer see the same results.

Looking at the viewer results (upper image), nerfacto appears to have trained well, but the initial NeRF render from in2n (lower image) suggests that the pre-trained nerfacto model is not being loaded properly.
Is there anything I am missing or should check?

The setup follows the Torch 2.1.2 with CUDA 11.8 instructions from https://docs.nerf.studio/quickstart/installation.html. After installing nerfstudio, I downloaded the instruct-nerf2nerf repo again to ensure that in2n could function properly.
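For reference, the launch follows the pattern from the instruct-nerf2nerf README; this is a sketch with placeholder paths and prompt, not my exact command:

```bash
# Launch in2n from a pre-trained nerfacto checkpoint.
# {PROCESSED_DATA_DIR}, the outputs/... path, and the prompt are placeholders.
ns-train in2n --data {PROCESSED_DATA_DIR} \
  --load-dir {outputs/.../nerfstudio_models} \
  --pipeline.prompt {"prompt"} \
  --pipeline.guidance-scale 7.5 \
  --pipeline.image-guidance-scale 1.5
```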

[Screenshots: ns-viewer render of the pre-trained nerfacto model (first image) and the initial in2n render (second image)]
@ayaanzhaque (Owner)

So the issue here is that once you train nerfacto and then render it after loading it, the results look worse?


chkmook commented Feb 22, 2024

Yes.
From the results rendered in the viewer, I think there are no issues in the pre-training of the nerfacto model.
But the results look worse when it is loaded in the in2n pipeline.

@ayaanzhaque (Owner)

That is super odd. Is this after some iterations of in2n have been run or no?


chkmook commented Feb 22, 2024

This is the result from right after starting, before any in2n iterations have run.

@ayaanzhaque (Owner)

Have you tried it after letting it train for some time? I'm not sure why this would be the case, but maybe the first few iterations of in2n updates mess things up. You can also launch with training paused, or in inference mode, so that you can see the model before any in2n steps are done (see the sketch below).
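For the inference-mode check, a minimal sketch (assuming the default nerfstudio output layout; the config path is a placeholder) is to point ns-viewer at the run's saved config, which loads the checkpoint without taking any training steps:

```bash
# Inspect the loaded checkpoint without running any in2n training steps.
# The config.yml path below is a placeholder for the actual run directory.
ns-viewer --load-config outputs/{DATA}/in2n/{TIMESTAMP}/config.yml
```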


chkmook commented Feb 22, 2024

I'll try pausing training for the initial iterations and check it then. Thank you for answering!
