
Evaluation code roomlayout.m of cooperative_scene_parsing doesn't work on the output #21

Open
chengzhag opened this issue Oct 20, 2020 · 3 comments


@chengzhag

Hi Yinyu:

I followed your guide to evaluate the results using the code from cooperative_scene_parsing. With the GT downloaded and some necessary changes to the path settings, roomlayout.m runs successfully on the results predicted by your work.

However, roomlayout.m outputs an IoU of 0 for every sample. Here is sample 1 as an example:
[screenshot: evaluation output showing IoU of 0]

I tried turning on the visualization option to find the reason:
[screenshot: MATLAB layout visualization]

The predicted layout bounding box shown in the visualization above looks nothing like the visualization produced by your code utils/visualize.py:
[screenshot: utils/visualize.py output]

Is there anything I missed? If more changes need to be made to the evaluation code from cooperative_scene_parsing, is it possible to open-source your version of the final evaluation code?

@chengzhag
Author

Sorry to bother you again at this inconvenient time. In case you missed my last email: I'm trying to do some work based on your paper and am eager to compare with your work on the same metrics. May I ask for the evaluation code you implemented based on cooperative_scene_parsing? If the code is not ready to be published on GitHub, could I get it before it's published? I hope not to take up too much of your time.

@YanjunLIU-ac

@pidan1231239 Hello there, I am also doing some work based on this project, and I ran into a problem similar to yours: in my case, the IoU of the 3D bounding boxes is always 0 when using the MATLAB evaluation code from "Cooperative scene parsing". Have you solved this problem? Any ideas on how to make it work properly and get the final evaluation results?

@chengzhag
Author

Hi, I've contacted Yinyu via email and got the following reply:

Sorry for the late reply, as we are currently pushing for a deadline. I think we will update the evaluation code after these busy days.
But I think the reason you get IoUs of 0 is that you used the GT data from their work. Actually, we used a different world coordinate system to build our system. You can visualize (or save yourself) our GT data from the visualization code by changing 'prediction' to 'gt' or 'both' in the arguments below:
[screenshot: visualization arguments]

Hope this can help you.
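For anyone debugging this, here is a minimal sketch (not the authors' evaluation code, and with made-up box parameters) of an axis-aligned 3D IoU, illustrating how a world-coordinate mismatch between predicted and GT layout boxes can collapse the IoU to 0 even when both describe the same room:

```python
def iou_3d(center_a, size_a, center_b, size_b):
    """Axis-aligned 3D IoU of two boxes, each given as a center
    (x, y, z) and full extents (w, h, d) along each axis."""
    inter = 1.0
    for c_a, s_a, c_b, s_b in zip(center_a, size_a, center_b, size_b):
        lo = max(c_a - s_a / 2, c_b - s_b / 2)
        hi = min(c_a + s_a / 2, c_b + s_b / 2)
        if hi <= lo:  # no overlap on this axis -> empty intersection
            return 0.0
        inter *= hi - lo
    vol_a = size_a[0] * size_a[1] * size_a[2]
    vol_b = size_b[0] * size_b[1] * size_b[2]
    return inter / (vol_a + vol_b - inter)

# Same room, same world frame: perfect overlap.
print(iou_3d((0, 1.2, 0), (4, 2.4, 5), (0, 1.2, 0), (4, 2.4, 5)))  # 1.0

# Same room, but GT expressed in a different world frame (different
# origin and axis convention): the boxes no longer intersect at all.
print(iou_3d((0, 1.2, 0), (4, 2.4, 5), (10, 0, -1.2), (4, 5, 2.4)))  # 0.0
```

This is consistent with the reply above: an IoU of exactly 0 on every sample usually means the two sets of boxes live in different coordinate systems, not that the predictions are bad, so re-exporting the GT in the project's own frame fixes the evaluation.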
