Hi! I've read your paper "Self-Supervised Camera Self-Calibration from Videos" and got some inspiration from it.
According to the paper, in order to get proper intrinsics one has to train the full pipeline on a large dataset such as KITTI.
But this process seems quite redundant, because one has to wait a long time (30-50 epochs). Did you try training the full pipeline on only a few samples, for example 10? It seems the full pipeline should reach overfitting quickly, and after that we could read off plausible intrinsics.
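To illustrate the intuition behind the question (this is a toy sketch, not the paper's pipeline): if the intrinsics are just differentiable parameters driven by a reprojection-style loss, even a handful of samples can be enough for gradient descent to recover them. The example below stands in the photometric loss with a simple 3D-to-2D reprojection error and fits only a focal length; all names and values here are hypothetical.

```python
import numpy as np

# Toy stand-in for self-calibration: recover a pinhole focal length from
# 10 synthetic 3D->2D correspondences by gradient descent. The real
# pipeline uses a photometric loss over video frames instead.
rng = np.random.default_rng(0)

f_true, cx = 500.0, 320.0                  # ground-truth focal length, principal point
pts = rng.uniform(1.0, 5.0, (10, 3))       # 10 random 3D points in front of the camera
u_obs = f_true * pts[:, 0] / pts[:, 2] + cx  # observed x-pixel coordinates

f = 300.0                                  # deliberately wrong initial focal length
lr = 1e-2
for _ in range(2000):
    ratio = pts[:, 0] / pts[:, 2]
    residual = f * ratio + cx - u_obs      # x-reprojection error per point
    grad = 2.0 * np.mean(residual * ratio) # d(mean residual^2) / df
    f -= lr * grad

print(round(f, 1))  # converges to ~500.0 from only 10 samples
```

This only shows that the intrinsics parameters themselves are cheap to fit once a useful gradient signal exists; whether the depth and pose networks can supply such a signal after overfitting on 10 real frames is exactly the open question above.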