Constructing pose graph of images from legacy reconstruction system by setting fragment size = 1 #6906
Unanswered · marioblue5 asked this question in Q&A
Hello all!
I want to construct a pose graph from RGB-D images, and I've been trying to use Open3D for this. I use the legacy reconstruction system to make, register, and refine fragments that are each built from a single image, in the hope of recovering a pose for every image/camera. The core of my current script is sketched below.
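The essential change from the stock pipeline is setting the fragment size to a single frame, so every fragment (and therefore every node in the scene pose graph) corresponds to one image. Roughly, the config I pass to the legacy reconstruction system looks like this; the keys follow the reconstruction system's config format, but the paths and values shown here are illustrative rather than my exact ones:

```python
# Config for the legacy reconstruction system (make -> register -> refine).
# Only the keys relevant to the one-image-per-fragment idea are shown;
# paths and values are illustrative.
config = {
    "path_dataset": "datasets/lounge/",   # folder containing image/ and depth/
    "path_intrinsic": "",                  # empty -> default PrimeSense intrinsics
    "n_frames_per_fragment": 1,            # one RGB-D frame per fragment
    "n_keyframes_per_n_frame": 1,
    "max_depth": 3.0,
    "max_depth_diff": 0.07,
    "voxel_size": 0.05,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": True,
}
# After the register/refine steps, the optimized scene-level pose graph is
# written as a JSON file under <path_dataset>/scene/, and with one image per
# fragment each node pose should be the camera-to-world pose of that image.
```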
My main issue is that the resulting pose graphs are rather inaccurate, particularly the rotations/angles. I'm using the first 50 images of the Stanford Lounge dataset as a proof of concept. The reconstructions from Open3D come out just fine when using the pose graph, but when I try to feed it into NeRFStudio for Gaussian Splatting the quality falls off. One thing I'm unsure of is whether I'm exporting the pose graph correctly from Open3D to NeRFStudio's format.
The data convention/format for NeRFStudio is:
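As I understand it from the NeRFStudio documentation, poses go into a transforms.json with a per-frame 4x4 camera-to-world matrix (in the OpenGL/Blender camera convention), roughly like the following; the intrinsics and file names here are placeholders:

```json
{
  "fl_x": 525.0, "fl_y": 525.0,
  "cx": 319.5,  "cy": 239.5,
  "w": 640, "h": 480,
  "camera_model": "OPENCV",
  "frames": [
    {
      "file_path": "images/frame_00001.png",
      "transform_matrix": [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]
      ]
    }
  ]
}
```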
Open3D's pose graph, on the other hand, stores each pose as just a 16-element 1D array. My conversion code uses NumPy's Fortran-style ("F", column-major) order (see the sketch below), which I believe is correct since the translation coordinates at the end of the array match up. However, I was unable to find any documentation on the ordering of the 16-element array that Open3D uses.
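Concretely, the export step looks roughly like this; it's a simplified sketch (intrinsics omitted, filenames illustrative), and the order="F" reshape is exactly the assumption I'm unsure about:

```python
import json
import numpy as np

# Load the scene pose graph JSON written by the legacy reconstruction system
# (exact filename may differ between Open3D versions).
with open("datasets/lounge/scene/refined_registration_optimized.json") as f:
    pose_graph = json.load(f)

frames = []
for i, node in enumerate(pose_graph["nodes"]):
    # Each node stores its pose as a flat list of 16 floats; reshape assuming
    # column-major ("F") storage -- this is the assumption in question.
    c2w = np.asarray(node["pose"], dtype=float).reshape((4, 4), order="F")
    frames.append({
        "file_path": f"images/frame_{i:05d}.png",   # naming is illustrative
        "transform_matrix": c2w.tolist(),
    })

# Write the NeRFStudio-style transforms.json (intrinsics keys omitted here).
with open("transforms.json", "w") as f:
    json.dump({"frames": frames}, f, indent=2)
```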
Once again, my goal is to create an accurate pose graph for RGB-D images. If anyone can offer advice, tips, or pointers to documentation, that would be greatly appreciated. Thanks!