Add support for multi-sensor scenarios #232
Conversation
Signed-off-by: Ignacio Vizzo <[email protected]>
The frame of reference is actually wrong in this change. You are transforming the point cloud into the base frame before deskewing, which is not correct. The location of the points will be completely different in this frame; we should always keep it in the LiDAR frame. The same holds for the downsampling, I think: we will get a different set of "keypoints". It probably doesn't matter for performance, but the pipeline will give different results when using the ROS wrapper and the Python rosbag dataloader.
It might be an idea to change the frame of the resulting pose from KISS. So we compute the odometry in the LiDAR frame as before, but then we change the frame to base_link (or whatever) right before publishing into the ROS ecosystem. What do you think? @nachovizzo @benemer PS: this also implies that the local map must be expressed in the base_link frame before being published, which is probably a pain in the ass.
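For illustration, here is a minimal numpy sketch of that frame change, assuming a known static extrinsic `T_base_lidar`; the names and helper here are illustrative, not the actual wrapper code:

```python
import numpy as np

def lidar_pose_to_base_pose(T_lidar0_lidark: np.ndarray,
                            T_base_lidar: np.ndarray) -> np.ndarray:
    """Re-express a LiDAR-frame odometry pose as a base_link pose.

    T_lidar0_lidark: 4x4 pose of the LiDAR at time k w.r.t. the LiDAR at time 0
                     (what the odometry estimates internally).
    T_base_lidar:    4x4 static extrinsic mapping LiDAR coordinates to base_link.
    Returns the 4x4 pose of base_link at time k w.r.t. base_link at time 0.
    """
    # Conjugate the relative motion with the extrinsic calibration.
    return T_base_lidar @ T_lidar0_lidark @ np.linalg.inv(T_base_lidar)
```

The local map would need the same change of frame before being published, which is the painful part mentioned above.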
I agree that de-skewing and downsampling should happen in the sensor frame. We had a look at moving these outside of the registration as a sort of pre-processing step, but this becomes quite messy since we need, for example, the previous poses for de-skewing.
Thanks for the feedback, guys. I'm trying to find an alternative solution to this problem and will be back soon'ish!
@tizianoGuadagnino @benemer ready for another round of reviews ...
Should these changes to the base-frame logic apply to the core library rather than the ROS wrappers? For instance, I would like to run the Python pipeline to process bags offline. If it were made possible to support this via the C++ or Python API, these changes to the ROS wrapper may be redundant. (I'm still learning about this library, so let me know if I'm off base here.)
Very good observation 👏. Indeed, this makes sense. What doesn't seem plausible to me (but maybe I'm not thinking big enough) is why a user would be playing with the Python pipeline in such scenarios; if you have a use case, please comment. We always tried to keep the core library as simple and as application-independent as possible. The ROS wrappers are an example of an application that consumes the core library, and that's why I always liked my colleagues' suggestion of not interfering with the clouds: KISS sees and processes everything in an egocentric world; the only thing it knows is the LiDAR. Any other ideas? Thanks for commenting :)
The use case I am considering is offline bag analysis for a vehicle with multiple LiDARs. In my workflow, I need to create a fused point cloud over an entire trajectory using three LiDARs. Because this analysis is done offline, the Python pipeline is superior to a pipeline using ROS transport for two reasons:
That said, I appreciate the disciplined design decision that KISS operates in an egocentric world, using only the LiDAR. It is not too costly to convert all three LiDARs into a common frame prior to feeding a bag to KISS. (As an aside, my project's needs for registering the point cloud to GPS coordinates and for accommodating imprecisely known sensor extrinsics might end up pulling me away from KISS-ICP and toward a more general optimization framework, though I would prefer to keep it simple.)
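A minimal sketch of that pre-merge workaround, assuming the extrinsics from each LiDAR into the common frame are known; the function names are illustrative and not part of the KISS-ICP API:

```python
import numpy as np

def to_common_frame(points: np.ndarray, T_common_lidar: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid transform to an (N, 3) point cloud."""
    return points @ T_common_lidar[:3, :3].T + T_common_lidar[:3, 3]

def fuse_scans(scans: list, extrinsics: list) -> np.ndarray:
    """Concatenate per-LiDAR scans (each an (N_i, 3) array) into one fused cloud."""
    return np.vstack([to_common_frame(pts, T) for pts, T in zip(scans, extrinsics)])
```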
Approved from my side :)
This will make KISS-ICP work ego-centric by default.
And fix a small bug: the order of the transformation was reversed before, so we were obtaining base2cloud instead of cloud2base. Since we multiply by it on both sides, we can't really see the difference, but it was conceptually wrong.
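To illustrate the bug (with made-up numbers and names, not the wrapper code): composing the tf chain in the wrong order yields the inverse transform, and because the pose is multiplied by it on both sides, the resulting trajectory still looks plausible.

```python
import numpy as np

def make_transform(t) -> np.ndarray:
    """Build a 4x4 transform with identity rotation and translation t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Intended extrinsic: maps cloud (LiDAR) coordinates into the base frame.
T_base_cloud = make_transform([0.5, 0.0, 1.2])
# What the old composition order effectively produced: base2cloud.
T_cloud_base = np.linalg.inv(T_base_cloud)

# Conjugating the odometry pose with either matrix still yields a rigid,
# self-consistent trajectory, which is why the bug was hard to spot; only
# T_base_cloud, however, expresses the pose in the base frame.
```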
* Build system changes for tf fix
* Modify params for tf fix
* Add ROS 1 tf fixes similar to ROS 2
* Update rviz config
* Remove unused debug publishers
* Remove unnecessary smart pointers
* Update ROS 1 to match ROS 2 changes
Fixing the CI now is a big pain.
Looks good from my side.
@tizianoGuadagnino do you have any concerns, or may I merge it?
I'm preparing the new release, so I'm gonna merge it myself ;) Thanks to everybody!
Description
These are the required changes to use KISS-ICP as an odometry source in multi-sensor scenarios. Although the ticket is about the comparison, to have the odometry in the `base_footprint` coordinate frame, we need these changes. This PR also relates to and fixes #174.
Tf tree fix
So the tf logic was a total disaster, and I hope to fix it with this PR. The basic idea is as follows: if the user specifies a `base_frame`, in our case `base_footprint` (but it could be `base_link`), then all the point clouds must be expressed in that coordinate frame; for this reason, I changed the logic to spit out the odometry as seen through the `base_frame`'s eyes. If the user does not specify any particular `base_frame`, I interpret that as running a LiDAR-only system (such as a dataset, etc.), and I keep the original `frame_id` from the `PointCloud2` message.
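A minimal sketch of that decision logic, assuming a tf-style lookup helper; the helper and names are hypothetical, not the actual ROS wrapper code:

```python
import numpy as np

def pose_to_publish(pose_lidar: np.ndarray, cloud_frame_id: str,
                    base_frame: str, lookup_extrinsic):
    """Return (pose, frame_id) to publish, following the base_frame logic.

    pose_lidar:       4x4 odometry pose estimated in the LiDAR frame.
    cloud_frame_id:   frame_id of the incoming PointCloud2 message.
    base_frame:       user-configured base frame, or "" when not set.
    lookup_extrinsic: hypothetical callable returning the 4x4 transform that
                      maps cloud_frame_id coordinates into base_frame (e.g. via tf2).
    """
    if not base_frame:
        # LiDAR-only system: keep the original frame_id from the message.
        return pose_lidar, cloud_frame_id
    T_base_cloud = lookup_extrinsic(cloud_frame_id, base_frame)
    # Change of frame: express the odometry through the base_frame's eyes.
    return T_base_cloud @ pose_lidar @ np.linalg.inv(T_base_cloud), base_frame
```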
Pending changes
Acknowledgments
This fix was made possible thanks to Dexory