
Solved bug with incompatible QoS policies #276

Closed
wants to merge 2 commits

Conversation

@tudoroancea (Contributor)

Public-Facing Changes

None

Description

I changed 3 files for 3 different reasons:

  1. CMakeLists.txt: fixed a build issue where websocketpp was not found properly. The issue was also mentioned in another PR; unlike that one, I did not add /usr/local/include, which is not cross-platform.
  2. ros2_foxglove_bridge/src/parameter_interface.cpp: line 97 raised an error when compiling on macOS. My guess is that the default clang compilation flags don't allow implicit calls to constructors (see the sketch after this list).
  3. ros2_foxglove_bridge/src/ros2_foxglove_bridge.cpp: this solves the following issue.
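
For point 2, a minimal self-contained sketch of how this kind of clang error typically arises (the Parameter type below is hypothetical, not the actual code from parameter_interface.cpp): a constructor marked explicit cannot be invoked through an implicit conversion, so the call has to be spelled out.

#include <string>
#include <vector>

struct Parameter {
  // 'explicit' forbids implicit std::string -> Parameter conversions.
  explicit Parameter(const std::string& name) : name_(name) {}
  std::string name_;
};

int main() {
  std::vector<Parameter> params;
  std::string name = "position";
  // params.push_back(name);          // error: no implicit conversion from
  //                                  // std::string to Parameter
  params.push_back(Parameter(name));  // OK: constructor invoked explicitly
  return 0;
}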

@achim-k (Collaborator) left a comment:

Instead of submitting all changes in one PR, could you submit a separate PR for each logically independent change?

Comment on lines +32 to +33
find_path(WEBSOCKETPP_INCLUDE_DIR NAMES websocketpp/websocketpp.hpp)
message(STATUS "WEBSOCKETPP_INCLUDE_DIR: " ${WEBSOCKETPP_INCLUDE_DIR})
achim-k (Collaborator):

This should not be necessary; WEBSOCKETPP_INCLUDE_DIR should already be populated by find_package(websocketpp REQUIRED) (which should fail otherwise).

tudoroancea (Contributor, Author):

For some reason, when building ros-foxglove-bridge with a ROS 2 installation from RoboStack (in a conda env), with CMake/clang installed in the same env and websocketpp installed globally via Homebrew or apt, this variable was not populated correctly. However, if I install websocketpp in the conda env, it works just fine.
I will try to investigate the bug further.

achim-k (Collaborator):

You could also look into adapting the RoboStack foxglove_bridge patch. That patch might need some updating as it's relatively old, but building on macOS worked back then.

@@ -81,6 +83,7 @@ target_include_directories(foxglove_bridge_base
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/foxglove_bridge_base/include>
$<INSTALL_INTERFACE:include>
)
target_include_directories(foxglove_bridge_base SYSTEM PUBLIC ${WEBSOCKETPP_INCLUDE_DIR})
achim-k (Collaborator):

No need for a new target_include_directories call; you can add it to the one above.

Comment on lines +497 to +530
// If there are no other publishers, we can't determine anything on reliability or durability,
// so we keep the system default.
if (publisherInfo.size() == 0) {
  RCLCPP_WARN(this->get_logger(),
              "No publishers on topic '%s'. Falling back to QoSReliabilityPolicy.RELIABLE as it "
              "will connect to all publishers",
              topic.c_str());
} else {
  // If all endpoints are reliable, ask for reliable
  if (reliabilityReliableEndpointsCount == publisherInfo.size()) {
    qos.reliable();
  } else {
    if (reliabilityReliableEndpointsCount > 0) {
      RCLCPP_WARN(
        this->get_logger(),
        "Some, but not all, publishers on topic '%s' are offering QoSReliabilityPolicy.RELIABLE. "
        "Falling back to QoSReliabilityPolicy.BEST_EFFORT as it will connect to all publishers",
        topic.c_str());
    }
    qos.best_effort();
  }

  // If all endpoints are transient_local, ask for transient_local
  if (durabilityTransientLocalEndpointsCount == publisherInfo.size()) {
    qos.transient_local();
  } else {
    if (durabilityTransientLocalEndpointsCount > 0) {
      RCLCPP_WARN(this->get_logger(),
                  "Some, but not all, publishers on topic '%s' are offering "
                  "QoSDurabilityPolicy.TRANSIENT_LOCAL. Falling back to "
                  "QoSDurabilityPolicy.VOLATILE as it will connect to all publishers",
                  topic.c_str());
    }
    qos.durability_volatile();
  }
}
achim-k (Collaborator):

I think here you could just add !publisherInfo.empty() && to the if conditions.

if (!publisherInfo.empty() && reliabilityReliableEndpointsCount == publisherInfo.size()) {

} else {
  // best effort (compatible with all reliability QoS policies)
}

if (!publisherInfo.empty() && durabilityTransientLocalEndpointsCount == publisherInfo.size()) {

} else {
  // durability_volatile (compatible with all durability QoS policies)
}
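
A fuller sketch of that suggestion, with the warnings folded into the else branches (variable names and log messages are taken from the diff above; this assumes the surrounding subscription setup in ros2_foxglove_bridge.cpp and is an illustration, not the final merged code):

// Request RELIABLE only when every known publisher offers it; otherwise
// fall back to BEST_EFFORT, which is compatible with all reliability policies.
if (!publisherInfo.empty() && reliabilityReliableEndpointsCount == publisherInfo.size()) {
  qos.reliable();
} else {
  if (reliabilityReliableEndpointsCount > 0) {
    RCLCPP_WARN(
      this->get_logger(),
      "Some, but not all, publishers on topic '%s' are offering QoSReliabilityPolicy.RELIABLE. "
      "Falling back to QoSReliabilityPolicy.BEST_EFFORT as it will connect to all publishers",
      topic.c_str());
  }
  qos.best_effort();
}

// Request TRANSIENT_LOCAL only when every known publisher offers it; otherwise
// fall back to VOLATILE, which is compatible with all durability policies.
if (!publisherInfo.empty() && durabilityTransientLocalEndpointsCount == publisherInfo.size()) {
  qos.transient_local();
} else {
  if (durabilityTransientLocalEndpointsCount > 0) {
    RCLCPP_WARN(this->get_logger(),
                "Some, but not all, publishers on topic '%s' are offering "
                "QoSDurabilityPolicy.TRANSIENT_LOCAL. Falling back to "
                "QoSDurabilityPolicy.VOLATILE as it will connect to all publishers",
                topic.c_str());
  }
  qos.durability_volatile();
}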

tudoroancea (Contributor, Author):

Yes, you are right. I had just added a separate if condition for the RCLCPP_WARN call; I will remove it in the next PR.

@tudoroancea (Contributor, Author)

I am not sure why the CI failed, though. It also failed on the first PR I created: #277

@achim-k (Collaborator) commented Nov 29, 2023

> I am not sure why the CI failed, though. It also failed on the first PR I created: #277

Build failures are related to osrf/docker_images#697. It might take a bit until the base Docker images are rebuilt to include the updated key. If this doesn't happen, we will have to do that manually in our ROS1/ROS2 Dockerfiles.

@achim-k (Collaborator) commented Dec 12, 2023

@tudoroancea are you still working on this?

@tudoroancea (Contributor, Author)

Yes @achim-k, I was just waiting for #277 to be closed. I have now opened #278 for the third point.

@achim-k (Collaborator) commented Dec 12, 2023

@tudoroancea since both mentioned PRs are in, can this be closed now?

@mikaelarguedas

> Build failures are related to osrf/docker_images#697. It might take a bit until the base Docker images are rebuilt to include the updated key. If this doesn't happen, we will have to do that manually in our ROS1/ROS2 Dockerfiles.

FYI the ROS images on Docker Hub have been rebuilt with the updated GPG key.
More details at osrf/docker_images#697 (comment)

achim-k closed this Dec 14, 2023