We have a demo for INAIL after the first year of the project.
I am writing this issue to keep track of the steps needed for the demo and the people responsible for each task.
DEADLINE: 15 DECEMBER
The demo will consist of various sub-demos; I will mark the bare minimum requirements in bold and the optimal results in italics.
Visual Language Navigation
The demo will consist of building a semantic map of the environment. The user will be able to ask the robot to search for something on the map, and the robot will navigate to that location.
The focus of the demo is to show the "AI" in the robot's awareness of the environment and its vocal interaction with the human. In this way the robot will be able to "fetch something" to aid the worker in the warehouse scenario.
Person in charge: @SimoneMic
Robots needed: any
Tasks needed:
Semantic mapping of the environment, live, with ergoCub
Querying of the map...
...using vocal interaction (see the sketch after this list)
Showing the queried location on the map
Navigation to the closest result
Supporting more complex instructions: instead of "go to the chair", use "go to the RED chair" or "go between the window and the table"
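To make the map-query step concrete, here is a minimal sketch of how it could work, assuming the semantic map stores a CLIP embedding and a 2D position for each detected object. `MapEntry` and `query_map` are illustrative names, not existing modules of the project; the vocal front end (speech-to-text) would simply hand the transcribed query string to this function.

```python
# Minimal sketch of the map-query step, assuming each mapped object stores
# a CLIP embedding and a 2D position. MapEntry / query_map are hypothetical
# names, not an existing module of the project.
import numpy as np
from dataclasses import dataclass
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # shared text/image CLIP space

@dataclass
class MapEntry:
    label: str                # e.g. "red chair"
    embedding: np.ndarray     # CLIP image embedding of the detection
    position: tuple           # (x, y) in the map frame

def query_map(entries, query: str, robot_xy, sim_threshold=0.25):
    """Return the matching entry closest to the robot, or None."""
    q = model.encode(query, normalize_embeddings=True)
    matches = []
    for e in entries:
        v = e.embedding / np.linalg.norm(e.embedding)
        if float(q @ v) >= sim_threshold:
            matches.append(e)
    if not matches:
        return None
    # "Navigation to the closest result": pick the nearest match
    return min(matches, key=lambda e: np.hypot(e.position[0] - robot_xy[0],
                                               e.position[1] - robot_xy[1]))
```

The returned position would then be sent as a goal to the navigation stack; ranking by distance directly implements the "navigation to the closest result" item above.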
Teleoperation (metaCub)
The demo will consist of a user teleoperating the robot to perform grasping operations. We can have simple pick-and-place operations on simple objects, and then show the robot "learning" these operations and performing them autonomously, i.e. Behavior Cloning (see the sketch after the task list below).
We can increase the difficulty of the demo with more complex scenarios, like assembling a puzzle.
The focus of the demo is to shed light on the capabilities of the teleoperation system and the "AI" in the grasping actions.
Person in charge: @andrearosasco @steb6
Robots needed: ergoCubSN001 or ergoCubSN002 (ergoCubSN000 hands are too small)
Tasks needed:
metaCub teleoperation
Simple policy definition
Data acquisition on a simple task
Testing on the robot
Collection of more complex tasks and policies
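For reference, this is a minimal sketch of the behavior-cloning step, assuming the teleoperation logs are already stored as (observation, action) tensor pairs. The MLP policy, the dimensions, and the training setup are illustrative assumptions, not the project's actual architecture.

```python
# Minimal behavior-cloning sketch, assuming the teleoperation logs are
# already stored as (observation, action) tensors. The MLP policy and the
# tensor shapes are illustrative, not the project's actual setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

obs_dim, act_dim = 64, 14   # hypothetical: flattened state -> arm/hand command

policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)

def train_bc(observations, actions, epochs=50, lr=1e-3):
    """Fit the policy to demonstrations by regressing teleoperated actions."""
    loader = DataLoader(TensorDataset(observations, actions),
                        batch_size=64, shuffle=True)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, act in loader:
            loss = nn.functional.mse_loss(policy(obs), act)
            opt.zero_grad()
            loss.backward()
            opt.step()

# At demo time the same network replaces the human operator:
#   action = policy(current_observation)
```

This is also why "data acquisition on a simple task" comes before "testing on the robot" in the list above: the quality of the cloned policy is bounded by the quality of the teleoperated demonstrations.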
Few-Shot RGB Action Recognition (?)
todo
Human Avoidance in Narrow Space while Carrying Objects
Person in charge: @vigisushrutha23
Robots needed: ergoCubSN001 or ergoCubSN002
In this demo we will create a simple narrow space using the privacy screens and tall posters, and showcase the robot using bimanual manipulation plus the human-avoidance navigation plugin to avoid oncoming humans. The exact details of this demo are still being worked out for the final version, and the requirements will be updated.
Tasks needed:
Improve bimanual manipulation so that it has no drift
Potentially integrate the full navigation plugin with the existing BT (see the sketch after this list)
Define materials and setup (like the privacy screens and tall posters?)
Practice quick setup of the demo in the robot arena (setup to be performed in under 2 minutes)
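As a starting point for the BT integration item, here is a minimal sketch of a human-avoidance condition node, using py_trees purely for illustration; the actual tree may use a different BT library, and `get_closest_human_distance()` is a hypothetical detector call, not an existing interface.

```python
# Minimal sketch of a human-avoidance condition for the existing BT,
# using py_trees only for illustration. get_closest_human_distance() is a
# hypothetical detector call, not an existing interface of the project.
import py_trees

class HumanTooClose(py_trees.behaviour.Behaviour):
    """Condition node: SUCCESS when a human is within the safety radius."""
    def __init__(self, detector, safety_radius=1.2):
        super().__init__(name="HumanTooClose")
        self.detector = detector
        self.safety_radius = safety_radius

    def update(self):
        dist = self.detector.get_closest_human_distance()  # hypothetical API
        if dist is not None and dist < self.safety_radius:
            return py_trees.common.Status.SUCCESS
        return py_trees.common.Status.FAILURE

# Wiring idea: a selector that prefers the avoidance branch when a human is
# close, and falls back to the normal carry/navigate branch otherwise:
#   root = py_trees.composites.Selector("Root", memory=False)
#   avoid = py_trees.composites.Sequence("Avoid", memory=False)
#   avoid.add_children([HumanTooClose(detector), trigger_avoidance_action])
#   root.add_children([avoid, carry_and_navigate_branch])
```

Putting the condition in a selector ahead of the nominal branch keeps the carrying behavior untouched while letting avoidance preempt it, which should make the integration with the existing BT minimally invasive.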
To the people not mentioned in this issue (@PasMarra, @vigisushrutha23): think about what demo you could propose, if not for this year, then for the next one.
For anybody: feel free to edit the issue to add other tasks, or to expand them.