First, install Conda by following the instructions on the Conda or Miniconda website (we use Miniconda here):
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
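Optionally, you can confirm that the installation succeeded before initializing:
~/miniconda3/bin/conda --version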
After installing, initialize your newly installed Miniconda. The following commands initialize it for bash and zsh shells:
~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh
To activate the changes, restart your shell or run:
source ~/.bashrc
source ~/.zshrc
Create a new Conda environment by running:
conda create -n imitation-juicer python=3.8 -y
Activate the environment by running:
conda activate imitation-juicer
Once the environment is created and activated, make some compatibility changes to it by running:
pip install setuptools==65.5.0
pip install --upgrade pip wheel==0.38.4
pip install termcolor
Download the IsaacGym package from the IsaacGym website by following these steps (also refer to the FurnitureBench installation instructions):
- Click "Join now" and log into your NVIDIA account.
- Click "Member area".
- Read and check the box for the license agreement.
- Download the "Isaac Gym - Ubuntu Linux 18.04 / 20.04 Preview 4 release" package.
Once the zipped file is downloaded, move it to the desired location and unzip it by running:
tar -xzf IsaacGym_Preview_4_Package.tar.gz
Now, you can install the IsaacGym package by navigating to the isaacgym directory and running:
pip install -e python --no-cache-dir --force-reinstall
Note: The --no-cache-dir and --force-reinstall flags are used to avoid installation issues we encountered.
Note: Please ignore Pip's notice "[notice] To update, run: pip install --upgrade pip", as the current version of Pip is necessary for compatibility with the codebase.
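To check that IsaacGym installed correctly, you can try importing it; this should exit without errors:
python -c "import isaacgym"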
Tip: The documentation for IsaacGym is located inside the docs directory in the unzipped folder and is not available online. You can open the index.html file in your browser to access it.
You can now safely delete the downloaded zipped file and navigate back to the root directory for your project.
To allow for data collection with the SpaceMouse, among other things, we use a custom fork of the FurnitureBench code. The fork is included in this codebase as a submodule. To install the FurnitureBench package, first run:
git clone --recursive [email protected]:ankile/imitation-juicer.git
Note: If you forgot to clone the submodule, you can run git submodule update --init --recursive to fetch it.
Then, install the FurnitureBench package by running:
cd imitation-juicer/furniture-bench
pip install -e .
To test the installation of FurnitureBench, run:
python -m furniture_bench.scripts.run_sim_env --furniture one_leg --scripted
This should open a window with the simulated environment and the robot in it.
If you encounter the error ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory, this might be remedied by adding the Conda environment's library path to the LD_LIBRARY_PATH environment variable. This can be done by, e.g., running:
export LD_LIBRARY_PATH=YOUR_CONDA_PATH/envs/YOUR_CONDA_ENV_NAME/lib
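If the environment is currently active, Conda exposes its path as $CONDA_PREFIX, so an equivalent command that also preserves any existing value is:
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH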
Finally, install the ImitationJuicer package by running:
cd ..
pip install -e .
To make data collection with the SpaceMouse possible, you need to install the SpaceMouse driver:
sudo apt install libspnav-dev spacenavd
Then, the SpaceMouse driver needs to run in the background. To start the driver, run:
sudo systemctl start spacenavd
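You can verify that the daemon is running with:
systemctl status spacenavd
To have it start automatically at boot, you can also run sudo systemctl enable spacenavd.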
Depending on which parts of the codebase you want to run, you may need to install additional dependencies. In particular, the different vision encoders may require extra packages. To install the R3M or VIP encoder, respectively, run:
pip install -e imitation-juicer/furniture-bench/r3m
pip install -e imitation-juicer/furniture-bench/vip
The Spatial Softmax encoder and BC_RNN policy require the robomimic package to be installed:
git clone https://github.com/ARISE-Initiative/robomimic.git
cd robomimic
pip install -e .
We provide a Google Drive folder that contains a zip file with the raw data and a zip file with the processed data. Download the data.
The data files can be unzipped by running:
tar -xzvf imitation-juicer-data-raw.tar.gz
tar -xzvf imitation-juicer-data-processed.tar.gz
Then, for the code to know where to look for the data, set the environment variables DATA_DIR_RAW and DATA_DIR_PROCESSED to the paths of the raw and processed data directories, respectively. This can be done by running the following lines or adding them to your shell configuration file (e.g., ~/.bashrc or ~/.zshrc):
export DATA_DIR_RAW=/path/to/raw-data
export DATA_DIR_PROCESSED=/path/to/processed-data
In the above example, the folders contained in the two zipped files, raw and processed, should be placed immediately inside the above directories, e.g., /path/to/raw-data/raw and /path/to/processed-data/processed.
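For illustration, using the directory layout referenced later in this README, raw teleop demonstrations for the one_leg task would then end up under:
/path/to/raw-data/raw/sim/one_leg/teleop/low/success/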
All parts of the code (data collection, training, evaluation rollout storage, data processing, etc.) use these environment variables to locate the data.
Note: The code uses the directory structure in the folders to locate the data. If you change the directory structure, you may need to update the code accordingly.
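For reference, here is a minimal sketch of how code can resolve these variables; the helper below is illustrative, not taken from the codebase:
# Illustrative sketch: resolve a data directory from the environment,
# failing early with a clear message if the variable is unset.
import os
from pathlib import Path

def data_dir(var):
    try:
        return Path(os.environ[var])
    except KeyError:
        raise RuntimeError(f"Please set the {var} environment variable")

raw_root = data_dir("DATA_DIR_RAW") / "raw"  # e.g., /path/to/raw-data/raw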
This README outlines the workflow for collecting demonstrations, annotating them, augmenting trajectories, training models, and evaluating the trained models. Below are the steps involved in the process.
The instructions below assume that you have set environment variables for the raw data directory (DATA_DIR_RAW) and the processed data directory (DATA_DIR_PROCESSED). The raw data directory contains the raw demonstration data as one .pkl file per trajectory (possibly .pkl.xz if compressed, which is handled automatically), while the processed data directory contains .zarr files with the processed data ready for training, with multiple trajectories per dataset.
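If you want to inspect a raw trajectory directly, a minimal sketch looks like the following; the filename is hypothetical, and the structure of the unpickled object depends on the collection code, so check the actual files:
# Illustrative sketch for loading one raw trajectory (.pkl or .pkl.xz).
import lzma
import pickle
from pathlib import Path

def load_trajectory(path):
    path = Path(path)
    # Compressed trajectories end in .xz; use a plain open otherwise.
    opener = lzma.open if path.suffix == ".xz" else open
    with opener(path, "rb") as f:
        return pickle.load(f)

traj = load_trajectory("demo.pkl.xz")  # hypothetical filename
print(type(traj))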
To collect data, start by invoking the simulated environment. Input actions are recorded using the 3Dconnexion SpaceMouse. The source code and command-line arguments are available in src/data_collection/teleop.py. An example command for collecting demonstrations for the one_leg task is:
python -m src.data_collection.teleop --furniture one_leg --num-demos 10 --randomness low
Optionally, you can add the flag --save-failure to also store failed trajectories, and the flag --no-ee-laser to remove the red laser from the end-effector in the viewer (it is not rendered in the camera views either way).
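For example, to also keep failed trajectories and hide the laser:
python -m src.data_collection.teleop --furniture one_leg --num-demos 10 --randomness low --save-failure --no-ee-laser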
Demonstrations are saved as .pkl files at:
$DATA_DIR_RAW/raw/sim/one_leg/teleop/low/success/
To collect data, control the robot with the SpaceMouse. To store an episode and reset the environment, press t. To discard an episode, press n. To "undo" actions, press b. To toggle recording on and off, use c and p, respectively.
Before trajectory augmentation, demos must be annotated at bottleneck states. Use src/data_collection/annotate_demo.py for this purpose. Here's how to invoke the tool:
python -m src.data_collection.annotate_demo --furniture one_leg --rest-of-arguments-tbd
Use k and j to navigate frames, and l and h for faster navigation. Press space to mark a frame and u to undo a mark. Press s to save and move to the next trajectory.
After annotation, use src/data_collection/backward_augment.py to generate counterfactual snippets. Example command:
python -m src.data_collection.backward_augment --furniture one_leg --randomness low --demo-source teleop
Optionally, adding the flag --no-filter-pickles skips the step that filters for annotated trajectories, which speeds up the process when all trajectories in the directory have already been annotated.
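For example:
python -m src.data_collection.backward_augment --furniture one_leg --randomness low --demo-source teleop --no-filter-pickles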
New demonstrations are stored at:
$DATA_DIR_RAW/raw/sim/one_leg/augmentation/low/success/
Train models using src/train/bc.py. We use Hydra and OmegaConf for hyperparameter management. Make sure you are authenticated with WandB before starting.
To train for the one_leg task:
python -m src.train.bc +experiment=image_baseline furniture=one_leg
For a debug run, add dryrun=true. For rollouts during training, add rollout=rollout.
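These overrides can be combined; for example, a debug run with rollouts enabled:
python -m src.train.bc +experiment=image_baseline furniture=one_leg dryrun=true rollout=rollout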
Evaluate trained models with src/eval/evaluate_model.py. For example:
python -m src.eval.evaluate_model --run-id entity/project/run-id --furniture one_leg --n-envs 10 --n-rollouts 10 --randomness low
Optionally, adding the flag --save-rollouts will store the rollout trajectories in the raw data directory, and adding --wandb will write the success rates back to the WandB run. When using the --wandb option, you can filter for finished runs with --run-state finished and decide how to handle runs that have already been evaluated with, e.g., --if-exists append (other options are overwrite and skip).
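Putting these together, a full evaluation that saves rollouts and reports results back to WandB might look like:
python -m src.eval.evaluate_model --run-id entity/project/run-id --furniture one_leg --n-envs 10 --n-rollouts 10 --randomness low --save-rollouts --wandb --run-state finished --if-exists append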
If you find the paper or the code useful, please consider citing the paper:
@misc{ankile2024juicer,
  title={JUICER: Data-Efficient Imitation Learning for Robotic Assembly},
  author={Lars Ankile and Anthony Simeonov and Idan Shenfeld and Pulkit Agrawal},
  year={2024},
  eprint={2404.03729},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
}
Project based on the cookiecutter data science project template.