Merge in various Jupyter notebooks from the papers (#20)
* non-corl additions

* corl additions

* refactor the point cloud dataset so it can be swapped out

* partially complete dataset

* replaced with cfg stuff

* enum

* first version of training rlbench

* changes to some downloads

* small changes to pretraining

* pretraining works for all objects now

* initial attempt at ndf pretraining

* add a bunch of commands and configs

* launch script

* amend

* additional stuff

* lots of extra goodies...

* got training and downloading going

* added bottle place

* added remaining things

* add open3d dependency

* add pretraining options

* add in a config file

* add in a config file

* fix bottle config

* fix bottle config

* add taxpose training scripts

* fix logic bug

* add in multilateration

* add new commands for cluster

* new reupload command

* new reupload command

* some nice ablations

* added commands

* state of the repo corl 2023 rebuttal

* bump training time

* decrease

* fixed some of the eval stuff for grasping...

* redid the symmetry

* remove symmetry from mug stuff

* improved inference

* made some changes to make the occlusions random and reduce training time

* drop to fit on gpus

* skip if things get weird

* resubmit for iclr 2024

* some rlbench stuff

* added some rlbench stuff

* additional stuff for training ndf

* remove cruft

* some additional

* some additional

* new mlat config

* mlat

* boost all the training

* add all the configs

* small changes

* small changes

* small changes

* small changes

* reduce workers

* fewer resources

* updated configs

* add new symmetry

* add in symmetry to model

* update the config to use max points

* add puzzle training

* added new tasks

* add evaluations

* ablation

* iclr 2024 rebuttal

* pick and place works well

* eval works

* pytest works

* move rlbench around

* move train

* configs look good

* huge change to the config structure to enable better scaling to rlbench

* nice new training commands

* Adding back NDF dataset support with some feature additions

* Use occlusion_cfg itself to check whether to apply occlusions

* fix rlbench eval

* add in new phasing code

* multi-phase eval first pass

* multi-phase dataset works

* tried out some dumb stuff for compression, couldn't get binary files which diffed properly

* bunch of configs

* some additional configs

* fixed dockerfile and launch

* pick-and-lift task

* added the remaining tasks

* Add training and eval for all the different things

* run some rollouts

* not sure what else

* update commands

* fix...

* update some configs

* fix up all commands, and update a bunch of evals. also add support for a bunch of different stuff

* make things bigger

* update filtering to make it tractable

* fix downsampling to upsample

* cache less

* stop the initial iteration

* bump cache size

* parallel iterate first

* makes sense

* working containerization acceleration

* edits

* update launch to not crash

* dum

* moredum

* it

* it

* examples of how to do everything

* some things done

* some stuff works

* some parts work, but not all stages

* nice viz

* remove dupes

* remove more dupes

* remove more dupes

* remove old commands, duplicated

* deleted more redundant or sloppy stuff

* even more detritus

* remove duplicated sample efficiency evals

* remove duplicated datasets

* remove duplicated test

* remove outdated solve_puzzle and some unused defaults

* remove some taxpose_all

* remove further taxpose_all configs

* remove more extra configs which are dupes

* remove remaining ones

* remove eval

* remove model

* add in reverted Dockerfile stuff

* delete one of the dumb things

* delete the autobot instruction

* don't need the symmetry thing anymore

* no reupload

* no longer need this rlbench

* remove the old eval stuff

* somehow this got removed

* revert whitespace

* rename the remaining stuff

---------

Co-authored-by: oadonca <[email protected]>
Co-authored-by: Octavian Donca <[email protected]>
3 people authored May 14, 2024
1 parent 70a5f1f commit 2d2dc2c
Showing 15 changed files with 4,121 additions and 44 deletions.
149 changes: 149 additions & 0 deletions notebooks/aggregate_rlbench_results.ipynb
@@ -0,0 +1,149 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Aggregate the various rlbench evals into a single dataframe which can be easily copied."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import wandb\n",
"import pandas as pd\n",
"import json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# For the following wandb run ids, download the results tables and present them as a df.\n",
"\n",
"# no colliison checking,\n",
"# run_ids = [\n",
"# \"wrf9hzpf\",\n",
"# \"me0cnlhq\",\n",
"# \"3a3l59af\",\n",
"# \"4vc2ogr4\",\n",
"# \"ca47vr4g\",\n",
"# \"6kfacxc2\",\n",
"# \"yz9f3xv7\",\n",
"# \"4cn8q3ch\",\n",
"# \"ieyeei8l\",\n",
"# \"jxl4v41h\",\n",
"# ]\n",
"\n",
"\n",
"# These are run ids for runs with action repeat, no collision-checking\n",
"run_ids = [\n",
" \"qw5uiwkh\",\n",
" \"g7eftjyc\",\n",
" \"40d2zf1f\",\n",
" \"1few52rl\",\n",
" \"4gt6apgc\",\n",
" \"7tnbl966\",\n",
" \"532p3esh\",\n",
" \"ztfm27yt\",\n",
" \"xuwwkznq\",\n",
" \"3qz1uzpj\",\n",
"]\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get the second run\n",
"def get_results_table(run_id):\n",
" api = wandb.Api()\n",
" json_file = api.artifact(f'r-pad/taxpose/run-{run_id}-results_table:v0').get_entry('results_table.table.json').download()\n",
" with open(json_file) as file:\n",
" json_dict = json.load(file)\n",
" return pd.DataFrame(json_dict[\"data\"], columns=json_dict[\"columns\"])\n",
"\n",
"# Get the config from the run.\n",
"def get_config(run_id):\n",
" api = wandb.Api()\n",
" run = api.run(f'r-pad/taxpose/{run_id}')\n",
" return run.config\n",
"\n",
"df = get_results_table(run_ids[1])\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"display(df)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for run_id in run_ids:\n",
" cfg = get_config(run_id)\n",
" print(f\"Run ID: {run_id}\")\n",
" print(f\"Task: {cfg['task']['name']}\")\n",
" try:\n",
" df = get_results_table(run_id)\n",
" display(df)\n",
" print(\"\\n\\n\")\n",
" except Exception as e:\n",
" print(\"did not complete\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "taxpose_repro",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
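The display loop in the notebook above prints each run's results table separately. A minimal sketch (not part of this commit) of how those per-run tables could be collapsed into the single copy-pasteable dataframe the notebook's title describes, reusing the run_ids, get_results_table, and get_config helpers defined above and assuming every run's config contains a task/name entry:

    rows = []
    for run_id in run_ids:
        try:
            df = get_results_table(run_id)  # per-run results table pulled from wandb
        except Exception:
            continue  # skip runs that never logged a results table
        cfg = get_config(run_id)
        df["run_id"] = run_id
        df["task"] = cfg["task"]["name"]
        rows.append(df)

    combined = pd.concat(rows, ignore_index=True)  # one table across all runs
    combined

Copying combined (or combined.to_csv()) then gives the aggregated view in one step instead of reading ten separate tables.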
127 changes: 127 additions & 0 deletions notebooks/explore_rlbench_dataset.ipynb
@@ -0,0 +1,127 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Explore the RLBench dataset (which we made)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import torch\n",
"import numpy as np\n",
"\n",
"\n",
"from taxpose.datasets.rlbench import RLBenchPointCloudDataset, RLBenchPointCloudDatasetConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dset = RLBenchPointCloudDataset(RLBenchPointCloudDatasetConfig(\n",
" dataset_root=os.path.expanduser(\"/data/rlbench10/\"),\n",
" task_name=\"stack_wine\",\n",
" episodes=list(range(1, 5)),\n",
" phase=\"all\",\n",
"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from rpad.visualize_3d.plots import segmentation_fig\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = dset[19]\n",
"\n",
"segmentation_fig(\n",
" data=np.concatenate([data[\"points_action\"][0], data[\"points_anchor\"][0]], axis=0),\n",
" labels=np.concatenate([np.zeros(data[\"points_action\"].shape[1]), np.ones(data[\"points_anchor\"].shape[1])], axis=0).astype(np.int32),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"np.concatenate([data[\"points_action\"][0], data[\"points_anchor\"][0]], axis=0).shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"np.concatenate([np.zeros(data[\"points_action\"].shape[1]), np.ones(data[\"points_anchor\"].shape[1])], axis=0).shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data[\"phase\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
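As a quick sanity check on the dataset explored above, a short sketch (not part of this commit) that iterates a few samples and prints the per-phase point-cloud sizes; it assumes the dataset implements __len__ and that every item carries the points_action, points_anchor, and phase keys used in the cells above:

    for i in range(min(5, len(dset))):
        data = dset[i]
        n_action = data["points_action"].shape[1]  # points in the action object cloud
        n_anchor = data["points_anchor"].shape[1]  # points in the anchor object cloud
        print(f"sample {i}: phase={data['phase']}, action={n_action} pts, anchor={n_anchor} pts")

This is a convenient way to spot episodes where segmentation produced unexpectedly small clouds before training on them.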