pip install -e .
Generate hanging points using hanging_points_generator.
If you use the YCB objects to generate hanging points, run
run-many 'python generate_hanging_points.py'
which produces a contact_points.json such as
<path to ycb urdf>/019_pitcher_base/contact_points/pocky-2020-08-14-18-23-50-720607-41932/contact_points.json
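The exact schema of contact_points.json is not documented here; assuming it is a JSON object whose "contact_points" key holds a list of [position, rotation-matrix] pairs (key name and structure are assumptions), a generated file can be inspected with a short script:

```python
import json

# Minimal stand-in for a generated file. The structure below is an
# assumption: a "contact_points" list of [position, rotation-matrix] pairs.
example = {
    "contact_points": [
        [[0.01, 0.02, 0.10],
         [[1, 0, 0], [0, 1, 0], [0, 0, 1]]],
    ]
}
with open("contact_points.json", "w") as f:
    json.dump(example, f)

# Load it back and pull out the contact positions.
with open("contact_points.json") as f:
    data = json.load(f)

positions = [p[0] for p in data["contact_points"]]
print(len(positions), positions[0])
```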
Render the training images by loading the contact points and textures. This step can be executed in parallel using eos run-many.
cd hanging_points_cnn/create_dataset
run-many 'python renderer.py -n 200 -i <path to ycb urdf> -s <save dir> --random-texture-path <random_texture_path>' -j 10 -n 10
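If eos run-many is not available, the per-object renderer invocations can be fanned out with a small launcher. A sketch, where the object list is a placeholder and echo stands in for the real renderer.py call:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder object directories; in practice, list the YCB urdf subdirectories.
objects = ["019_pitcher_base", "025_mug"]


def render(obj):
    # Stand-in for the real command:
    #   python renderer.py -n 200 -i <path to ycb urdf>/<obj> -s <save dir> ...
    cmd = ["echo", "rendering", obj]
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()


# Launch the renderer invocations concurrently, one worker per process slot.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(render, objects))
print(results)
```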
Use the visualize-function-points app.
visualize-function-points -h
INFO - 2020-12-28 01:56:36,367 - topics - topicmanager initialized
pybullet build time: Sep 14 2020 02:23:24
usage: visualize-function-points [-h] [--input-dir INPUT_DIR] [--idx IDX]
                                 [--large-axis]

optional arguments:
  -h, --help            show this help message and exit
  --input-dir INPUT_DIR, -i INPUT_DIR
                        input urdf (default: /media/kosuke55/SANDISK-2/meshdata/ycb_eval/019_pitcher_base/pocky-2020-10-17-06-01-16-481902-45682)
  --idx IDX             data idx (default: 0)
  --large-axis, -la     use large axis as visualizing marker (default: False)
INFO - 2020-12-28 01:56:39,356 - core - signal_shutdown [atexit]
(figure: left, hanging points; right, pouring points)
Specify the model config and the directory where the generated data was saved.
cd hanging_points_cnn/learning_scripts
python train_hpnet.py -g 2 -c config/gray_model.yaml -bs 16 -dp <save dir>
./start_server.sh
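The training flags above (-g, -c, -bs, -dp) map to roughly this kind of argument parser; this is only a sketch, and the real train_hpnet.py may define them differently:

```python
import argparse

# Hypothetical parser mirroring the flags shown in the training command.
parser = argparse.ArgumentParser(description="Train HPNet (sketch)")
parser.add_argument("-g", "--gpu", type=str, default="0",
                    help="GPU id to train on")
parser.add_argument("-c", "--config", type=str,
                    default="config/gray_model.yaml",
                    help="model config yaml")
parser.add_argument("-bs", "--batch-size", type=int, default=16)
parser.add_argument("-dp", "--data-path", type=str, required=True,
                    help="directory of the generated dataset")

# Parse the same flags as in the example command above.
args = parser.parse_args(
    ["-g", "2", "-c", "config/gray_model.yaml", "-bs", "16",
     "-dp", "/tmp/dataset"])
print(args.gpu, args.batch_size, args.data_path)
```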
Use the infer-function-points app.
infer-function-points -h
INFO - 2020-12-29 22:41:01,673 - topics - topicmanager initialized
usage: infer-function-points [-h] [--input-dir INPUT_DIR] [--color COLOR]
                             [--depth DEPTH] [--camera-info CAMERA_INFO]
                             [--pretrained_model PRETRAINED_MODEL]
                             [--predict-depth PREDICT_DEPTH] [--task TASK]

optional arguments:
  -h, --help            show this help message and exit
  --input-dir INPUT_DIR, -i INPUT_DIR
                        input directory (default: None)
  --color COLOR, -c COLOR
                        color image (.png) (default: None)
  --depth DEPTH, -d DEPTH
                        depth image (.npy) (default: None)
  --camera-info CAMERA_INFO, -ci CAMERA_INFO
                        camera info file (.yaml) (default: None)
  --pretrained_model PRETRAINED_MODEL, -p PRETRAINED_MODEL
                        Pretrained models (default: /media/kosuke55/SANDISK-2/meshdata/shapenet_pouring_render/1218_mug_cap_helmet_bowl/hpnet_latestmodel_20201219_0213.pt)
  --predict-depth PREDICT_DEPTH, -pd PREDICT_DEPTH
                        predict-depth (default: 0)
  --task TASK, -t TASK  h(hanging) or p(pouring)
                        Not needed if roi size is the same in config. (default: h)
For multiple samples in a directory:
infer-function-points -i <input directory> -p <trained model>
For a specific sample:
infer-function-points -c <color.png> -d <depth.npy> -ci <camera_info.yaml> -p <trained model>
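The per-sample inputs combine a depth image (.npy) with camera intrinsics from camera_info.yaml; a predicted pixel is then lifted to a 3D point in the camera frame with the standard pinhole back-projection. A sketch with made-up intrinsics and a synthetic depth map (in practice fx, fy, cx, cy come from the yaml file and the depth from np.load):

```python
import numpy as np

# Hypothetical intrinsics; in practice read fx, fy, cx, cy from camera_info.yaml.
fx, fy, cx, cy = 525.0, 525.0, 160.0, 120.0

# Stand-in for np.load("depth.npy"): a flat 0.5 m depth map.
depth = np.full((240, 320), 0.5, dtype=np.float32)

# Back-project one pixel (u, v) to a 3D camera-frame point, the way a
# predicted heatmap peak would be lifted to a function point.
u, v = 200, 100
z = float(depth[v, u])
x = (u - cx) * z / fx
y = (v - cy) * z / fy
print(x, y, z)
```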
Download the YCB RGB-D data with annotations and the trained model.
ycb_real_eval
pretrained_model
@inproceedings{takeuchi_icra_2021,
author = {Takeuchi, Kosuke and Yanokura, Iori and Kakiuchi, Yohei and Okada, Kei and Inaba, Masayuki},
booktitle = {ICRA},
month = {May},
title = {Automatic Hanging Point Learning from Random Shape Generation and Physical Function Validation},
year = {2021},
}
@inproceedings{takeuchi_iros_2021,
author = {Takeuchi, Kosuke and Yanokura, Iori and Kakiuchi, Yohei and Okada, Kei and Inaba, Masayuki},
booktitle = {IROS},
month = {September},
title = {Automatic Learning System for Object Function Points from Random Shape Generation and Physical Validation},
year = {2021},
}