# D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions
Paper | Video | Project Page
Official code release for the CVPR 2022 paper D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions.
ArtiGrasp - Learns two-handed grasping and articulation of objects.
Full-Body Grasping - Learns full-body grasping and manipulation of objects, including trajectory following.
In the latest commit, RaiSim was updated to the newest version (1.1.6), which makes training D-Grasp more stable and faster (see the RaiSim changelog). If you have an existing checkout, rebuild it from scratch, since the simulation backbone and some of the wrapper files and functions have been updated.
This code was tested with Python 3.8 on Ubuntu 18.04 and 20.04 with an NVIDIA GeForce RTX 2080 Ti. You need about 1.8 GB of disk space, since the repository ships with the full RaiSim physics simulation, into which D-Grasp is integrated.
The D-Grasp related code can be found in the raisimGymTorch subfolder.
As good practice for Python package management, we recommend using virtual environments (e.g., virtualenv or conda) so that packages from different projects do not interfere with each other.
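For instance, a virtualenv-based setup might look like the following (the environment name `dgrasp-env` is arbitrary; Python 3.8 matches the tested configuration):

```shell
# Create an isolated environment for D-Grasp (Python 3.8 was used for testing)
python3 -m venv dgrasp-env

# Activate it before installing any dependencies
. dgrasp-env/bin/activate
```

With conda, the equivalent would be `conda create -n dgrasp python=3.8` followed by `conda activate dgrasp`.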
For installation, follow the steps documented under docs/INSTALLATION.md. Note that you need a valid, free license for the RaiSim physics simulation and an activation key, which you can obtain via this link.
We provide pretrained models to showcase the output of our method. They are stored in this folder. For interactive visualizations, you need to open the Unity executable in the raisimUnity folder.
To visualize the policy for motion synthesis, run the following from the raisimGymTorch folder:

```
python raisimGymTorch/env/envs/dgrasp/runner_motion.py -ao -e 'all_objs' -sd 'pretrained_policies' -w 'full_6000.pt'
```
The visualization should look like this, with randomly sampled objects and sequences. To visualize just a single object, add the flag `-o <obj_id>`.
If you want to run the policy that was trained for a single object, run the following command from the raisimGymTorch folder:

```
python raisimGymTorch/env/envs/dgrasp/runner_motion.py -o 12 -e '021_bleach_dexycb' -sd 'pretrained_policies' -w 'full_3000.pt'
```
Similarly, policies can be run for the other objects. The commands for all pretrained policies are stored in this file. (The missing models will be added over the coming days.)
To train a new policy from scratch with labels from DexYCB (Chao et al., CVPR 2021), run the runner.py file from within the raisimGymTorch folder as follows:

```
python raisimGymTorch/env/envs/dgrasp/runner.py -o <obj_id> -e <exp name> -d <experiment path> -sd <storage folder>
```
where

- `-o` indicates the object id according to DexYCB,
- `-d` is the path where the RaiSim data should be stored, and
- `-sd` indicates the folder in which your current batch of experiments will be stored. By default, experiments are stored in the raisimGymTorch directory in a folder called `data_all`.
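As an illustration of how these flags combine, here is a minimal argparse sketch that mirrors the options described above. It is not the actual parser inside runner.py; the long option names and defaults are assumptions for illustration:

```python
import argparse

# Illustrative parser only -- it mirrors the flags documented above,
# not the real implementation inside runner.py.
parser = argparse.ArgumentParser(description="D-Grasp training flags (sketch)")
parser.add_argument("-o", "--obj_id", type=int, help="object id according to DexYCB")
parser.add_argument("-ao", "--all_objects", action="store_true",
                    help="train a single policy over all objects")
parser.add_argument("-e", "--exp_name", type=str, help="experiment name")
parser.add_argument("-d", "--data_path", type=str, help="path where RaiSim data is stored")
parser.add_argument("-sd", "--storage_dir", type=str, default="data_all",
                    help="folder for the current batch of experiments")

args = parser.parse_args(["-o", "12", "-e", "bleach_train"])
print(args.obj_id, args.exp_name, args.storage_dir)  # 12 bleach_train data_all
```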
If you want to train a single policy over all objects, run the following:

```
python raisimGymTorch/env/envs/dgrasp/runner.py -ao -e <exp name> -d <experiment path> -sd <storage folder> -nr 1 -itr 6001
```
Once you have a trained policy and want to evaluate it, run the following command:

```
python raisimGymTorch/env/envs/dgrasp_test/runner.py -o <obj_id> -e '<exp name>' -w '<policy path>.pt' -d <experiment path> -sd <storage folder>
```
For example, running the following command:

```
python raisimGymTorch/env/envs/dgrasp_test/runner.py -o 12 -e '021_bleach_dexycb' -sd 'pretrained_policies' -w 'full_3000.pt'
```

should yield the following output (different hardware may lead to slight deviations in the displacement values):
```
----------------------------------------------------
object: 12
success: 1.000
disp mean: 0.282
disp std: 0.092
----------------------------------------------------
```
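The metrics in this block are a success rate plus the mean and standard deviation of the object displacement. As a toy illustration of how such numbers aggregate over trials (the per-trial values and units below are made up, not D-Grasp output):

```python
import numpy as np

# Made-up per-trial outcomes: binary success flags and final object displacements.
successes = np.array([1, 1, 1, 1])
displacements = np.array([0.21, 0.35, 0.28, 0.30])

success_rate = successes.mean()
disp_mean = displacements.mean()
disp_std = displacements.std()

print(f"success: {success_rate:.3f}")  # success: 1.000
print(f"disp mean: {disp_mean:.3f}")   # disp mean: 0.285
print(f"disp std: {disp_std:.3f}")     # disp std: 0.050
```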
Note: if you want to evaluate the policy trained over all objects, `-o <obj_id>` can be replaced by `-ao`. For example, evaluating the policy trained on all objects (compare with the main paper) can be run with:

```
python raisimGymTorch/env/envs/dgrasp_test/runner.py -ao -e 'all_objs' -sd 'pretrained_policies' -w 'full_6000.pt'
```
The last rows of the terminal output should look like this (different hardware may lead to slight deviations in the displacement values):
```
----------------------------------------------------
all objects
total success rate: 0.777
disp mean: 4.850
disp std: 9.337
----------------------------------------------------
```
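One plausible way to aggregate per-object results into such a total success rate is trial-weighted averaging. The scheme and the numbers below are illustrative assumptions, not necessarily how dgrasp_test computes it:

```python
# Hypothetical per-object results: object id -> (num_trials, num_successes).
# The ids and counts are made-up illustration values.
results = {
    12: (100, 88),
    15: (100, 72),
    21: (100, 70),
}

total_trials = sum(trials for trials, _ in results.values())
total_successes = sum(succ for _, succ in results.values())
total_success_rate = total_successes / total_trials

print(f"total success rate: {total_success_rate:.3f}")  # total success rate: 0.767
```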
If you want interactive visualizations, you need to open the Unity executable in the raisimUnity folder. The videos of the sequences will be stored within the `raisimUnity/<OS>/Screenshot` folder. Note that this does not work on headless servers.
This command visualizes the experiment in which the support surface is removed to check whether the object slips:

```
python raisimGymTorch/env/envs/dgrasp_test/runner.py -o <obj_id> -e '<exp name>' -w '<policy path>.pt' -d <experiment path> -sd <storage folder> -ev
```
For example, the command:

```
python raisimGymTorch/env/envs/dgrasp_test/runner.py -o 12 -e '021_bleach_dexycb' -sd 'pretrained_policies' -w 'full_3000.pt' -ev
```
should yield the following output:
This command visualizes the experiment where the object is moved to a target 6D pose:

```
python raisimGymTorch/env/envs/dgrasp/runner_motion.py -o <obj_id> -e '<exp name>' -w '<policy path>.pt' -d <experiment path> -sd <storage folder>
```
Our framework can also be used to generate data with customized 6D target goal spaces.
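A 6D target goal space is essentially a set of bounds on the target position and orientation. A minimal sketch of sampling from such a customized goal space follows; the bounds and the position-plus-Euler-angle representation are assumptions for illustration, not the repository's actual format:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical goal-space bounds: target position (x, y, z) in meters
# and target orientation as Euler angles (roll, pitch, yaw) in radians.
pos_low, pos_high = np.array([-0.2, -0.2, 0.3]), np.array([0.2, 0.2, 0.8])
rot_low, rot_high = np.full(3, -np.pi / 6), np.full(3, np.pi / 6)

def sample_target_pose():
    """Sample one random 6D target pose from the goal-space bounds."""
    position = rng.uniform(pos_low, pos_high)
    orientation = rng.uniform(rot_low, rot_high)
    return np.concatenate([position, orientation])

pose = sample_target_pose()
print(pose.shape)  # (6,)
```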
To cite us, please use the following:
```
@inproceedings{christen2022dgrasp,
  title={D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions},
  author={Christen, Sammy and Kocabas, Muhammed and Aksan, Emre and Hwangbo, Jemin and Song, Jie and Hilliges, Otmar},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```
See the following license.