GA-DDPG

[website, paper]

Installation

  1. Setup (Ubuntu 16.04 or above, CUDA 10.0 or above, Python 2.7 / 3.6). Clone the repository with submodules (a sanity-check sketch follows this list):
     git clone https://github.com/liruiw/GA-DDPG.git --recursive

     • (Required for Training) Install the OMG submodule and reuse its conda environment.
     • (Docker) See OMG Docker for details.
     • (Demo) Install GA-DDPG inside a new conda environment:
       conda create --name gaddpg python=3.6.9
       conda activate gaddpg
       pip install -r requirements.txt

  2. Install PointNet++.

  3. Download the environment data:
     bash experiments/scripts/download_data.sh
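After installation, a quick sanity check can confirm that PyTorch sees the GPU before any training run. This is a minimal sketch, not one of the repo's scripts; it only assumes the PyTorch version pinned in requirements.txt is installed.

    # sanity_check.py -- minimal environment check (not part of GA-DDPG itself)
    import torch

    print("torch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())  # expect True on a CUDA 10.0+ machine
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))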

Pretrained Model Demo

  1. Download the pretrained models (a checkpoint-inspection sketch follows this list):
     bash experiments/scripts/download_model.sh
  2. Run the demo model test:
     bash experiments/scripts/test_demo.sh

  Example 1 | Example 2 (demo rollout videos)
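To poke at a downloaded checkpoint directly, plain PyTorch is enough. The sketch below is illustrative only: the file name under output/ is a hypothetical placeholder, and the keys it prints depend on how the model was saved.

    # Illustrative: inspect a downloaded checkpoint with plain PyTorch.
    import torch

    # "output/demo_model/agent.pth" is a hypothetical path -- substitute a
    # real file downloaded by download_model.sh.
    ckpt = torch.load("output/demo_model/agent.pth", map_location="cpu")
    if isinstance(ckpt, dict):
        print(list(ckpt.keys()))  # e.g. network state_dicts and training metadata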

Save Data and Offline Training

  1. Download the example offline data:
     bash experiments/scripts/download_offline_data.sh
     The .npz dataset (a saved replay buffer) can be found in data/offline_data and can be loaded for training; it contains several deprecated attributes (see the loading sketch after this list). The image version of the offline buffer can be found here.
  2. To save extra GPUs for online rollouts, use the offline training script:
     bash ./experiments/scripts/train_offline.sh bc_aux_dagger.yaml BC
  3. To save a dataset:
     bash ./experiments/scripts/train_online_save_buffer.sh bc_save_data.yaml BC
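Because the offline dataset is a saved replay buffer in .npz format, it can be opened with plain NumPy. A minimal sketch: the file name below is hypothetical (use any file in data/offline_data), and the array names printed are whatever the buffer was saved with.

    # Illustrative: peek inside a saved replay buffer (.npz).
    import numpy as np

    # "buffer.npz" is a hypothetical name -- use any file in data/offline_data.
    buf = np.load("data/offline_data/buffer.npz", allow_pickle=True)
    for key in buf.files:  # .files lists the arrays stored in the archive
        print(key, buf[key].shape)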

Online Training and Testing

  1. We use ray for parallel rollout and training. The training scripts might require adjustment for your local machine; see config.py for notes. (A minimal ray sketch follows this list.)
  2. Train online:
     bash ./experiments/scripts/train_online_visdom.sh td3_critic_aux_policy_aux.yaml DDPG
     Use visdom and tensorboard to monitor training.
  3. Test on YCB objects:
     bash ./experiments/scripts/test_ycb.sh demo_model
     Replace demo_model with your trained model. Logs and videos are saved to output_misc.
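For intuition about the ray setup in item 1, the pattern is a pool of remote rollout workers feeding a central learner. The sketch below is a generic ray example of that pattern under assumed names (RolloutWorker, rollout); the repo's actual setup lives in core/trainer.

    # Generic ray rollout pattern (illustrative; not GA-DDPG's trainer code).
    import ray

    ray.init()  # the training scripts size resources to the local machine

    @ray.remote
    class RolloutWorker:  # assumed name, for illustration
        def rollout(self, policy_version):
            # ...run one episode with the current policy, return transitions...
            return {"version": policy_version, "transitions": []}

    workers = [RolloutWorker.remote() for _ in range(4)]
    # Rollouts execute in parallel; the learner would consume these transitions.
    results = ray.get([w.rollout.remote(0) for w in workers])
    print(len(results), "rollouts collected")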

Note

  1. Check out core/test_realworld_ros_final.py for an example of real-world usage.
  2. Related works: OMG, ACRONYM, 6DGraspNet, 6DGraspNet-Pytorch, ContactGraspNet, Unseen-Clustering.
  3. To use the full ACRONYM dataset with ShapeNet meshes, follow ACRONYM to download the meshes and grasps, and follow OMG-Planner to process and save them in /data. filter_shapenet.json can then be used for training (see the sketch after this list).
  4. Please use the GitHub issue tracker to report bugs. For other questions, contact Lirui Wang.
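For item 3, the object index is a plain JSON file. A hedged sketch of reading it, assuming filter_shapenet.json sits under experiments/object_index (see the file structure below); its exact schema is an assumption, so inspect the loaded object before use.

    # Illustrative: load an object index such as filter_shapenet.json.
    import json

    # Assumed location, based on the experiments/object_index folder below.
    with open("experiments/object_index/filter_shapenet.json") as f:
        index = json.load(f)
    print(type(index))  # likely a list or dict of object names; verify the schema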

File Structure

├── ...
├── GADDPG
│   ├── data                            # training data
│   │   ├── grasps                      # grasps from the ACRONYM dataset
│   │   ├── objects                     # object meshes, sdf, urdf, etc.
│   │   ├── robots                      # robot meshes, urdf, etc.
│   │   └── gaddpg_scenes               # test scenes
│   ├── env                             # environment-related code
│   │   ├── panda_scene                 # environment and task
│   │   └── panda_gripper_hand_camera   # Franka Panda with gripper and camera
│   ├── OMG                             # expert planner submodule
│   ├── experiments                     # experiment scripts
│   │   ├── config                      # hyperparameters for training, testing and environment
│   │   ├── scripts                     # main running scripts
│   │   ├── model_spec                  # network architecture spec
│   │   ├── cfgs                        # experiment config and hyperparameters
│   │   └── object_index                # object indexes
│   ├── core                            # agents and learning
│   │   ├── train_online                # online training
│   │   ├── train_test_offline          # testing and offline training
│   │   ├── network                     # network architecture
│   │   ├── test_realworld_ros_final    # real-world script example
│   │   ├── agent                       # main agent code
│   │   ├── replay_memory               # replay buffer
│   │   ├── trainer                     # ray-related training setup
│   │   └── ...
│   ├── output                          # trained models
│   ├── output_misc                     # logs and videos
│   └── ...
└── ...

Citation

If you find GA-DDPG useful in your research, please consider citing:

@inproceedings{wang2021goal,
  author    = {Lirui Wang and Yu Xiang and Wei Yang and Arsalan Mousavian and Dieter Fox},
  title     = {Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds},
  booktitle = {The Conference on Robot Learning (CoRL)},
  year      = {2021}
}

License

GA-DDPG is licensed under the MIT License.