diff --git a/index.html b/index.html
new file mode 100644
index 0000000..a0f0470
--- /dev/null
+++ b/index.html
@@ -0,0 +1,300 @@
+ ArtiGrasp: Physically Plausible Synthesis of Bi-Manual Dexterous Grasping and Articulation
+
+
+
+
+

ArtiGrasp: Physically Plausible Synthesis of Bi-Manual Dexterous Grasping and Articulation

+
+ + Hui Zhang1,2*, + + Sammy Christen1*, + + Zicong Fan1,2, + + + Luocheng Zheng1, + +
+ + Jemin Hwangbo3, + + + Jie Song1, + + + Otmar Hilliges1 + +
+
+ 1Department of Computer Science, ETH Zurich, Switzerland +
+ 2Max Planck Institute for Intelligent Systems, Tübingen, Germany +
+ 3Department of Mechanical Engineering, KAIST, Korea +
+ +
+ *Equal Contribution +
+ +
+ Accepted to 3DV 2024 as a Spotlight Presentation
+ +
+ +
+
+
+
+
+
+
+

Video

+
+ +
+

+ ArtiGrasp is a method to synthesize physically plausible bi-manual manipulation. It can generate motion sequences +such as grasping and relocating an object with one or two hands, and opening it to a target articulation angle. +

+
+
+
+
+ +
+
+ +
+
+

Abstract

+
+

+ We present ArtiGrasp, a novel method to synthesize bi-manual hand-object interactions that include grasping and articulation. This task is challenging due to the diversity of global wrist motions and the precise finger control required to articulate objects. ArtiGrasp leverages reinforcement learning and physics simulations to train a policy that controls the global and local hand pose. Our framework unifies grasping and articulation within a single policy guided by a single hand pose reference. Moreover, to facilitate training the precise finger control required for articulation, we present a learning curriculum of increasing difficulty: it starts with single-hand manipulation of stationary objects and continues with multi-agent training that includes both hands and non-stationary objects. To evaluate our method, we introduce Dynamic Object Grasping and Articulation, a task in which an object must be brought into a target articulated pose; it requires grasping, relocation, and articulation. We demonstrate our method's efficacy on this task. We further show that our method can generate motions from noisy hand-object pose estimates produced by an off-the-shelf image-based regressor.

+
+
+
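The two-stage learning curriculum described in the abstract can be sketched as a simple stage schedule. This is a minimal illustration, not the authors' implementation: the stage names, the `advance_at` thresholds, and the `current_stage` helper are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CurriculumStage:
    """One stage of a difficulty-ordered training curriculum (hypothetical)."""
    name: str
    hands: int             # number of hands the policy controls in this stage
    object_stationary: bool  # True: the object base is fixed in the simulator
    advance_at: float      # success rate required to move past this stage

# Increasing difficulty, mirroring the abstract: single-hand manipulation of
# stationary objects first, then multi-agent training with both hands and
# non-stationary objects.
CURRICULUM = [
    CurriculumStage("single_hand_stationary_object", hands=1,
                    object_stationary=True, advance_at=0.8),
    CurriculumStage("two_hands_free_object", hands=2,
                    object_stationary=False, advance_at=1.0),
]

def current_stage(success_rates):
    """Return the first stage whose advancement threshold is not yet met."""
    for stage, rate in zip(CURRICULUM, success_rates):
        if rate < stage.advance_at:
            return stage
    return CURRICULUM[-1]
```

With this schedule, training stays in the single-hand stage until the policy reaches the (assumed) 80% success threshold, then switches to the harder bi-manual stage with a free-moving object.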
+
+
+

BibTeX

+
@inproceedings{zhang2024artigrasp,
+  title={{ArtiGrasp}: Physically Plausible Synthesis of Bi-Manual Dexterous Grasping and Articulation},
+  author={Zhang, Hui and Christen, Sammy and Fan, Zicong and Zheng, Luocheng and Hwangbo, Jemin and Song, Jie and Hilliges, Otmar},
+  booktitle={International Conference on 3D Vision (3DV)},
+  year={2024}
+}
+
+