Mano parameters #16

Open
anilesec opened this issue Feb 7, 2022 · 8 comments

Comments


anilesec commented Feb 7, 2022

@samarth-robo (issue continuing from an email conversation)

Hi Samarth,

I have one more question; it would be a great help if you could clarify it as well. Thanks in advance!

1. I see there are only 6 mano_fits_**.json files given for each sequence. How can we obtain the MANO parameters of the hand in each frame? More precisely, is there a way to get the MANO axis-angle parameters for each frame in the ContactPose dataset?

Looking forward to hearing from you!

Hi Anil,
 
Those 6 JSON files represent 6 different MANO parameter sizes. The MANO model allows you to represent the hand pose with different parameter sizes through PCA.
 
They don't correspond to frame numbers.
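To illustrate the idea, here is a minimal numpy sketch of a PCA pose parameterization. The basis, mean pose, and the particular sizes (10, 15, 45) are stand-ins, not MANO's actual data, which ships with the MANO model files:

```python
import numpy as np

# Sketch of a PCA pose parameterization (illustrative numbers only).
# MANO's full articulated pose has 45 axis-angle values (15 joints x 3);
# PCA lets you describe it with fewer coefficients.
rng = np.random.default_rng(0)
full_dim = 45                                            # 15 hand joints x 3
components = rng.standard_normal((full_dim, full_dim))   # stand-in for the PCA basis
mean_pose = rng.standard_normal(full_dim)                # stand-in for the mean pose

def pose_from_pca(coeffs):
    """Reconstruct the full 45-dim pose from n PCA coefficients."""
    n = len(coeffs)
    return mean_pose + components[:, :n] @ coeffs

# The mano_fits_*.json files differ only in how many coefficients are kept:
for n in (10, 15, 45):                                   # hypothetical sizes
    pose = pose_from_pca(rng.standard_normal(n))
    print(n, pose.shape)                                 # every choice yields a 45-dim pose
```

Regardless of how many coefficients are kept, the reconstructed pose always has the full dimensionality; fewer coefficients just restrict it to a lower-dimensional subspace of plausible hand poses.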
 
Regardless of which one you choose from those 6, you can get that hand model for each frame in the sequence. Please see the [mano_meshes()](https://github.com/facebookresearch/ContactPose/blob/main/utilities/dataset.py#L300) function of the ContactPose dataset.
 
These meshes are in the object coordinate frame. Then you can use [ContactPose.object_pose()](https://github.com/facebookresearch/ContactPose/blob/main/utilities/dataset.py#L278) to transform them into the camera coordinate frame.
 
You can see a working example of all this in [this demo notebook](https://github.com/facebookresearch/ContactPose/blob/main/rendering.ipynb), where the posed MANO meshes are used to render hand masks in the image.
 
Please use GitHub issues for these questions, so the answers are publicly documented and others can see them later if they have the same questions.
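Assuming object_pose() returns a standard 4x4 rigid-body transform cTo (check the linked source for the exact return type), applying it to object-frame mesh vertices is a one-liner in homogeneous coordinates. The transform and vertex below are made-up values:

```python
import numpy as np

def apply_transform(T, points):
    """Apply a 4x4 rigid-body transform to an (N, 3) array of points."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4) homogeneous
    return (T @ homo.T).T[:, :3]

# Hypothetical cTo: rotate 90 degrees about Z, then translate.
cTo = np.eye(4)
cTo[:3, :3] = np.array([[0.0, -1.0, 0.0],
                        [1.0,  0.0, 0.0],
                        [0.0,  0.0, 1.0]])
cTo[:3, 3] = [0.1, 0.0, 0.5]

verts_o = np.array([[1.0, 0.0, 0.0]])    # a vertex in object coordinates
verts_c = apply_transform(cTo, verts_o)  # now in camera coordinates
print(verts_c)                           # [[0.1 1.  0.5]]
```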


anilesec commented Feb 7, 2022

Thanks, @samarth-robo, for the response.
I think I understand what you mean. This means that the MANO pose and shape parameters are the same for all the frames (as the hand grasp is fixed); however, to get the global orientation of the hand in each frame, we will have to use the rotation component of ContactPose.object_pose().


anilesec commented Feb 7, 2022

I understood how to get the meshes (of the hand) for each frame. But I am interested in the MANO parameters for each frame (the meshes of each frame are trivial for my application). Also, in mano_fits_10.json there are 13 parameters, where the last three parameters correspond to the global orientation of the hand. Does this global orientation correspond to the orientation of the hand in the first frame of the sequence, or is it a random orientation?

Thank you!

@samarth-robo
Collaborator

> Thanks, @samarth-robo, for the response. I think I understand what you mean. This means that the MANO pose and shape parameters are the same for all the frames (as the hand grasp is fixed); however, to get the global orientation of the hand in each frame, we will have to use the rotation component of ContactPose.object_pose().

Correct.


samarth-robo commented Feb 7, 2022

> I understood how to get the meshes (of the hand) for each frame. But I am interested in the MANO parameters for each frame (the meshes of each frame are trivial for my application). Also, in mano_fits_10.json there are 13 parameters, where the last three parameters correspond to the global orientation of the hand. Does this global orientation correspond to the orientation of the hand in the first frame of the sequence, or is it a random orientation?
> Thank you!

No, it is not a random rotation. It is a necessary output of the optimizer, which fits the MANO parameters by minimizing the L2 distance of the 3D joint locations from ground truth.

Here is the chain of transforms if you are curious:

Denote the output dict of ContactPose.mano_params() by mp, and start with a hand vertex m0_p. The coordinate system m0 corresponds to MANO's PCA vertex regressor (an internal detail you can understand by reading the MANO paper).

  1. rotate by the 3 parameters in mp['pose'] -> hand vertex in MANO coordinates m_p
  2. apply mp['hTm'] -> hand vertex in hand coordinates h_p. mp['hTm'] is the inverse of mTc, and mTc is a rigid-body transform I remove before constructing the optimization objective, to make the optimization easier.
  3. apply ContactPose._oTh -> hand vertex in object coordinates o_p. oTh is usually identity, but it can be different if the hand is dynamic w.r.t. the object, which happens in some double-handed grasps.
  4. apply cTo, the output of ContactPose.object_pose() -> hand vertex in camera coordinates c_p

This is just FYI. You don't need to worry about applying steps 1, 2, and 3: ContactPose.mano_meshes() does that for you.
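The four steps above can be sketched in numpy as a chain of homogeneous transforms. All the quantities below (global rotation, hTm, oTh, cTo) are made-up stand-ins for the real per-sequence values, and the axis-angle rotation is implemented with Rodrigues' formula:

```python
import numpy as np

def rodrigues(axis_angle):
    """3-vector axis-angle -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def to_homo(R=np.eye(3), t=np.zeros(3)):
    """Pack a rotation and translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# --- hypothetical stand-ins for the real ContactPose quantities ---
global_rot = np.array([0.0, 0.0, np.pi / 2])  # first 3 values of mp['pose']
hTm = to_homo(t=np.array([0.0, 0.0, 0.1]))    # stand-in for mp['hTm']
oTh = np.eye(4)                               # ContactPose._oTh, usually identity
cTo = to_homo(t=np.array([0.0, 0.0, 0.5]))    # stand-in for ContactPose.object_pose()

m0_p = np.array([1.0, 0.0, 0.0, 1.0])         # hand vertex, homogeneous coords

m_p = to_homo(R=rodrigues(global_rot)) @ m0_p  # 1. MANO coordinates
h_p = hTm @ m_p                                # 2. hand coordinates
o_p = oTh @ h_p                                # 3. object coordinates
c_p = cTo @ o_p                                # 4. camera coordinates
print(c_p[:3])                                 # approximately [0, 1, 0.6]
```

As noted above, steps 1-3 are already done inside ContactPose.mano_meshes(), so in practice only step 4 (cTo) is applied by the user.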


anilesec commented Feb 7, 2022

Thanks for elucidating! I will use the per-frame MANO parameters (with the object pose as the global orientation of each frame) and see how it works.


gs-ren commented Jun 1, 2023

@anilesec Hi, could you share how to get the mano pose for each frame?


anilesec commented Jun 1, 2023

@gs-ren you can find the info here #16 (comment)


gs-ren commented Jun 15, 2023

> @gs-ren you can find the info here #16 (comment)

Could you describe the step 4 process in detail, please? I want to get the pose, 3D joints, and vertices in camera coordinates. Thank you! @anilesec
