I have one question about densepose result! #74
Same question here.
me 3
@lushihan where..? sorry..
@yagee97 Oh, I mean I have the same questions as yours.
@lushihan oh... i got it now.. I want to solve this question too.
Still no response from the authors :(
Thank you for your reply! What information can I use from the DensePose output? I want to use a person's 2D/3D coordinates from the DensePose result. How and where do I get the 2D/3D coordinates? Thank you! :) Have a good day!
The output of the DensePose head is generated here. You can see that for every detected person bounding box, the output is an IUV tensor of shape [3, H, W]: channel 0 holds the body part index I, and channels 1 and 2 hold the U and V surface coordinates.
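To make the layout concrete, here is a minimal sketch of splitting such an IUV array into its three channels. The array contents below are made up for illustration; only the [3, H, W] layout with I/U/V channels comes from the thread.

```python
import numpy as np

# Hypothetical IUV output for one detected person box, shape [3, H, W]:
# channel 0 = body part index I (0 = background), channels 1/2 = U and V.
H, W = 4, 5
iuv = np.zeros((3, H, W))
iuv[0, 1:3, 1:4] = 2.0     # mark a small region as part 2
iuv[1, 1:3, 1:4] = 0.25    # U values for that region
iuv[2, 1:3, 1:4] = 0.75    # V values for that region

I, U, V = iuv[0], iuv[1], iuv[2]   # split the three channels
mask = I > 0                        # pixels that belong to a person part
print(int(mask.sum()))              # number of foreground pixels -> 6
```

The `mask` is the usual way to pick out only the pixels that carry valid U/V values.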
Oh, really, thank you! I've got it now. So let me ask you one last question: how can I print or store them separately? Thank you very much for your kindness! :)
There are many possibilities for storing the results, for example pickle, numpy, or json. For visualization, I suggest checking the visualization and texture transfer notebooks.
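A small sketch of the three storage options mentioned above, using a made-up per-person result dictionary (the keys and shapes here are illustrative, not the library's own format):

```python
import json
import pickle
import numpy as np

# Hypothetical per-person result: split an IUV array into its channels.
iuv = np.random.rand(3, 64, 48)
result = {"I": iuv[0], "U": iuv[1], "V": iuv[2]}

# Option 1: pickle (keeps numpy arrays as-is)
with open("result.pkl", "wb") as f:
    pickle.dump(result, f)

# Option 2: compressed numpy archive
np.savez_compressed("result.npz", **result)

# Option 3: JSON (arrays must be converted to plain lists first)
with open("result.json", "w") as f:
    json.dump({k: v.tolist() for k, v in result.items()}, f)

# Reload one of them and check the round-trip
loaded = np.load("result.npz")
print(np.allclose(loaded["U"], result["U"]))  # True
```

JSON is the most portable but the bulkiest for dense float arrays; `.npz` is usually the best fit for per-pixel outputs like these.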
I want to use my own image to generate I, U, V and visualize them on the SMPL model. I got the IUV output of shape [3, H, W], where output[0], output[1], and output[2] correspond to I, U, and V respectively. However, when I looked at DensePose-COCO-on-SMPL.ipynb, in demo_dp_single_ann.pkl the I, U, V are vectors (length 125). So my question is: how can I use the [3, H, W] output to do the visualization, or can you provide the code that generates demo_dp_single_ann.pkl?
Same question as @anweiwei: how do we generate that demo_dp_single_ann.pkl file, or do the visualization, given the IUV output for an arbitrary image?
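One plausible way to bridge the gap described above is to sample the dense [3, H, W] map down to sparse per-point I, U, V vectors, similar in shape to the length-125 vectors in the annotation file. This helper is hypothetical; the real demo_dp_single_ann.pkl may be built differently.

```python
import numpy as np

def dense_iuv_to_points(iuv, max_points=125, seed=0):
    """Sample sparse (x, y, I, U, V) points from a dense [3, H, W] IUV map.

    Hypothetical helper for illustration only: it collects foreground
    pixels (I > 0) and subsamples them to at most `max_points` entries.
    """
    I, U, V = iuv[0], iuv[1], iuv[2]
    ys, xs = np.nonzero(I)                 # foreground pixel locations
    if len(xs) > max_points:               # subsample if too many pixels
        idx = np.random.default_rng(seed).choice(
            len(xs), max_points, replace=False)
        xs, ys = xs[idx], ys[idx]
    return xs, ys, I[ys, xs], U[ys, xs], V[ys, xs]

# Toy dense map: a 6x6 block labeled as part 1
iuv = np.zeros((3, 10, 10))
iuv[0, 2:8, 2:8] = 1.0
iuv[1, 2:8, 2:8] = 0.5
iuv[2, 2:8, 2:8] = 0.5
xs, ys, Ip, Up, Vp = dense_iuv_to_points(iuv, max_points=20)
print(len(xs))   # 20
```

The resulting flat vectors can then be fed to the notebook's per-point SMPL visualization in place of the pickled annotation.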
@ingramator @anweiwei Did you manage to get the XYZ from the IUV output? Do share your code. Maybe we can collaborate and find a fix? I have the IUV output from the model but can't make sense of it.
@jaggernaut007 I have this working now; are you still interested in seeing it?
@ingramator Hi, how did you get it working? My IUV output from infer_simple.py doesn't seem to fit well when mapped onto the SMPL model. Could you please share the script?
@kalyo-zjl @jaggernaut007 check the pull request #99; it provides an excellent sample notebook that shows how it's done! I am now trying to work backwards: for instance, how do I map a specific vertex on the SMPL model back to the RGB input image? Does anyone have any ideas?
@ingramator Thank you! |
@ingramator this is not straightforward. What you're after is 3D reconstruction based on 2D manifold coordinates. This can be done through reprojection-error minimization for the visible parts. You could look into bundle adjustment; Ceres from Google can be a good starting point.
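To illustrate the reprojection-error idea behind bundle adjustment on the smallest possible case, here is a toy sketch that recovers one 3D point from its 2D projections in two known cameras. All camera matrices and points here are made up; real bundle adjustment also optimizes the cameras and many points jointly.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project homogeneous 3D point X (length 4) with 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]

# Two toy cameras: identity rotation, second one shifted along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 3.0, 1.0])             # ground-truth point
obs = [project(P1, X_true), project(P2, X_true)]    # observed 2D points

def residuals(X3):
    """Stacked reprojection errors of candidate 3D point X3 in both views."""
    X = np.append(X3, 1.0)
    return np.concatenate(
        [project(P, X) - o for P, o in zip([P1, P2], obs)])

sol = least_squares(residuals, x0=np.array([0.0, 0.0, 1.0]))
print(np.round(sol.x, 3))   # close to [0.5, 0.2, 3.0]
```

The same residual structure, scaled up to thousands of points and all camera parameters, is exactly what libraries like Ceres solve.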
I have the same problem. Do you have any ideas?
Hi guys, I am following the notebook from https://github.com/facebookresearch/DensePose/pull/99
I successfully ran the DensePose test on video, with reference to https://github.com/trrahul/densepose-video.
While testing, I had two questions:
1. When I detect an object using DensePose, where are the keypoint coordinates stored?
(In which variable?)
Example: get_keypoints() in keypoints.py?
2. What information can I use from the DensePose data?
Thank you! I would appreciate your detailed opinion.