Questions about imitation inference #86
Hello,
the agent's motion is abnormal. Here is the gif and my conda env pip list: absl-py 2.1.0 …
Hi! Thanks for your interest in my work. PHC is a motion-tracking policy that imitates per-frame reference motion. This formulation allows it to imitate many different types of motion (unlike DeepMimic/AMP, which is tethered to one type of motion). The reward for training PHC is to get as close to the reference motion as possible. As a result, PHC does not really try to "fix" foot sliding, since its primary goal is to "imitate" the reference motion. It will try to imitate the foot-sliding motion as closely as it can using physically plausible actions. The foot jittering and high-frequency motion you are observing is PHC trying to keep its balance while fitting a reference motion that contains foot sliding. As for your second video, it looks like the agent is imitating the reference motion decently? The reference (red dots) is just standing still without moving.
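For intuition, a per-frame tracking reward of this kind might look like the following. This is a minimal sketch assuming position-based tracking with exponential shaping; it is not PHC's exact reward formula:

```python
import torch

def tracking_reward(sim_joint_pos, ref_joint_pos, k=100.0):
    """Hypothetical per-frame imitation reward (not PHC's exact formula).

    The policy is rewarded for matching the reference joint positions
    at every frame, so artifacts in the reference (e.g. foot sliding)
    are imitated as closely as physics allows, not corrected.
    """
    # Mean squared joint-position error for this frame.
    err = ((sim_joint_pos - ref_joint_pos) ** 2).sum(dim=-1).mean(dim=-1)
    # Exponential shaping: reward approaches 1 as the error goes to 0.
    return torch.exp(-k * err)
```

Under a reward like this, the highest-return behavior is to reproduce the reference as faithfully as the physics allows, which is why the policy imitates foot sliding rather than cleaning it up.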
@dbdxnuliba You can try pressing the "M" key and then "J" in the window, which is also described in the readme, and you will see the character's get-up motion.
@ZhengyiLuo I have also realized this problem: the physics-based frame-by-frame imitation method does not seem suitable as a post-processing module for fixing non-physical artifacts in motion. I have read some other papers, such as PhysDiff, which adds a physics module to the training process of motion generation, but its efficiency is very low. Do you have any other suggestions? Looking forward to your insights and replies.
@PeterWangyi Michael Black had a paper about that. https://www.youtube.com/watch?v=Dufvp_O0ziU
@PeterWangyi Thanks for your answer. May I ask another question: how did you obtain your motion file, and what full command did you use? Something like:
python phc/run_hydra.py learning=im_mcp_big exp_name=phc_comp_3 env=env_im_getup_mcp robot=smpl_humanoid env.zero_out_far=False robot.real_weight_porpotion_boxes=False env.num_prim=3 env.motion_file=sample_data/abc.pkl env.models=['output/HumanoidIm/phc_3/Humanoid.pth'] env.num_envs=1 headless=False epoch=-1 test=True
Could you please send me your command and motion file, just as you showed in the gif?
Hello author, thank you for your excellent work.
I have some questions about the inference stage. In my previous experience with methods such as DeepMimic and AMP, a separate model is trained for each motion; during inference, only the initial state needs to be specified, and the rest is left to the model itself. In PHC, one model learns many motions, so in my understanding the inference stage should only need the first frame specified, with the model inferring the rest.
I ran
python phc/run_hydra.py learning=im_mcp_big exp_name=phc_comp_3 env=env_im_getup_mcp robot=smpl_humanoid env.zero_out_far=False robot.real_weight_porpotion_boxes=False env.num_prim=3 env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl env.models=['output/HumanoidIm/phc_3/Humanoid.pth'] env.num_envs=1 headless=False epoch=-1 test=True
with the motion file replaced by my own.
In the following video, the input is a walking motion with foot sliding. I observed that the model's inference tends to fit the input motion frame by frame rather than correcting the sliding problem.
So I am curious: how is inference done in PHC, and why can it imitate such a motion frame by frame?
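To make the question concrete, here is a minimal sketch of what I imagine the per-frame inference loop looks like (hypothetical names, not the actual PHC API):

```python
# Hypothetical rollout loop: the policy is conditioned on the *next*
# reference frame at every step, which would explain why it tracks the
# input motion frame by frame instead of free-running from frame one.
obs = env.reset()                      # initialize from the first reference frame
for t in range(num_frames - 1):
    goal = ref_motion[t + 1]           # per-frame target from the motion file
    action = policy(obs, goal)         # goal-conditioned action
    obs = env.step(action)             # one physics step toward the target
```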
Looking forward to your reply!
phc_comp_kp_2-2024-11-06-14.05.53.mp4