Ambiguous 'keypoints' field in data_dictionary for training #31
Comments

@neeek2303:
Reading the EMOPortraits paper, the data preparation implementation details mention that it follows the same protocols established for MegaPortraits. The MegaPortraits paper mentions using Adrian Bulat's work as the keypoint detector, which detects 3D facial keypoints.
Thank you for your response! I had also guessed that the face-alignment library (https://github.com/1adrianb/face-alignment) could be used to extract keypoints. After trying it out, I found that the data format matches and the model training converges properly.
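For reference, a minimal sketch of what using the face-alignment library to populate a `keypoints` field might look like. This is an assumption based on the thread, not the repo's confirmed preprocessing code; the dummy array stands in for a real detection so the snippet runs without the model weights:

```python
# Hypothetical sketch: 68 3D facial landmarks stored the way the
# dataset scripts appear to expect (assumption, not confirmed).
import numpy as np

# A real extraction would use the face-alignment library, roughly:
#   import face_alignment
#   fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.THREE_D)
#   landmarks = fa.get_landmarks_from_image("frame.jpg")[0]  # (68, 3) array
# Here we substitute a dummy array of the same shape:
landmarks = np.zeros((68, 3), dtype=np.float32)

# Item as it might be pickled for the dataset scripts (field name from the issue):
item = {"keypoints": landmarks}
print(item["keypoints"].shape)  # (68, 3)
```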
Hi @neeek2303, I have a question about the `keypoints` field in the items loaded from the pickle files during training, specifically the ones from the `datasets/extrime_faces_pairs.py`, `datasets/voxceleb2hq_pairs.py`, and `datasets/mead_faces_pairs.py` dataset scripts. When generating these keypoints during preprocessing, are they the pitch, yaw, and roll values calculated with the https://github.com/hhj1897/face_detection repo, or does the field represent another set of points calculated by some other means?