Best Setting for LoRA face training #2608
samuelkurt asked this question in Q&A

I want to make a LoRA that basically puts my face on the generated person, but every time I try to train, the outputs never look like me, or there is no background. What settings and models should I use for realistic LoRA training with 5 GB of VRAM?

Replies: 1 comment
To achieve realistic LoRA training with 5 GB of VRAM and ensure your face appears accurately in generated images, use the standard LoRA type as a starting point. Set the train batch size to 1 and train for 3 epochs to avoid overfitting (Train batch size, Epoch). Enable fp16 mixed precision to save VRAM (Mixed precision), and set the learning rate to 0.0001 with a cosine scheduler (Learning rate). Enable flip augmentation to exploit facial symmetry (Flip augmentation), and set Min SNR Gamma to 5 for training stability (Min SNR gamma). Also make sure your dataset contains diverse images of your face with varied backgrounds, so the LoRA learns your face rather than a single scene.
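For reference, these GUI fields correspond to command-line flags of `train_network.py` in kohya-ss/sd-scripts. The sketch below is only an illustration of the settings above, not the exact command the GUI generates: the base-model, dataset, and output paths are placeholders you must replace, and the `--cache_latents` and `--gradient_checkpointing` flags are extra low-VRAM assumptions I added, not part of the recommendation above.

```python
# Minimal sketch: launching kohya-ss/sd-scripts train_network.py with the
# settings discussed above. All paths are placeholders -- point them at your
# own base model, dataset folder, and output directory before running.
import subprocess

cmd = [
    "python", "train_network.py",                       # LoRA trainer from kohya-ss/sd-scripts
    "--pretrained_model_name_or_path", "./models/base_model.safetensors",  # placeholder base model
    "--train_data_dir", "./dataset",                    # folder with your captioned face images
    "--output_dir", "./output",
    "--output_name", "my_face_lora",
    "--network_module", "networks.lora",                # standard LoRA type
    "--resolution", "512,512",
    "--train_batch_size", "1",                          # batch size 1 for 5 GB VRAM
    "--max_train_epochs", "3",                          # 3 epochs to avoid overfitting
    "--learning_rate", "1e-4",                          # 0.0001
    "--lr_scheduler", "cosine",                         # cosine scheduler
    "--mixed_precision", "fp16",                        # fp16 to save VRAM
    "--save_precision", "fp16",
    "--flip_aug",                                       # flip augmentation for symmetry
    "--min_snr_gamma", "5",                             # Min SNR Gamma = 5 for stability
    "--cache_latents",                                  # assumption: extra VRAM saving
    "--gradient_checkpointing",                         # assumption: trades speed for memory
    "--save_model_as", "safetensors",
]

# Run the trainer; raises CalledProcessError if training exits with an error.
subprocess.run(cmd, check=True)
```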