Please help me find the best "DreamBooth LoRA" parameters for training an SD model with 15 images of a face 🙏 #958
hosein-moayedi started this conversation in General
Hey everybody! 😇
I am working on finding the best parameters for training a Stable Diffusion model on my face in different ways with 16 GB of VRAM.
After about 2.5 months I finally found good parameters for training with DreamBooth using this script (https://github.com/ShivamShrirao/diffusers/blob/main/examples/dreambooth/train_dreambooth.py), but the main problem is the size of the final result: it is the same as the base model (about 2.5 GB), which is very large. 😕
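For anyone unfamiliar with that script, the command I run is something like the following. The model name, folders, prompts, and numbers here are placeholders to show the shape of the run, not my exact settings:

# rough sketch of a DreamBooth run with the linked train_dreambooth.py (values are examples)
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./instance_images" \
  --class_data_dir="./class_images" \
  --output_dir="./dreambooth_output" \
  --instance_prompt="photo of sks person" \
  --class_prompt="photo of a person" \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --num_class_images=200 \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=1e-6 \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --max_train_steps=1500 \
  --mixed_precision="fp16"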
Now I am working on Kohya DreamBooth LoRA training (https://github.com/bmaltais/kohya_ss) and trying to find the best parameters for that.
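My training images are organized the way kohya_ss expects, with the repeat count and instance prompt encoded in the subfolder name. The folder name below is just an example layout, not my exact prompt:

/home/hoseinmoayedi98/perfect-ai/training_data/img/
└── 20_ohwx man        # "20" = repeats per epoch, "ohwx man" = instance prompt
    ├── 001.jpg
    ├── 002.jpg
    └── ...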
I watched these tutorials and tried their parameters, but the version of kohya_ss used in the videos is about 3 or 4 months old, and when I use the parameters mentioned there I do not get good results. 😔
Is there anyone who can help me find the best parameters for training my SD model with just 13 images of my face?
Tutorials:
https://www.youtube.com/watch?v=k5imq01uvUY&t=1860s
https://www.youtube.com/watch?v=70H03cv57-o
https://www.youtube.com/watch?v=TpuDOsuKIBo
https://www.youtube.com/watch?v=3uzCNrQao3o
These are my latest DreamBooth LoRA parameters; I am getting better results with them, but they are still not good enough!
Note: max train steps is 1500.
{
"pretrained_model_name_or_path": "/home/hoseinmoayedi98/perfect-ai/base_model/deliberate_v2.ckpt",
"v2": false,
"v_parameterization": false,
"logging_dir": "/home/hoseinmoayedi98/perfect-ai/training_data/log",
"train_data_dir": "/home/hoseinmoayedi98/perfect-ai/training_data/img",
"reg_data_dir": "",
"output_dir": "/home/hoseinmoayedi98/perfect-ai/stable-diffusion-webui/models/Lora/",
"max_resolution": "512,512",
"learning_rate": 1e-05,
"lr_scheduler": "constant",
"lr_warmup": 0,
"train_batch_size": 1,
"epoch": 1,
"save_every_n_epochs": 1,
"mixed_precision": "fp16",
"save_precision": "fp16",
"seed": "1234",
"num_cpu_threads_per_process": 4,
"cache_latents": true,
"cache_latents_to_disk": false,
"caption_extension": "",
"enable_bucket": false,
"gradient_checkpointing": false,
"full_fp16": false,
"no_token_padding": false,
"stop_text_encoder_training": 0,
"xformers": false,
"save_model_as": "safetensors",
"shuffle_caption": false,
"save_state": false,
"resume": "",
"prior_loss_weight": 0.1,
"text_encoder_lr": 1e-05,
"unet_lr": 1e-05,
"network_dim": 128,
"lora_network_weights": "",
"dim_from_weights": false,
"color_aug": false,
"flip_aug": false,
"clip_skip": 2,
"gradient_accumulation_steps": 1.0,
"mem_eff_attn": false,
"output_name": "trained_lora_v2.0.7",
"model_list": "custom",
"max_token_length": "75",
"max_train_epochs": "",
"max_data_loader_n_workers": "1",
"network_alpha": 128,
"training_comment": "",
"keep_tokens": "0",
"lr_scheduler_num_cycles": "",
"lr_scheduler_power": "",
"persistent_data_loader_workers": false,
"bucket_no_upscale": true,
"random_crop": false,
"bucket_reso_steps": 64.0,
"caption_dropout_every_n_epochs": 0.0,
"caption_dropout_rate": 0,
"optimizer": "AdamW8bit",
"optimizer_args": "",
"noise_offset_type": "Original",
"noise_offset": 0,
"adaptive_noise_scale": 0,
"multires_noise_iterations": 0,
"multires_noise_discount": 0,
"LoRA_type": "Standard",
"conv_dim": 1,
"conv_alpha": 1,
"sample_every_n_steps": 0,
"sample_every_n_epochs": 0,
"sample_sampler": "euler_a",
"sample_prompts": "",
"additional_parameters": "",
"vae_batch_size": 0,
"min_snr_gamma": 0,
"down_lr_weight": "",
"mid_lr_weight": "",
"up_lr_weight": "",
"block_lr_zero_threshold": "",
"block_dims": "",
"block_alphas": "",
"conv_dims": "",
"conv_alphas": "",
"weighted_captions": false,
"unit": 1,
"save_every_n_steps": 0,
"save_last_n_steps": 0,
"save_last_n_steps_state": 0,
"use_wandb": false,
"wandb_api_key": "",
"scale_v_pred_loss_like_noise_pred": false,
"scale_weight_norms": 0,
"network_dropout": 0,
"rank_dropout": 0,
"module_dropout": 0
}
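In case it helps, the config above should correspond roughly to the following command for the underlying sd-scripts train_network.py. I have not checked every flag name against the latest version, so treat it as a sketch rather than the exact command the GUI generates:

# rough CLI equivalent of the JSON config above (flag names assumed from sd-scripts)
accelerate launch --num_cpu_threads_per_process=4 train_network.py \
  --pretrained_model_name_or_path="/home/hoseinmoayedi98/perfect-ai/base_model/deliberate_v2.ckpt" \
  --train_data_dir="/home/hoseinmoayedi98/perfect-ai/training_data/img" \
  --output_dir="/home/hoseinmoayedi98/perfect-ai/stable-diffusion-webui/models/Lora/" \
  --logging_dir="/home/hoseinmoayedi98/perfect-ai/training_data/log" \
  --output_name="trained_lora_v2.0.7" \
  --resolution="512,512" \
  --network_module=networks.lora \
  --network_dim=128 --network_alpha=128 \
  --learning_rate=1e-5 --unet_lr=1e-5 --text_encoder_lr=1e-5 \
  --lr_scheduler="constant" \
  --train_batch_size=1 --max_train_steps=1500 \
  --mixed_precision="fp16" --save_precision="fp16" \
  --seed=1234 \
  --cache_latents \
  --optimizer_type="AdamW8bit" \
  --clip_skip=2 \
  --max_data_loader_n_workers=1 \
  --save_model_as=safetensors

Since output_dir points at the webui Lora folder, the .safetensors file lands there directly and I activate it in the prompt with something like <lora:trained_lora_v2.0.7:0.8>.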