Replies: 1 comment
Sorry for the duplicate
I got it to work once with another config, but the results were worse than the old embedding I trained on SD 1.5, so I've been tweaking just about everything to try to get better results. Bucketing might be the one setting I haven't tried changing back to what it was the first time it worked. Assuming I go back to my cropped 512x512 training images, how can I get good results without running into memory problems? I've tried this with and without regularization images, and it seemed to be better without.
```shell
accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py" \
  --pretrained_model_name_or_path="/home/joe/projects/stable-diffusion-webui/models/Stable-diffusion/sdXL_v10VAEFix.safetensors" \
  --train_data_dir="/home/joe/Pictures/AI/LORA-train/Me/img" \
  --output_dir="/home/joe/Pictures/AI/LORA-train/Me/model" \
  --logging_dir="/home/joe/Pictures/AI/LORA-train/Me/log" \
  --output_name="myselfmodel" \
  --save_model_as=safetensors \
  --resolution="768,768" \
  --enable_bucket --min_bucket_reso=256 --max_bucket_reso=2048 \
  --bucket_reso_steps=64 --bucket_no_upscale \
  --network_module=networks.lora --network_dim=64 --network_alpha="1" \
  --learning_rate="0.0004" --text_encoder_lr=0.0004 --unet_lr=0.0004 \
  --lr_scheduler="constant" --lr_scheduler_num_cycles="20" \
  --optimizer_type="Adafactor" \
  --optimizer_args scale_parameter=False relative_step=False warmup_init=False \
  --train_batch_size="1" --max_train_steps="22400" \
  --save_every_n_epochs="1" \
  --mixed_precision="fp16" --save_precision="fp16" --no_half_vae \
  --cache_latents --cache_latents_to_disk \
  --max_data_loader_n_workers="0" \
  --mem_eff_attn --sdpa --gradient_checkpointing \
  --noise_offset=0.0 \
  --sample_sampler=dpm_2 \
  --sample_prompts="/home/joe/Pictures/AI/LORA-train/Me/model/sample/prompt.txt" \
  --sample_every_n_epochs="1"
```
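For reference, if the goal is to go back to the cropped 512x512 set without hitting memory problems, one direction to try is dropping the bucketing flags entirely and matching `--resolution` to the crop size. The sketch below is an untested variant of the command above, not a known-good recipe: it assumes the same paths and the standard kohya sd-scripts flag names, and lowering `--network_dim` is only a hypothetical knob for reducing VRAM use, not something the original post tried.

```shell
# Sketch: same training run, but targeting pre-cropped 512x512 images.
# All images are already one size, so the bucketing flags
# (--enable_bucket, --min/max_bucket_reso, --bucket_reso_steps,
#  --bucket_no_upscale) can simply be omitted.
accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py" \
  --pretrained_model_name_or_path="/home/joe/projects/stable-diffusion-webui/models/Stable-diffusion/sdXL_v10VAEFix.safetensors" \
  --train_data_dir="/home/joe/Pictures/AI/LORA-train/Me/img" \
  --output_dir="/home/joe/Pictures/AI/LORA-train/Me/model" \
  --output_name="myselfmodel" --save_model_as=safetensors \
  --resolution="512,512" \
  --network_module=networks.lora --network_dim=32 --network_alpha="1" \
  --learning_rate="0.0004" --lr_scheduler="constant" \
  --optimizer_type="Adafactor" \
  --optimizer_args scale_parameter=False relative_step=False warmup_init=False \
  --train_batch_size="1" \
  --mixed_precision="fp16" --save_precision="fp16" --no_half_vae \
  --cache_latents --cache_latents_to_disk \
  --gradient_checkpointing --sdpa
```

Two notes on the original flags: `--mem_eff_attn` and `--sdpa` are both passed, but they select alternative attention implementations, so one of them (typically `--sdpa` on recent PyTorch) should be enough; and training at 512x512 instead of 768x768 roughly halves activation memory on its own, which may matter more than any other single change here.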