[Training] Add `datasets` version of LCM LoRA SDXL (huggingface#5778)
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix: dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review
  Co-authored-by: Suraj Patil <[email protected]>
* make vae encoding batch size an arg
* final serialization in kohya
* style
* state dict rejigging
* feat: no separate teacher unet.
* debug
* fix state dict serialization
* debug
* debug
* debug
* remove prints.
* remove kohya utility and make style
* fix serialization
* fix
* add test
* add peft dependency.
* add: peft
* remove peft
* autocast device determination from accelerator
* autocast
* reduce lora rank.
* remove unneeded space
* Apply suggestions from code review
  Co-authored-by: Suraj Patil <[email protected]>
* style
* remove prompt dropout.
* also save in native diffusers ckpt format.
* debug
* debug
* debug
* better formation of the null embeddings.
* remove space.
* autocast fixes.
* autocast fix.
* hacky
* remove lora_sayak
* Apply suggestions from code review
  Co-authored-by: Younes Belkada <[email protected]>
* style
* make log validation leaner.
* move back enabled in.
* fix: log_validation call.
* add: checkpointing tests
* taking my chances to see if disabling autocasting has any effect?
* start debugging
* name
* name
* name
* more debug
* more debug
* index
* remove index.
* print length
* print length
* print length
* move unet.train() after add_adapter()
* disable some prints.
* enable_adapters() manually.
* remove prints.
* some changes.
* fix params_to_optimize
* more fixes
* debug
* debug
* remove print
* disable grad for certain contexts.
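The items above boil down to one pattern: the new script loads its training data through 🤗 `datasets` and injects a LoRA adapter into the SDXL UNet via `peft`, rather than keeping a separate teacher UNet. A minimal sketch of that pattern follows; the dataset name, rank, and target modules are illustrative assumptions, not values taken from the script.

```python
# Sketch only: load image-caption data with 🤗 datasets and attach a LoRA
# adapter to the SDXL UNet via peft. Names and values below are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from diffusers import UNet2DConditionModel

dataset = load_dataset("lambdalabs/naruto-blip-captions", split="train")  # hypothetical dataset

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
unet.requires_grad_(False)  # only the LoRA parameters will train

lora_config = LoraConfig(
    r=64,            # cf. "reduce lora rank." above; the exact value is an assumption
    lora_alpha=64,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet.add_adapter(lora_config)
unet.train()  # per "move unet.train() after add_adapter()" above
```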
* Add support for IPAdapterFull (huggingface#5911)
  Co-authored-by: YiYi Xu <[email protected]>
  Co-authored-by: Patrick von Platen <[email protected]>
* Fix a bug in `add_noise` function (huggingface#6085)
  * fix
  * copies
  Co-authored-by: yiyixuxu <yixu310@gmail.com>
* [Advanced Diffusion Script] Add Widget default text (huggingface#6100)
* [Advanced Training Script] Fix pipe example (huggingface#6106)
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (huggingface#5901)
  * adapter for StableDiffusionControlNetImg2ImgPipeline
  * fix-copies
  * fix-copies
  Co-authored-by: Sayak Paul <[email protected]>
* IP adapter support for most pipelines (huggingface#5900)
  * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
  * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
  * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
  * update tests
  * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
  * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
  * support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
  * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
  * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
  * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
  * revert changes to sd_attend_and_excite and sd_upscale
  * make style
  * fix broken tests
  * update ip-adapter implementation to latest
  * apply suggestions from review
  Co-authored-by: YiYi Xu <[email protected]>
  Co-authored-by: Sayak Paul <[email protected]>
* fix: lora_alpha
* make vae casting conditional.
* param upcasting
* propagate comments from huggingface#6145
  Co-authored-by: dg845 <[email protected]>
* [Peft] fix saving / loading when unet is not "unet" (huggingface#6046)
  * Update src/diffusers/loaders/lora.py
  * undo stablediffusion-xl changes
  * use unet_name to get unet for lora helpers
  * use unet_name
  Co-authored-by: Sayak Paul <[email protected]>
* [Wuerstchen] fix fp16 training and correct lora args (huggingface#6245)
  Co-authored-by: Sayak Paul <[email protected]>
* [docs] fix: animatediff docs (huggingface#6339)
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (huggingface#6046)"
  This reverts commit 4c7e983.
* Revert "[Wuerstchen] fix fp16 training and correct lora args (huggingface#6245)"
  This reverts commit 0bb9cf0.
* Revert "[docs] fix: animatediff docs (huggingface#6339)"
  This reverts commit 11659a6.
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().
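Several of the PRs folded in above (huggingface#5900, huggingface#5901, huggingface#5911) extend the same user-facing IP-Adapter API to more pipelines. A hedged sketch of that API, using text-to-image as one example; the checkpoint choice and image URL are placeholders:

```python
# Sketch of the IP-Adapter usage these PRs broaden to more pipelines.
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One call wires the image-prompt adapter into the pipeline's attention layers.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")

reference = load_image("https://example.com/reference.png")  # placeholder URL
image = pipe(
    prompt="best quality, high quality",
    ip_adapter_image=reference,  # the argument these PRs add to each pipeline
    num_inference_steps=30,
).images[0]
```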
---------

Co-authored-by: Suraj Patil <[email protected]>
Co-authored-by: Younes Belkada <[email protected]>
Co-authored-by: Fabio Rigano <[email protected]>
Co-authored-by: YiYi Xu <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: apolinário <[email protected]>
Co-authored-by: Charchit Sharma <[email protected]>
Co-authored-by: Aryan V S <[email protected]>
Co-authored-by: dg845 <[email protected]>
Co-authored-by: Kashif Rasul <[email protected]>
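For completeness, the artifact this training script produces loads like any other diffusers LoRA ("also save in native diffusers ckpt format" above). A usage sketch, assuming a local output directory; the few-step, low-guidance settings follow standard LCM-LoRA practice:

```python
# Sketch: run SDXL with the trained LCM LoRA. The output path is a placeholder.
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)  # LCM sampling

pipe.load_lora_weights("path/to/lcm-lora-sdxl-output")  # hypothetical training output dir

image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=4,   # LCM LoRAs need only a few steps
    guidance_scale=1.0,
).images[0]
```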