From 3e29eca2484943e1eb3cc9dd1887685bd1535a97 Mon Sep 17 00:00:00 2001 From: Pbihao <1435343052@qq.com> Date: Mon, 19 Aug 2024 20:07:02 +0800 Subject: [PATCH 1/4] Update README.md --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 015c005..04a8a94 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ -## [📝 Project Page](https://pbihao.github.io/projects/controlnext/index.html) | [📚 Paper](https://arxiv.org/abs/2408.06070) | [🗂️ Demo](https://huggingface.co/spaces/Eugeoter/ControlNeXt) +## [📝 Project Page](https://pbihao.github.io/projects/controlnext/index.html) | [📚 Paper](https://arxiv.org/abs/2408.06070) | [🗂️ Demo (SDXL)](https://huggingface.co/spaces/Eugeoter/ControlNeXt) **ControlNeXt** is our official implementation for controllable generation, supporting both images and videos while incorporating diverse forms of control information. In this project, we propose a new method that reduces trainable parameters by up to 90% compared with ControlNet, achieving faster convergence and outstanding efficiency. This method can be directly combined with other LoRA techniques to alter style and ensure more stable generation. Please refer to the examples for more details. @@ -120,4 +120,4 @@ If you can't load the videos, you can also directly download them from [here](Co journal={arXiv preprint arXiv:2408.06070}, year={2024} } -``` \ No newline at end of file +``` From a9172900742f78f7b6912ff62b5bece28eb3b741 Mon Sep 17 00:00:00 2001 From: Ikko Eltociear Ashimine Date: Tue, 20 Aug 2024 00:50:11 +0900 Subject: [PATCH 2/4] docs: update experiences.md Traing -> Training --- experiences.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/experiences.md b/experiences.md index d0aaacd..f77b255 100644 --- a/experiences.md +++ b/experiences.md @@ -45,7 +45,7 @@ However, if this value is set too high, the control may become overly strong and So you can adjust it to get a good result. In our experiences, for the dense controls such as super-resolution or depth, we need to set it as `1`. -### 6. Traing parameters +### 6. Training parameters One of the most important findings is that directly training the base model yields better performance compared to methods like LoRA, Adapter, and others.Even when we train the base model, we only select a small subset of the pre-trained parameters. You can also adaptively adjust the number of selected parameters. For example, with high-quality data, having more trainable parameters can improve performance. However, this is a trade-off, and regardless of the approach, directly training the base model often yields the best results. 
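The training-parameters note above (directly fine-tune a small subset of the base model rather than attaching LoRA/Adapter modules) can be sketched in a few lines of PyTorch. This is only an illustration of the recipe; the substring patterns used to pick the trainable weights are placeholders, not the subset ControlNeXt actually selects.

```python
import torch
from diffusers import UNet2DConditionModel

# Load the SDXL base UNet that will be partially fine-tuned.
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)

# Freeze everything, then unfreeze only a small named subset of parameters.
# These substring patterns are illustrative placeholders, not ControlNeXt's rule.
TRAINABLE_PATTERNS = ("attn2.to_q", "attn2.to_out")

unet.requires_grad_(False)
trainable = []
for name, param in unet.named_parameters():
    if any(pattern in name for pattern in TRAINABLE_PATTERNS):
        param.requires_grad_(True)
        trainable.append(param)

print(f"trainable parameters: {sum(p.numel() for p in trainable):,}")

# Only the selected subset is handed to the optimizer.
optimizer = torch.optim.AdamW(trainable, lr=1e-5)
```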
@@ -58,4 +58,4 @@ One of the most important findings is that directly training the base model yiel journal={arXiv preprint arXiv:2408.06070}, year={2024} } -``` \ No newline at end of file +``` From 8ac4b73317919a3d39671487176ee526433a395d Mon Sep 17 00:00:00 2001 From: Euge <1507064225@qq.com> Date: Tue, 20 Aug 2024 11:14:05 +0800 Subject: [PATCH 3/4] support training for sdxl --- ControlNeXt-SDXL-Training/README.md | 55 + .../examples/vidit_depth/condition_0.png | Bin 0 -> 153513 bytes .../examples/vidit_depth/train.sh | 19 + .../models/controlnet.py | 495 ++++++ ControlNeXt-SDXL-Training/models/unet.py | 1387 ++++++++++++++++ .../pipeline/pipeline_controlnext.py | 1378 ++++++++++++++++ ControlNeXt-SDXL-Training/requirements.txt | 11 + .../train_controlnext.py | 1449 +++++++++++++++++ ControlNeXt-SDXL-Training/utils/preprocess.py | 38 + ControlNeXt-SDXL-Training/utils/tools.py | 151 ++ ControlNeXt-SDXL-Training/utils/utils.py | 225 +++ ControlNeXt-SDXL/README.md | 63 +- .../anime_canny/{script.sh => run.sh} | 0 .../{script_pp.sh => run_with_pp.sh} | 0 .../vidit_depth/{script.sh => run.sh} | 0 ControlNeXt-SDXL/models/controlnet.py | 51 +- ControlNeXt-SDXL/models/unet.py | 6 +- .../pipeline/pipeline_controlnext.py | 1 - ControlNeXt-SDXL/run_controlnext.py | 6 + ControlNeXt-SDXL/utils/tools.py | 65 +- 20 files changed, 5337 insertions(+), 63 deletions(-) create mode 100644 ControlNeXt-SDXL-Training/README.md create mode 100644 ControlNeXt-SDXL-Training/examples/vidit_depth/condition_0.png create mode 100644 ControlNeXt-SDXL-Training/examples/vidit_depth/train.sh create mode 100644 ControlNeXt-SDXL-Training/models/controlnet.py create mode 100644 ControlNeXt-SDXL-Training/models/unet.py create mode 100644 ControlNeXt-SDXL-Training/pipeline/pipeline_controlnext.py create mode 100644 ControlNeXt-SDXL-Training/requirements.txt create mode 100644 ControlNeXt-SDXL-Training/train_controlnext.py create mode 100644 ControlNeXt-SDXL-Training/utils/preprocess.py create mode 100644 ControlNeXt-SDXL-Training/utils/tools.py create mode 100644 ControlNeXt-SDXL-Training/utils/utils.py rename ControlNeXt-SDXL/examples/anime_canny/{script.sh => run.sh} (100%) rename ControlNeXt-SDXL/examples/anime_canny/{script_pp.sh => run_with_pp.sh} (100%) rename ControlNeXt-SDXL/examples/vidit_depth/{script.sh => run.sh} (100%) diff --git a/ControlNeXt-SDXL-Training/README.md b/ControlNeXt-SDXL-Training/README.md new file mode 100644 index 0000000..b138edf --- /dev/null +++ b/ControlNeXt-SDXL-Training/README.md @@ -0,0 +1,55 @@ +# 🌀 ControlNeXt-SDXL + +This is our **training** demo of ControlNeXt based on [Stable Diffusion XL](stabilityai/stable-diffusion-xl-base-1.0). + +Hardware requirement: A single GPU with at least 20GB memory. + +## Quick Start + +Clone the repository: + +```bash +git clone https://github.com/dvlab-research/ControlNeXt +cd ControlNeXt/ControlNeXt-SDXL-Training +``` + +Install the required packages: + +```bash +pip install -r requirements.txt +``` + +Run the training script: + +```bash +bash examples/vidit_depth/train.sh +``` + +The output will be saved in `train/example`. 
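Before launching a run it can help to confirm that the dataset wired into the example script exposes the columns the trainer expects. A minimal sketch, assuming the Hugging Face `datasets` package is installed; the dataset name and column names are taken from the Usage command in the next section.

```python
from datasets import load_dataset

# The example depth dataset used by examples/vidit_depth/train.sh.
ds = load_dataset("Nahrawy/VIDIT-Depth-ControlNet", split="train")

# The trainer maps these via --image_column, --conditioning_image_column and --caption_column.
print(ds.column_names)  # expected to include "image", "depth_map", "caption"

sample = ds[0]
print(sample["caption"])
# Image columns decode to PIL images, so basic sanity checks are straightforward.
print(sample["image"].size, sample["depth_map"].size)
```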
+
+## Usage
+
+```bash
+accelerate launch train_controlnext.py --pretrained_model_name_or_path "stabilityai/stable-diffusion-xl-base-1.0" \
+--pretrained_vae_model_name_or_path "madebyollin/sdxl-vae-fp16-fix" \
+--variant fp16 \
+--use_safetensors \
+--output_dir "train/example" \
+--logging_dir "logs" \
+--resolution 1024 \
+--gradient_checkpointing \
+--set_grads_to_none \
+--proportion_empty_prompts 0.2 \
+--controlnet_scale_factor 1.0 \
+--mixed_precision fp16 \
+--enable_xformers_memory_efficient_attention \
+--dataset_name "Nahrawy/VIDIT-Depth-ControlNet" \
+--image_column "image" \
+--conditioning_image_column "depth_map" \
+--caption_column "caption" \
+--validation_prompt "a stone tower on a rocky island" \
+--validation_image "examples/vidit_depth/condition_0.png"
+```
+
+> --pretrained_model_name_or_path : the pretrained base model \
+> --controlnet_scale_factor : the strength of the ControlNet output. For depth, we recommend 1.0, and for canny, we recommend 0.35 (see the sketch below) \
diff --git a/ControlNeXt-SDXL-Training/examples/vidit_depth/condition_0.png b/ControlNeXt-SDXL-Training/examples/vidit_depth/condition_0.png
new file mode 100644
index 0000000000000000000000000000000000000000..9b433674f19bfea18f0df7beb133053750a9652a
GIT binary patch
literal 153513
[base85-encoded image data for condition_0.png (153,513 bytes) omitted]
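The --controlnet_scale_factor note above can be pictured with a minimal sketch of the idea behind it: the control branch output is resized to the UNet feature resolution and added with a configurable strength. The module name and tensor shapes below are illustrative placeholders, not the repository's actual classes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaledControlInjection(nn.Module):
    """Illustrative only: merge a control feature into a UNet hidden state
    with a strength analogous to --controlnet_scale_factor."""

    def __init__(self, scale: float = 1.0):
        super().__init__()
        self.scale = scale  # ~1.0 for dense controls (depth), ~0.35 for canny

    def forward(self, hidden_states: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
        # Resize the control features to the UNet feature resolution, then add them.
        control = F.interpolate(control, size=hidden_states.shape[-2:], mode="nearest")
        return hidden_states + self.scale * control


# Example: a weaker injection, as recommended for canny conditions.
inject = ScaledControlInjection(scale=0.35)
h = torch.randn(1, 320, 64, 64)  # stand-in UNet hidden state
c = torch.randn(1, 320, 32, 32)  # stand-in control branch output
print(inject(h, c).shape)        # torch.Size([1, 320, 64, 64])
```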
z99%MPwsylVZ>JqtiXx`BLrjqtqi6W(ZWC*T6B`qsUMJ_8Iv$)pJiVy2vV7aCYsBjy zwd<&ZG2(_NnLK+&IIplm_q8+BdP7pz^gv5Uz%1Ujuu1rThE&-KB8l)Sfg#{JqPNhMmyQI6NStO-IknY;&TEG9_{kXos7YHopGjrz5%$ZMi z#gzqj6Wl=q-2MluMBhvy-2CVS_#Z9`BjTCgi{pJi0Q8YZ%&6a73-lHui5 z##YTGf2Ap_@xfunaJzub?dH!uS7j};3nDGevHqv8x8@5198c!C{pqPVg-3q9uKWJv zxhtGfB}Iq?XCR&*qx+Ie%K3TByzi75%kg!z0}FAv)b}~bajEm-@{VDy`_)5K!$U~M z0E<&+F+}}`U+_;`K2fk6=>Tr=vIsr;2i%2CNWqWfp4)3rod#tGeUUeU!Hx1I8j6n0 zuu?f3OvXHP#ykN9K$8BE2e942$4#Q&t8n_YF|P@MzZ zRJ5Z?J|v93!`wWv)3hSd6dW5=;h{iLSiUL0%ey;V{bllr5d`0VyOCf!5*ugx*Ux9= zV>MP!{kr(H9Hm91`Jfzi|DrGm!ek#vk|97B2>}Wk_P=g*RCU4G=%@SmG7;-O$+FVla}1R4fS>rS>-3M)K$#cN^am5~_I4U}-2eQ!HS~ z*cj;^cCBZ3^EVzR#?buwgJ&fmzxjUvrHPoU4k;NT`;Q zW|~*TuD3c%^n91!=6p|5a?TL@{moR-Uax;gig-)O-!QzQ=&})B5<#wJw5rs8sKW<5 znpm6nMnbPhIo2=jW8Mt>yxPjTK4=mCJl{3UMR;2fHQX(0Jp>}IvSq@6960Hr%mVsq zAog1D+u{;~78D66?LGnM@>fddCLU&+EY!2NFPK-EG1LujX^NFT!@>miaEwnw;h+xc_Rn`~3?8^jLIZiFFKkprb@$3Nza_cWx3DGtCl zzN0_ZbN~cx;(!q>#Kc!G!Rxt$Uxc(gak1bE#=o7)(lJkA+NH%zCyS@vj3~?$K!0HV zhg(1-3_C;GfyC?{A*~1E0bmfw;A`ZvHm`mD*jOQ@`k9w2X(pteLd|=9uZfpWWj=4- zm`)(y&JJXhNI-!>r;Sg?5z~$sh#t(bNN10TSYY1}io8+4gvU9v@s+&oP%7|OFmOz` zbZ(m%D4oic9DIg2xRtuQyPo#^Bi5v0topObev!;u-K)pZlDzhQ{kKoO`jPy-G*Rw< zGM~7gd0BXW6`C#_^>+Hf8o#?MIc>9TKO1!%SUu^eaaxSnZ6#coHZo}_S)jFP^QiCN z6S=PasSW@u>@))}sn(y_Yil`&B?)LKQSoThp`+&9%fCk-teG7fc=UcBVX_?5)}#Zz z1+Bg|gdbwp_mAD#kVE5F27uZ2{tp@8=i`R=;L7M`?UsXMm!cpLvuepTV-!!@9(Zh+w$|W9_ zoi)?hryqwgeM?J)Z{)PqtP=1Uf_7S-fRK09e6$U8VmX)&FX#sKq(Jd*83C_l@($ev*w=rkf%x*t{wUE4?3h#YgDeDQL%;8v9M*MC%yRi?L{#9 zj8sKVD6|t-0H7Hs2d}oTqj-=`_JAASbpT9oM-P&83*G*qe zzxdVH(jxWU@akvkM}NR|piU{s8Qv~-=^1F4Idz_yKpa6`lvMXL=h8afG z)O^^O+)_$hpzrTwgCUb`Y6?y$MrTONrq7z)d+7$#;MXKWptFE_INP={`*7SSo>S&& z@h)aco|)TJA(f&vzT6N6wXMYYM)T)pkJaG!Wx-$;^w*`{Vyh_DXi!hot$W0fE@4WF zb#`VxOVmoFs6`T0kBpeZ6D|sTb41+FoAIR+__udpX45wy)yOcxS`q9o8KyCzIyiVa zykesq=w%^+MakjpRZtedD}9(VbmKp`m^im?Q$vp*qaGV{I&kiP+=U|B3;cY16rl6* zlz|bZg$m{zEWyeXCly2_5o0)dI2le<1a?WD^VtkX->c?dn02~qEGSijyA5Kmb(66f zQV?_CvneRB;lE1+HJ=L+> zxdK@~SP7r?cU=r}%FPRTlSX7Z7!QA$kPME__a^LQRH@XyP`wW(fO<_8emyXl^lk6m z6qJpY1|~d=tY8uUVBOHliZE9neuAJ-zxXMN{Gg;s?TgUb@5RUV?Q^&a(Ziw$MQLS^i;o-+l_FTL1j0S_rz z77toABy3Hz{H0|M<%L3saoHeHbnMW;i{_<+DLNxf&>~^QxK>2GCltS#=3W}~tsNmb zB+jdui}$&o#%Tt|%r5(a{`!$Uqd|>}bqMFjTn#}vTxmPkf8{7RRUF|{C@n?|fx*uVzn$9`D-825GH}*`A6bqc87p){79tv@O z^XoZibeyi9Aj;)8`Mdvof>dBiF)FW{-k0cNKOM6ao-j}0xrhIO) zNqrt87%<71WW;fGwjlm!RC$8gd?=GxD7Ve%=dj~v1FV5{yD5?;3)`p0X>!0{sZmm` zyp(q?Qg)7_0IaDaeYL=-ri=q{_U2Qzs3tMD91W#_r>4M>>-Vu=tlZ7qZVoczE)&AF zGG6;=fgv2q$cbH`h$~^jh+=^Urey^Xkx@W1BLSluIkf!B8o3HZ1py!IiTG}r2Rxe} zOyn#P(9>`*6YW5Ta&t?-m%+ZGB{H1zQAFO&&;?%aY;jA9HopKyqnM-|)Ur_gtKsK_ zK=E8UJmhXIE5X-;CIYDfGxQ-Z=-X8?L-J;dPyZp742fjmu8VEtV!HYkIXDqn0ev0A zwe#_f*E?zObAeJe&Dd4{A!DALa*+F%h(h}568a>h18LvxYyKa^nqcuD`kp?No=B#j z2%WI?bpGekty;kCRYY-wuJCnni9EcIhdraKNnBThU?_K1Gs&-VK4MyJr%}jnd~;Bu z2l#m3CA0;tuF*9iG4x{j>;CqWraxs(xb4gIF5gWm~FNt zaY%F;rgqB|7dV{#j@+g0ih``Rt6YqmfLu0vpSPjJcxF#WKV?q`&RK2Pxb(1)aCSNX zv6Hm#H+DQCB&0E{ii(31o)Y%U`TO-ik)(%=ago8->9nBhHbi@uiTA0_AM3Mv_fEFF z8WHB$VLH&3W%9zmIcs}!?T09@M^mv>Ap4a_X@$b;d&3=+oCFiUt8l#DxV`S<;gHD@|5St>fH4t4qCqp z4DzMJ_G-ev%W6yxC+ExOrFxk^zh9eNr#$U_vhxHmcfIX52&|>w%92+<9t_<5=s0xu zqma9NmjPBseyJ>FhCf= z(})k;BfF+lwgzA{?fpn}XlN+t)9!*-Bm;J1`Ovk^P=o{&OnM^9Lsu_4 zV(B|TPB-VMnV$ z9Tt%ZQ_;$>4RpMxxUjH}eg>po-2V^5JQTNslg{iatL;&cz3Gnz9i7reLN9mvi=?i9 z1YPnkk3>_XJ@3qw8q_nythA8nM_o0YVlgnnmIZ=EXphoca3G9YPxC+Mf=!b;X#X*c zt89n7koo}{n8^pzY?0QwMk%q$9jx2YhTc!;r=hidX ze&5Kiq8fBj8##L4xplwMBY#@+ ziNj(;UC_>8YXp2EtcVFl$w_-K0K<9Qw>nBDs?N+6XcUq6H}7!1kd7iuF1;4cMQ_D+aiO(p#Uj 
z5q5{AI)hfvPi%&1RY^DmRE9ztAP9HHcAA>;Q~~LjkBN_hgP9_&RIcr6*bp@o1qSWc zpd6INUB+9&HT`H0phhFnf&gV(RP7#52h&JgHl~VX&o~rXl|6&HP49 zovq{KMBzj5xq@o8{I#^m9heAXS9-DL>WHs8o*tzhf5m984zhv{zD5o8dj73)HE!lw zT%_3Kb{I8&MYAxkhMBTT#~$RgF+I3*dX890S-OhOs}H#SD|OWZD_UwB`??=AhS!cb zJKsaVv)C^JDeU-Y++ zNgzDwZcUaxcCf;UZHD_X(0_3TZCDr!Xc&XilUwSv>Cw^w=(3sA6T2Cl|D`E)Qj0Dh z*@{_wRni0p;mm#KXo*GD+bN2Y&&l{_mjjMmX!TbQb#noZ$B=;H$sA7h22OVDr@1Wn zWZt2I_EU1=zIXGw_+Zs{t;{Tj`&Dbi_|m~9R@RV}x?*)1&0mwYmZGRB>C{p@I&y0p zAHt#Lr~;;kso8AbfPWDMSpXr#x5g4P8R#{)`f zbCJX-X~66UPz>+6@9)57h_#|@E}3_aapPTY0OhU{4`YNbk#|ivnKvF%e*;^!K522_ zLN)$MEEz}RO4_?{g9ppkS+|fPeC!CVpv1~L z{5oO>FIixe0_GoF#>ZhjA?>KHy$(1?W(JhFNMy;Kj6)4=s9+;caV@py9&IP%(*wK( zA5;Qx&v1NO%CxzL=IKEez$?JtN?u@wzzZs6kfdM~i`R@P?;kaOpKEJL&n&r~ZX$SE zS>RGyy8ljDnAGY?`b);<_yGI>+XTK7tJIW{?`31tJ&jEwCedR_J!Yb!{#R(^fhXJnFNdv%3)9slwk7_$0bt`#POywy|AY1O`_HSo9+2Dg1Q#2-)J71I z77tC-@85is@}CW$``Lh>7-bQcK>psHyM&Otl%TxiOsAB9Ac%nl@v_RmulCIM@AP1) z+3`bSTn*c)mnu26`?uiem3QC)JOl33yfirAdL|9Jk*oIxR@$YBZ5@ps@(ue~2kLD2XHQEe{J9sBWmiM5ciF0^=^^ zrUn?ke2>Gyky?ATl3VO}dgx98ENQh=Fu{mMnz5IDu3Q~HP>%oV=zO)VomzRVtO~?8 zMzHr3+p#&d>AHMF*rIp7<4H5q!=_+{v#!n8DU@c$%B(s-tZ#qjnL1;(5fAaP7cWLvXyDkNuUOmAe^zvf7$FYyEZkaam>q1})80R7n2$ZsSoNC2Wp(li7T0eW{-oPy%95>kNPh2`FtOptqn; z4hSyN!xYJBt@|Q@nu7F)ef2cxa>AE2LuYa>E}SN9&TTL`j)dcpUY$|}*bR>)2JqCU zh$QBq#1FrhcP??AWIr2=x0y)qQk1ijOUk90=x6ux?$Q43m5KlUEE<_%E|+o4a|NNz zm!Zs9pyf+Y)jJL|3kKZ{2TFW^(vbF*d%;L;*a4Vthd^odPWnYN&pmXj_cv)Hkb}_NCwp23|9UlG zI_M}N=B9HwDAZuuGhJQq)0-!UzNOe~;@jfeLRgEN6=kEd(_z14b*kbe9DkKGtk_|_ zXiiP{lw9z<`wfQG|Nazht`YrSwba)1d3k;~TKKkU+LN6#>Ff2KPWRzNeY75jd!vVH z-7p&)L`(ziagG7ohF{Ri@$#9jlxKk}BRIHl-n#Z&mrmgO-QE^rt^E3aJF8MQtEiwz zJcZH2acUfe_I=}kqk75n%`!SWsX5f`EaP@C+9J~ay~LC^J*Y1tfM%xjol#=3PE-F_ z(5jO{Pk@x%p&i^uU*JQ#Stqg-j>V}9wQRQ@GVcQ}dt_p4k?G)YU`UA7@aWBDYU`I% z(LvezrMbcI&9ShsVH(H5FC4Zcme0yyaQH5pALu*1ZHHKztsmT_{Emr)+mh&mFOL+g z#1b;P*00+8HxlVeZKQygrTKgHLI+W+b@ZWkiJTzc*4b+xDLnz*Wb?(C0pxafo63%n5aQrjYaGJW2FBn z1#FJ4nhx~?fq?)SNzprlfhi3EgbaQ7{Qz2D!n{+!O)eP6F-2!$1kYJXlj&WEa$>(u z2phZiqp$RN`mky35pV|<`2;7Gm+H3FSsQ5fnL7iM?NZcl1p12v6#C zE+J@hT`Xt?UVfXN(RM*dd#fmQN|B(f_DIc_rR91wAw59U$FxeaWT(r8p51Yf$9u=M zw%Ay`@@he<{^DD6R*(Z?K5FFVbt;EJy_4tkpKsF}nz|Vie6)Am`FRCp%Yz9d<-#Jo zy|N#4tL}e!k(8adf8D{|x+8Vf?PubDVk32Un)Uq%n$6-Z^!=`k{O)E)>ZZGg93?nA zXHyAP36BRHx?H~HN^csd&`*gtUmQ);y4>2Lq!g^{ri8h?mEH~L!;2R#4PBt_oGqg+ zNv6p*AA+XW;Nk~&Ve|ZooWmvAtPKUxb(qbpPDq;sv(hS*8k3r(J<^)H`=rOECpavU z7$0@*BPA#B{F30ish1HMO%4B-a2s09=lHSxv60C69OQh17j>d@>)BXU~P@SXY-hks->ldn^xQhpEs7VqiDI4xWlJu_H81r0b;^s5ZI zn=|jFP3bfH| zu5`fh$(?4idFRbV>o$|W)1D><4Yupo`}*wok?_JM)Y7-LMaTEZ>y z^(7CRWBQ;Ekp(^_;F{p+u~szk8QNNT@xNIBZCWui)y$(wekqAQV116{w@hMu<>p8# z8wZZ@r~sFW2j}J2-o~}Ha~@9>3MW*CbP64;2FY|D{x^LF2(1$6=$i3>xd%QbWJx5A zxGG2d4mwWGGHD_qAm%|uJ-5SnZ={r+ur){^dDy6k*bKTgR!K~>dK^waJ*))Y9fk@^ z8Po9g;DYH(y-y?{@}ip2LuG)pt3Uwx5Qi4p1!^z~Y--5NQbiDAJfEt}f0tH)kFk;& zs2AYTEBst>Zj}gVh-5zTmJU@FKAy>s=_ojIAo#WdJu zMdfTXdUtnePdgyF%rhY4ETfgc*p@ybjycr6fx8Rv2hSGWSwE{!VE|qVtiI$1K2Pv6qpZ6{SWEEOIxI1$>j1;mX_*}}5}lI2?|N)+2l=B|Q~)qUf@UtQp| z&hZ-so+sbm;3qDN5cYqmWFDmmMZ@bZE!U@8@D#_#@b`BtlkZrcGDSztqgSd6wL(Wt zGl%ernJWqc%&w5CzpxlhElr#OsTu5#acwe_OBoG*+^tE2xyVOV3H-3`%;cq@6g20t zR<(SDhteAii;D~+?p{x~av!9}0=vEKiM1BlV#@coHN6F5F)cZ~EbAPg@0ftMQz52B zf-=%uffJnlVuwdEDG;wf%9)5dSb^o4M^8?I;u&8*@^)a*bjjvxV18NUVB}=He7&GZ z8_7ZpC@nGAXHdWa8kbuRqL4ka%7;@oBv9M2_-`w`o*3)awpS4zCk10=OQ#@)D^ z-u+o-ZP%|yV61BOxU>e~HMvyHCBLJ}6?X`S>>Vj4;0f{}9oj%yjr|vS{{)@oA@*A6 zE(@Eba7eWsm~oAglJFbNv-_WKsNr6Wwq0dm^VsQBPpF`QT{q;VUT}_2#N{>r>>vLc zS}^yAVa{ZMJba4<4u;9BRDix2n0e#;LM`c)xBqTB2|ivR(cN#Z>VN|h1=~NsGAH)4 
zYCX)$=Q5u8`tDt%HM8(%4&l}+Vm&A@A?6rN-`@Cuf|SybbMlO0Hv7A~qbHQc3M1Hz zs@Sp2De=a(0gga2c!HF{gsb@$mQHk2{1W(v|5$?qgT`YZ6wmx0W*4}NVSrHl;Yr4- zgzN4pUT$RhpODZsU5pbh=fPh9k9Lx=Ybxh?slbXS{UNaF7S2b7L%=e_#B=QUTqA}I z&S_Y0=L!Gw&2uQH%wXg8xNBMaa^itWvy6QmuPNyB4AaD`&s@pjy$&f!wGja&O=io6K_jY=zV35+TH{x7^$ z5DH5Led$Oe3`?^b(Sxh+qDd`_RB0@NaZm_)FFLjzAF&NOj;tVM+Vk82^*j5k(`5VW-iJ%U3*}Vm$3Ubxmy@9ll|vJSKJuHGxDOi@r{I6-MBslgi1+|Crsf z0_FWZECLS8LR~}3l6MmX`W}5~m>D!URIwd8)eaK^>*nn@1V_VczU<7)Xeor#a?jAvzXJWxFGT$PvGl_ z!$gW&kMt$U2n`63w$-85zgWt~OM!3nu6;It(H1u8QG33kM zaKBhd3B0929L%;7Xu1m&Q^G1jUEYENJV>r9OIE52TL1Wdb`k(ujG#I*XRm-(%n_3j zkbwPA5$Nt)Q@7b)E58N;+pcaDX=~<)!G2*XCkhytDyFSQ$S1PK0n>m&2cu-2>hwJQ zfvx3fi#pUH?1HPEKTd0JL8;dn4C9QWfs2Y&)feNC0GD$}=u2wBJj<{8_B$#&_8C)V z2JhMH&xQ0=2Fhw$Ym{jUJ^4AizL(f|R_g-Df}jg}YurOHJ=$(JsbpDx$33=;EzlP3 zYdv@5&9avqig=&Ve9qMp@CS^^!K<5!c%6^kBW4#pPw&4^ApLQJIxg_Nup-fa(e>UOtWe21ID}X zhm&9Xh5vvO59>&qMoMdv1dA{x#Fuc%ydCKu%C%E5cOAr39m=hBi&v?%sO#=GHdiW_SZ=n7_X3e{7C)~sI1za5E2$*U+o?ZO(9 zXB1+I^Wm&!Jbt`M0N~o&(fq39ynNDb#(eU=zJa--Dy7xk@^FV=$-ZCU?CI9d?PXw) zeQAKi`Ov}>iBG}WKUW!Ic^I+<@Rd}lo|D|bxy_aeew(anXZ!uyz~!;)ZRfwWqmnk8iG@kLS8wDdX`;+i zaXu2$M^69i2I7WHck@WcD7MEgUTzhELA- zv!;GW(suFsQo2*N7}n+S#-#jxv->Rhc0i=9HPfW{cxMQ%RVSnC@8xrG(}1{nb20Je z#C&5J(L#3(eyDGOmzTGAvvmG0-dEb96~8;YCNC%M*3a??CC)czm+uzzxnq0-1ZMl( zB{GKKMznu*?}%sbB4!g@VqIcmvq!@j$Vu0sNm|bSzoxnG&Qd017}i|*a?nEYj^qR& ztZ%Fl%~EG=LCa$SG+;U9;&;d4*&EZUjo({}8rD(|cl!3q5NrHt`GV;9y6dvPiB?LT z=iWZyP8T3wmvzbU{zNd6B+a93G$*)kooYF&pN{6k5YS|4o4hF-i&n5#Mhec$x+r6( zEC{;XG`b#L8qj?-B0?peix$<>h@u|O(>rNLA%e z>lL>B-0!N1a-R=In{K0Q?3{%NIx!j{jSMNBBu5+TUtLzymQODakMyZVe7Ixkt-L%Z zb-dF(6!W&D%@y;sc!VhaA6SWDKmHG_i1$H`-OfITimiu@bU&ZtVWK6Fa=GTP<3Fhg zHFk@CxSn;_vaCV;;EQIZIgci9ECs6)K7ksm?|@OIId6=JpAaT$@TLk6gcYCkNUkQk zf(OFE#=*flOSQ$_b~RIVzJF&@eh|7U%VdO2-uXagBB$}P`X5w7SsVq|7IsT*)5ikW zr1`3wQ=TxYrwd?rgmjw9m`OuW!-7q-U4BFm_^#ysA5^#FZ*P-A__{bgnMKh~TWnhm z=ULHm{&X5%vkKxnQ#~~}z7lC`lx&sI$THT*GNk>^v)86cXhMCxe@#+;3wC@fkr<7b z9dqIa4x%ThFxp{@v-<0c0755|pqdxQ1@jKZip%bZ`SXP}DE%sw9%pVO(~D8pf%w}V z5|x2#?6$87>{LGy1w(ZiOrd%ggFar>Ui{J`bEC8>BOykj^0D%Fw#veIS!VJ@1_^RZfbQ94}Yxk6!UV#mKn8jY#0 zGit@t(@xU=e))opV(@uR;`bBbcrqS2u9`R{gT!~gK0;p`FgeaUsCyTB@Kec;*RZA} zn#gsgC@UFZ4!MLro*lQ>f=BPzBQBrO1&wSY!59!TGT&5Kt6zP5IVOqd8Ix2^VEH7F z_Csz$?lr+CZ@tvr@gTQS?u&JtsFhEichfe1j6UP`%iOzkb7K>yH`&}Q_c^{Q53o0I zY&w2zcrIaGr0!j7JN`$Zm@0}UsvFY+HvYpD@(>?O9bc9N1|>o#k%_%2+`nGemiUHP zSG?=AP|P;h3}M>)O@xK{cy}egSGM)sFQ_Z%w52V&TRQAx$486@Bq713vSwyID@uga zA`>v5ZaDucRNM**!6(IJFwF+>la+%LJDD_@y9H57?j&&$PIn2}Co2m@i*u2ejdfVK zP)mL<_j*EZLZP>@_R-|ENzyf4$zqFTymGpL08w{{shJEf?Q<$J0(?x|kT4i+!^Hm& z$2yl+q{*a-iImo+E;#uG|7GM4JH+YsCYdd@dG_RxVu;J+6qP z!;5Rgx_!{0SX+*X@7>MD&Q|^G`|`-Y=vV~4eV3fq^J<5LM>j>S2RG-p;#fFXjnYv0 z`o&>eVNP8p&n@rPs~%^6=KF4%hJWyx08WJ7qhlQ_={B|!K_WZxpH1m6-kiKSi9i0} zI$As16+A`CWAu@t1VTXT|MHx`vL$6{yg2nkaFsWobSz#+zl>?pxOm)(5AW0d0i6i( z9G=M-MVeFTKXK;EuO{Lg%T_*84Do*dr>8_qLUh$v*j+MdbLz}m*E z&|@-}HQi}F#4X!DE%aRFO{W$|mz8GaM`j4ACnfi7S(e{f#-3WAbsa5;hU1oAjx(FT zM`72GoWBQN?=ESvz`qbA2-q_Ctlsp_whVA{49YObH8N@WTbyD!9ltty@kX{a(+HP5 z`jyW~=l+dRf-QKci>?=IkO-W&`wxM=(cPGNKmH#ALwvcF>wHibesBB{}liCfxM#!T5v`J!J@>U>4@9Xj zPPPKq^k`Y)aX|P}dKeO%d&W^TlaGgshev`7L&IuW$tE}YEl>V04yIW~U@)UGVmuv8 zOcX2`-)W*LJ$|!D$Nls4boa^^3A+~YK(7FRxr4BcJN6;NeL6zKtkmWFdEjiH{~^3O z#VJ#&{&Oo&9SH|3K0%;seb7lq={|xIih~s#O8Owt$%g;Yq8B5%>^I44EJ?SkRjN4H ze9@>(EOsc~Lso3EVdJ$DUUxVT%&cOZf515MBrYLkMFi>^+?eZA4`C z@Fya4RPB?;Xz6&X?Me+5=>?6X{sOTuqh(?ooRrP3f-=W`60NyT7)Fv}!Xafq>75J{6 zOMTIsE}VS1Hp@u9uG;^66{qp!ru;gWq4Cy!uO>|=!B%1^x!N&JCj&k{%vHcy(44sU zdRGZTb)V+GEKKIN3v?Tb{Ruk+nRW;Iq)rDG=VsDP{QhpW-cHkr)Z^l!(DVFKA&-Qx 
zij+OErW|NH-S?l2xtS|QLQRMIzr;)S0-p@?F(wQnXLdNL+Gawm3>^(*NdX7}J~~<$ zMnPC`d}qAN-z#{rl(*Man?#q?cWDxgiTuX%mx3|oN(n6Lp)B`hMSP3B32Q3MLFDX| zHv5gC=nK4rTYlhio#f(4xKQMD{Qz_`K7>9(FuiK9a@g~e;*M99)1#rgD_^ew(Y6*z zUoVLO5Z8`;&jW@s=nQCYlB7-rmyeB?SMbI@w|U$iy4&w}ax4CAIhDF4m7oWUH4li1 zo6^+l-6tG{KUz+0D>?<9nY@Y8fL=$szvfa}Z$$LfX#SLF(T_Phw?SJ0q>`=SU8e*i? zZhWf!(^Y;VTd7d+Z!MqtP_y#A4L17xftlCyYH6R6;kH_dWb&dvYFOK{L%9NcDSzZtm+qC{DQRk zAuq?pWxY|SeOjsN&#g5}K~@1f0XrGxoac9gL3eR&2cMRIaEnjQ>JkU0(fa?s`ir=n z9LAxS#?Jo|3q7$El5k8*a59wr7I-9eM{&QfZc5ZP7S_)k+$lW{gB=vCL#fiJ8-5g% zHaCm@>th&Vc;z2y zPnB&%w7(aN_cVfgykZd2aQn15JyH=9o!stO`IKl&-l+aF<7Zpq-x~2W{b;^!FNAGs z-xCeA$YO)Y$Kv6N3<^rf%738FADN1)=}$?xH~3S{E*)MI=PCu zd*0QP)&`5Vlb?hf1s2C&H+9%`%)JJ+Kqx_oOf1DiB=N!y35Q-@6iGHx{T!%Tp1uL@ ze3i>?J4J;r@JZFQ8&-%hNtyIlM257I+iqu_0*+!3{Fskl;77ZennOYT%27hWicZ`v zZB{XXN8r10Qs4r?#>@?t<^dEJEvx_|jJYE|B>q<~IOTO)B-M7?zs~;G=I%$i0bRT@ zAuEJwq%_ON|Mpbkr4Dtac@GiZV;yQe*B`3BIVmTyYruBmn|OHaq~6bVh{7lGnhnH5H}J zXMfjdcV*vL<$D+ffG)il5c&ioYoRMFTEx`{I&y3j@wYFQz<71r3*PG zp5sDVBi)c#+SjS#;vnOS~AX758Ot81~N>{k#V6{w$8SMr~5Eyy%$I=JKrj7#+ zWpPq96j9WUl1sL zVUS6igwG9IYi#9T^CN!Q7nK{A2TZT!hZ{lC`@=x&gm7V_G?L~tLSoU#eT&Szg%!O+ z@@24f*V|E6R}PDhZ!oc0q+v!t;33DsQovQ2*)+DZ&3;C%lsr1VZk)N1>{{hIaPOnH zcYm;=x_gI-(N;4w~pkvr#`=;+&YPVvUM>cN-O+ez1;;O(-};y^p`5! z?W;_1KQlYmMz&%w(YzA6Jn9#m2lRu%MqjXlYt$wSM;zwWwhN=OSpiXEg~*pNy<^9D z#%!db^d+`|bf1M4=K%?7aOnE)m_V~?YegVvD2&&vrm&z8F%1TPf0|4Eg5yi7p6d&8 zorNeL_W*zKx~*BPM;210Bl9J;j?uP%OY4N}(@PaIM&`#3U4A-(+E3YgK0Rt?SE&g| z5rtG7@_BO|P%j8bV>0HD|3#qnwXCb?#Z-_Q_(o%TZ(!WFD?>XprY`cVdY!0@l<0cT zAHi?uOxjLq+7Pp6!v3M1p&yfIWAz|P`2T1T2tYiVdfUNhSQ1CSt&jl-u<-S8>5BY{ zitPM~^=wO}%iYpRsRPgDG2JP`yY<<-Q^c|E-@P~qbXgYen^R#g?@{G1xw1(QzEPWX zKUVxAGGXud$YO88-gm^v#j4EJ#T0rnD4rYhTONz+3^jrv9GyXaFmlu-QP4N?Rj;^y zy-hS-i-#Km?U8d!!&37r8-M1Tw4igt&$k_IeVL;JB+39c^sCt!PR$_j3~y(e+|Y0b z9(sSTS)fBoJ=FP)mNFF$P(Upcqn;3A^@zv+z-v*NTO1mvLYeph!6sjC@BFp0yCE4= z2M@`I`G=dD#=%UPD(uZe+%zM)C5_+Wy9)$|qV?#Ha`aI_pHXBl&mV?17t%u^44KSZ{tm>rlM5OW{zN~%?Eh0w6dC5YGeYBBWB@11JieBZJ%y~OX zgbESMqba>%=;wYcBO1#}ydk|r&sL%bf_t0>_2Jpxc1vIT-Li$YAqY0K zlMc~Be)eAKc00b`OJg>lG|#I|eyVG-Ef!O6${Y7ktjrr{rJ#zWL4OL9fiPjp@DQOZ z!%W}a%(UHNX!y-hOsf?>m`JQ-DC7j+{jo)>?of~-4)~L zDEW31A$0>6ARSb6%0RHm$XK{e+B@Tp4IO5o53$kFdV*JFc{Crga7A&=Q#_MTZFK&8g;qkKELel-{0=_o6vnC!_o>|>?u`p1y ze053J)KQni1pd5GQqr{MFIpTPsB0X_y2vvT-irZ*FSb9d&>CA}U4nreUE9WpVwe%9 zH4)sGMke5|#un58HBa4eD>A4Tv@XjAJEyNUZeo|;g};*uuYdF{sb zcJxo_%vuX(EcEGV(TmGQI6s>f^fW(vI*;l zznPWh)$Huop>N%w*bgBT+XeVoMC;Ix=$c#uVWBW6L`mca_GHcLIt|qfGI`rvWQ3t+ zea0-Fbnv262DsL{hN_03Ym%ToAbyNVoGn?k@pHe`hX2|BW&uXZYz1VgxI($vo!umBClDCTy zm_CBixGV9cghI^t%QC@YK%agtod^!?_O~ZWap# zR64|HyTs7B{_5)4Ztv|kiP~`S2aOD~Z56z(_mMRlYf8S5{W3M+F4A1s9a>~$O6cJX zT1%yu#H+GzU9y!71SV@dY(C~H84!RnODhmuJ|HGQ#lgY}fAF{l`Dw;#*i0{|bH26S zz6v^Zn3NlbEUEgGhnfNrDGr8=FO2Bl06K`+b(((hH!GIYE(a`+8&1ws`}NhFMcX>g zSgYH+t5u)Qfdn-Z7>#wkxcS4u7$8I=W3Og9cN&S1D4Ov53ORW>GCcneA?V>ytjh!M zyZ8QI_Ztwxfn)VIQNSQ4>|F@6e!mhXMhWHeai7$M$=%`k;qJ)gX*p4eh7iWq+>+2;gfHI&KSQ(M%(`A|WBzHGL5&P3N+ois|8mdp8bQO{i_%b$%40dQ^SBN>Sy-SWRPIEJS5hs@AK0pe3 z2n@Csvs6wfe%ClA*~f(02w7K^$0T5L7xyu|AKor*Of@DIw-aH^O`)q~0wSbVab%eNeMD-9aVgaLw`2iM1C=Lga z|F9356vHH<5=$%9u_uI|U~ZayZopwSeQ)ccdQ0}=SMAe3bM@!FIo`%JD^Vfbt&bZT=l71{Lo^eJfsVt;bJL3{u)_ae$Dg26Wl^9+n?pUA zLrBb1S(*u$<#Ob@OUV@eIlPiWely#6lN-y8d%E+tI@Da;Ih6w)`YruO?=<0I^KfbN z@ZCj$+TKuep!g(;v2PzwGgIUBP&L-1`|d_a!qwKS|rXKO*JuY z#2-qh4wOxU18Vkvx%LayN2+00T$`3eArc>8T`buaPT8%+tS~LQPFiKEdi#6VV_Jh3 z+kO8T1+c5jXEBpa^C2kth4azx<+{G7Djf*Ecjsla|#+fCS~k|%sWjDPxl zd7`@ZU7}h?HsAqRMAF-8<+=M;nMF*7;xm8zQ;+~Z(8il+U-Q%266ZIE?m>ysQshs+ 
z72mX?{}?YDNTCK;QO~AO*eh-zz)Es!)93FT2*rG6E z{@b^zU4kZ$PWCP?w*L4|ri!GA{Cx8dH_UsAj2;X$C#bkSV9}f@SjAOmQ_?9;DkvS% zy{|NGA?`$k^c$m3q0e*5%~9cs<}z^3zy9_c#Jrk_4xeC4TIhpGwdj`rJZApmQS}G! ziIUs1kedl3eUog~@49M3;PJ%JaEfW=>0Xm0)zq3A6hn3zdvuPl9%cTr#(gLUkz*7q zv??Yi{f;9ZOL=iZLIN}(@a*C!NrmCun2%WbN#?a~V@JSiPM zUzsKKHtb%RDjXN^e=@fAeqwE^iM{r}ayPT+5!XDc?9DFHJTsu3=H!X(gVX>(ox6Io z;M$|yjwx%hdr$(P1%VC-8H9-AWTpLB%;zt7!|HA-X<#SEWqJDrB`+%0pbAX)e3(7RK6NOqc3ZR#!qXFvCd&1^g+yj~u32BIPr?vxc&7oDKIS z|176gT|GjKB0H61`pdQvo^_ zQhDZQlU8Q^#SP`><0q?2tKz1=C*IPx(}g#Z$K0ciQ&};^FppV_J!(4zZ)q``__n|U z(?+A%@bCAL5cOzF2nbK2qta=>JFvA$npaFi(|b$?JosiQ7_CZTsH=WZPhU%L@$lM< z*km)(iBMhQ@$d?;XS6~gDW0UHXV^3qDm$l=^2=wl_Kf_>KGS&nZd(F;m}eU8QrW6^{6X|p_rs4u8-#2G&; z@+~dF?nmnagDX=5GySdqwyhbPgo>t}T)a{Tcn3Eg zJE)E@Xn70IS=U-qN&=Nxfv^M4@%AQw$e7ed69AP~>Y9m{|DY+q?j+O<{TwVqb1){lBk^JCGp+|UhZi%^cwYhXF zz14?s-`o`e0>byHc(+<16@T$TW9dyqL4r%4Cw*wvQelQiCj}WB_l@54!12zVlv&&jAX8x2x59M`=P3@*IT?aQN&_s-`ZN^{^scsw96@rwrgX<9KL! z;V$`nj4#&|_RzxPta=WK@_^xYJiW!pI%!93&oPPSy*mm`b#icw7Nm>H;k436o1jft zbeXe+^}M24IQvk$4dLO}4dOhC0x-pG-iw3i6bA(xTXAwkw9>Rw2s(QifX$Z%2syQyed6wYfW4oCO3pvwau!( zr!uI6qwxL3%lyaSFnqZg27vJff&hFaUqS}_Y2Tha{c0PLJUKQqf8>5Skami=3!w@pWZ`(zU#Rq{C&STRqE&# z*QOOqEsPGc={I;vjA@{8FXSudKC>_o5)c#KlhR5j;Ut4cocj{ol3HTxd6j}Oy_g%N_IQo(1HCg=i>NTsL*$ov*~s)W+RO>Y~qgwJ^i-I9KqM+$6+WV><22k%(< z-N8bkm`>@Nw7HXF3cfA^7wNWX=z(h$pJs#uhHV)bH1n~yyUTrvu-7jBKGBtd~5`tn)t4fKh$QK%g(*T;<|` znyp3U!~+&a6cM$c1cVmlFrX6aR*q1qN{x9HUT)S0y~s1_UYnv%dsNF_dsuZ()Wx*; zQ8hRNb!Fqvf+c^Xb#H3Aq&f|jiH-uGJN1U41nfUW^BRAz}9NV6^0)@!)Uq=B9+sMqIrkjLcv$zc2 z*pfKsDNsGa-hRCTbl|s zq0G&HE_}s)Z!XsyEl-;qWicBIKV>i5+9_WGhTfP{2oLf)E65Gy zNPUf7c!s;AKEq(-I>Z$4MScL66?KDfu%V`S`FyAC1UMt_MX zXDvv%Qke6042O@vqS7tD)8%r%wRh!BCnN5c->S>cH<0DWt;a{0a!juJf@9F;YWv>B zJeqFM`^Vw$DINM#-SA?LZm()+k9q?;hoGSr4slp0`dj3FDC09=l%O{TTh-GKB0{7n zdEA27)-(Z=OKv4=We4XAVJHIx z6mcyO3Eqmu$!Be2$wd7=QgJIC}2)CXzh-E z_*hSufWgDU4$e>dAZ_c}B>h**K^gx-&i%0Di5MXjF?YD-@6XW@HjdxH^`(agE_KY> zI;r{YOg9}}dj7cW*-la`I3Fyl<37KC&OwSAw)k-}V->JE9WV8ZJBaTmusD892^5$FNyUFFkkU5*9X`|& z=d5SwDU3u+mJY5(8Krbm{L}~g?{DURa2GLin zft5dwzYmMpaO~Q%fi7rNR0Krnn;Ilm=`pJwUT)JVg+bWt&aX7W)JkxF30z z#0?6Rj#u8RD~ql41f3L%Ll1XG#wYzRo)~%;h-jIar3jX8VpVZ$jP;xTZ;WpcfEdFR zU&3h^6G?;rMruJHs%g?Zdzf^n*qwQb#O$O*Q--Q3s;mu5VeaPXD`%-BBRfOGxm4^) zJ{8)gR3tsJq09w=#r8bx~PLB(x$=OY6oF*_DZ=5Fhej&dyJ$b#7 z`Ni@JX!UR!w;*0Q_8#>s4Cd`Qd*Ic6F~`R%VikC>!p$esVFHI~FPS1e=sDny@*Cov zyaf>9jiqe31C;bih{HE@nvloO?T+H?(SWRMctLrnyMF{V@$b7BFQT9Rs#n4VWu zt`u*7yO;aPP_x#tBM~J#0R_c%qSsmE!f!=k@;MNdrjc%e_}YsFv-XS2v`iBV!)^!H zH{2=Xl@bND_pV^r6<0fK*jOGvu&oAt5KR}aS`@g{sDjcfD-kwf|IbSF^%ZE^laWY9 z$&*1kAMGxXUhqfB$t%3OW&kmjIhk%hiVuQu>PKZ6?#@{qsH){hha>uX7UZa?00J}E zfPJD-{w*lZV<@2Tj`blu;8m=Kes;|UM)l5?d7`98H4T^vTB9+la81^Aifo_lCo9*) zEIpb##$8Fw_Xp=0z1x;Mb`ra#IDh?~7&pRH>im%TDLd98G2w05a_(9m6kC35arV#3 z7r>6?jP}!Fs{=D8|L$hl?IJ78RP)+M&1~tj#pgz!r3_v1ocs~#fC%g}*Um(4r)oMU zsxGY+dC0N&STp`*yGg(Q`C4aL{60%FT27uHL|+E_mT2Kd*vq-`(X!!vis7HZYj?f5 zgD$Ie%8%mG)jBHQnI{v5lHBPYA?nlC#-4yYCB|qE7LHeu8~iWRA@{vJ25~&!?KoSs z7<;N)R1#;U_wt5Z(K}{%j|kC^0GopBqB~)4fo0L2IvmG?1>PLq?UX7;>~UACG~)GYRgkA@mh^5vvceQ?UG!(t zsQ1jDR(c0*1@9NdU+zD^>>LGHIO{Sis~Pa}(kLq!d{Uc=l?QrH3ID6^h-ia1y3dlR zxZfBHJP}*Fd>#`K`gA9PvDmxK&tx+#R!!@Vz+R6Z_xc0+P5t7dOjaSr)8EfMF&9SD zpDazGwYwt0Yl~YAzJGG+9B&?FnrzC7Z1Op^{mxt_Za*HwT#B_XPLl@M+`d3r`Ey4x zkM3kK;C_8+vR)&{VtA_S#0d|Vv19DJ?#Nsvq~!dMaF};5A>kSOU8^AE8dZUlFT&=d9LPP}wDQOv^#m8BnuAd6H-c|K!5N9&T?Xu&QGF@< zx*CNXC<1Ut+$26*boLtQ5Xa>lwSy6zB?Wv}r6W4@OE}tEL6=o6T$EOx^*c05*qKp8!#^jM}2vIJJd0pHG))OOo5Z4P$Y^O;Rh0%y{Zoael}|7 z<>c^D(iH(Hn(F8cLt>T6mz6K!FF;<<21<@^V!$!oB^R*W5r0zSDYX4~zSbSQOMN-~ 
zIWiOD@N52(JM(gf$)7|0E10<0_*6v_r|Ko5ZR z+M5CYzN7g?-CL%PKG5wYbb_fS;{n$7M3YQA?>!+IAKnf2TKT`M(phPA4r3C zrx#meX3$3GWj`<XPZ3wbwZJbiSYtY zdj4R+yMKn#4XqW#(hUK{!WUB z%bIxIDMEbjRb2*s5@afZ2HDgK1#>Bwd$e;`Z^P};+7jd>aMIo(6JZ)J=3k0YY|6AquGeBBbt5+7ridP_Zq zrI|DJ)wI*uNV_?2oaOSKHEN1srN>Ih)Z2p4o++7MjR`}WMr5eNm7#T3u+1T33 zu|5uG2l1{6eR!%Hs5hIvlG%2#SKr=cvg3_Sn;%R>1fko~O z00oGqJ@T5FDFGDCEA{`veyXr9a1IaH(C&i!G4m%iHK=4U)KIN( zdzJe9%g6TBS*w#C9qBH0IkjYYE~-yzg?(K}N_DOcQ1~eg9-1)QQ1<`rv`(T@Ei?w! zvUk47XRFKqp7}PDP*PhxwOrcu`YP@6$SCoEs8Gs>7}?2v$UJ-ZF^gwcOVl-738Jlw zMW*(HeSFKkX`P=znDN0@H0(^B?Rmgb(QU0oar z@}z1=sk+0ciZ*3bRU+25x7!6{6ttn%=7Iq1kOPI;e*{Dgy4xgYfAVrRF?cn?X=YpU zJ{aW!2;glQ^f4lpJvt9GIT0GC;&m}Iyv}DI>dyz>J+d$1Ef!%giW|2FXb+3#QwCGV zW*&K(A9@5cZVoRx3=Y+ZssgkJ{Ch@ePEDf|Pip?B%SB(!O>`5yv?hbRgsYKAVlVl6 z@N0_@upeEM21u@4=ZjV^v|&)(h`eaMA{ORe8v0ww*VNEu(E=dfzsoI5&>ipY5G)Z_ z=!ypIbnz+~lUlzmhE8koACF3Y`j=HzNqyrttfOnNYgOiZqv2cykd^B`E+ANPQ*M71 z5Om=g3*f~yQWrx;C((Q!=X`GQ%|@KffDphEgo+5c?>a_N@!c8<%k_{jiMEaiPmY!Y z7(?3QWB_~;LG*C2%jAU}yWr+WLoWi-eHM?>3OIc`*=OBEP3SF06pa*oyu;<$|M3Dq zcoe#?5d#^**CZ-$?$;(h6P;n1Ww03);?Ab{mL~|`}v9A{EQNE z+a81)tG-YXzYNLCj!pii!lIUNpXn2+NW2JUn}P}RD^|x!Le%*=2>W6mxaZiIk5?XG z>72iLBZ#`Uq~$@Ci7PPMX#w0fjlJx&gB#APWmk617rB+W$SLa$pwWYhpFkmRd(3jO zO&FA5f#zif{sBO_Y2I03{srw&!Ku6y;JtvUUCWEMU-DVvVYVSr(X)@>`cw$?v2GCY zhuH3PG;7N6SXY1G!T!?KqW`9cex8z7pL%44 zWDCFB#aFd{UU>5^<6&3eshSf zO5CPV&Z$A=v>=I^KfgBEK-1MADU_#)V9U=yfK3dTx}RhIbvGBQQ-G!nCkZ{9LU@oL-v=r9?gb56y|#dM`^!nBgGFL4ZXAh;XRsL?UKae3_?mcPX=Z?~8ZqkAhgGwi>jYt*wo%JgCuRBPy)INC=@} z$^PqzcZ{pDFZRFy$k&hMG(cEu^Z1l(eD#Z=ga!NUw6Y3O&=J+A;gg=BKHJtFXW8e z$UbhKp`fn?Xo=h=g_Ds4hJ*lv8AO5U3n-%+gue50AJ{lTl^9316Qt~Lx^7FRBiN7` z=`@V`ZUN4XJH4Usa6Wmj?@*Y$*DIV|g0KzeM{% zat1`#D;nVM+UcG&Dum&bLCH2C=VD~7!4`llZ1mn?k@dTN(76ZL8VKRM zXWLmR{^j_+I zz_Q8{Du~%P0H6Zx2Pj|LKn$OM?b|%GqBAU+&EO%RR>(zid$B7#O$DM^c@;=QES^S_ zbKh!x=qk7GfFH#(Ar=@*#U!1BA;QJ z=Qye$|3KHG@LR`>g2X%Yq9g&B|#c zPm4}Y(FZ}J2D(4kVR%Cx$feccFW$#T7j5gOXP`x2zHaFvTar;QO(dyPp!H-#RUs|x zw#xTAqOPS15>M&3LyAa4hCbuuynOJLH9ozAVqfkBeNt$p=ff8?1WG#2F2u2v&}*)y zGd0t`&Q;!rKV9aI0xvM_TeFzjP5<&%;LC$C-hXVxr>;nMmsxj}oS0i0v`|j#Wp^G} z)&8(z-~cUD?Ag)&54Jg#f`s6yxxVBv9a-Ec^`6?*&;@i=YwAFzFP2P9wU|RWq}DZ! 
z7D$2tj{kaSFS>7reTRcCI~jEMx^Hd3Gb9{I{t0ou?c{+sP z(NNJ-zK*crF(K%qd6N?je5Ig^N(}pSI%JF0Y1>`bmc7PPIs)i>?BaH2VhL0uM7Y zO`%`{jd5Re1nZ%3yJPBgQ4lve1 zLWv^tdunKGtouTJA*6mT5-xcIUS3%7z)QhBLf$Pxi3K#Z-t2zSvEVU+i;;-2)GsCH zX+6IM^|&SH*2~M!TXakP-@nXCWNj7 zta}gFL$JSj>e*M#w6#vbZ1Au$Epe{O?sgtcmv&L#f}8?SSqeQUslb=;FZZo@u4pwj z4H!aSca?%Z$@Oo0)(csxP=aXRFC9a2HaqKWG~Lvv#j+JFwTBJndN4Q6<`os3 z{4@(Z<1SvZAsBGHcZJK@Mv!w`h%0ymmTMoatsR)lxVmOXz3GQ9erZ^FwV<+;oZcHN zOpd=`2oO|Ptfu9lJkwuXaOctJp3f{+&#inpp`Zqvz{$ViJm82S~8X+=qZN_bD&9Wy3?H3fTPN~ApHBvA& z9!^b7o)*c8=*TB;MMs@SR5TvLIDxMdStkDXk?Z`Q#?UICwgOjGhO=sRGR8*bABazerk>Ee*gO z@#JNU+SVC@2c9NZfZm%P@D)TLWRGmh%cc$VloiD8zwHd=;>xQ~x4->HULiuDL{}Lo zs=$8m`csO~2YU!=sfAoJ@E~kTu zbwMPQb}vW&xUYMa2>W8cTyMsHDhUDty1;sG_Fq#Gzg_>hwr^+A`eD8Nx2E82Biyf1 zP3*RCtYDemdp8OmF4Rcg$mUoTQ$>MpNt5}8-3|gWIJ#qpfsae_hV{o-21XcOl$3CN z6>%fg12SC67dZKw@trY^d&toAYj_?Spi8OAg2FQ-(R1sG?TIzL&A|$VNhCt`?*fO3 z64??}D1X^iG?jCpw2k>7X@d{%D}E2GzZ;<0-0_7R?opsN!x;vkRn)Fv%G%=JMW4yw zLvMDcVl6r%xSDu1-1wQgY^Axyn3tYenmW+4^JZ$mV5CN&@v6KLXvQiI&cFCW-F`Ww zYUom{4cHL4nz1uj{YzW|Xrlr}0A-b(Jp~EYw_&r>*;QIph#^fx5?FfYP!qzV*c=N2 zTKmE%?N+HEj>YPsXZ7j)mD@kF!o@%Zkss`lr_;SXxz>yr$YKI`O#gO+Hc*Br7nz=AG3=u?ri z0Ry;Y+F&L7>$qmjTb#J3tTrh)1YGQLIG#l^;1f!4CUwSFdz<`Hizi{((|0GcZ*#1r zR&U?NtBQg5@P-8t0MHAfEsvY{@7W3x+JVj?CBr3-S(LQ=#nY#a&pBsLNQM{nIenQ# zQW4Hh_7Cnzpvx=M^y8$*nu8J~+o2^DyOcW~$uw?Cjm*tB`gDtsyJWy`fa2i*!#vs` z$y&c2>wP9t?FmuN*`OqXgQEQk9uFZ2uMUqUS{V1E429f)*8otB%GpT8xo*6}f2!*} z?|CwOe%W4*$}1Ix*>m8AX&hwN#XLDmF7E#7Wa`Q;QpyfesOyZdQw}ay+>0)GBPlz^ z>WdGPEto#j((!wGX|+wjrRISJ){9G^vQ}cVal=}t0TbUJkF`#07A}<*EgqZ@DaBe7 zfE4ajTLaQ@)2LxgGS;I6Fyu=;{?}nP98ie7Td!m9+0PRuq%JeFO$zj3Az4Yz+dq~) zprX#H2@+8KI@k)@xiNp3h3>vwX^p>JdT{$%VUexL6?vjjey4#1Qs;mA*L6wteqQ5M zwz!5bU~qMyVbhessJe_J6r6&6PQuxYRB96@?#s%CvnYz~sc!9%P~hM2DI|1n_?7Kz z$Jl8U=FfDGxw&T1&%wnF0$Eo1t#9I3=a2%WMEp?0!4C~L+R<%IqDpy>bJNmq(c{2W zyL~wec^Zuu8lT5K zg`?<;4ys^np0=Wyvw;A%mk2dOm*?h#ntFEBa??(Z)2zBe| z07qGFZ%r!jPOCV9BOv%Unu^e3C&9Pp9N2l=&W2RWmrodrYu~Pl3xKF!9Re)7TB-tv zpeUtS1{7m?rh3NWaSOWI!}kuRW=3Xdz`FP(*mcb2R! 
zUvAxxfUwz+Y|*nCVztmX0OP@s$(0OBRYRNf<=0!wzEIX%w}cmGX6TK~GE!CX3@mKE z<(@r^-}7GBxB0Yt2XkPTrX zB}TvB7v4^85k?@f-z_4#ZoEL=NWT0El1vEK^YXam#xpshG=vD*Zz|uQ(JbVkt(|@?c`cvu>{Y+_c_^v&CoMBA zS`hW>{{`LU>&xzLC)30lky7;nks?sIbDR_l#s5aLN`tE1JrKNJ)Gqz(<=J3hckLt3 zbS$TqkSKVRk8Oc@Ow*?=R*|LzHl#*!lbXd3g2f-M7NF}fEpjE66aaG~m;*EPo_X<6 ziw=Z+Zq6y&xXleEpnggI;`0QYbRL_G`y{RJr@v}^SP3Xc0FtYgkQz!ZdhIh#`wBtV?h z$#YL(qv7peCWb&wh5e8i&p-@sRc7N<4R=0R4D_BpsU#vFt^Zkjl;<$w_w~s-<2J(XDvjBypj0` zYv6!a&-o_D<)u(RzGoMAtKgA;-q2mcgaM?v& zs++1M3P2OwSrBVl{{I2t+QTnWz7R}1B?#rF-(J$RytOnA4!|xx1m*^?2|gA0mFTzG zb^2wmgpTM@z@Olgj>(IHGdb&T8mH|Su3P!h;LhJ>B;EK#V~>U!#3hK^MqsCffwvlk zw{>BJIgG#Ta#weO+KJ3L>2k5_St>J{_E~pb$DLLt3FkAjDD$jklsO2Y^SD)FOQLen zg=d$6fWWwUI~8$xSM}m5h%tFCtj*6F?L`$7#2+Abqun52c&9R~su+hIVnIsF6L*3mdqkOE}shV7}@ zOhVyw<>@<;H?mMd-a!KG3&A%tS9fF0TkoWwoVz;!drkLzaLS4yKfvm!AFu7g$auVv zW&rFh&ZH`lfiuy_{D}2|7f`%=Z^XzI)XGbiNxFtRQA#}Ug4s{13?+p}l0aY-j70UY z&ur}B7Qm-=Yg(CH7=S$Ba`<`BEpz;3)@r8k>V9h^;IQr`G(Kb!WMY~uXE@>j3Xaz3 zSWN+_{iCpgYY*vH<>U96_uij;?IfPxnHnZk5N#RZe*UlFg}I+OLvcSaK_ou%G+`LM z*J$nwE`Ps_`QbCHF>wM6f=*04N_BFTo&mO(|FN*m8WFnEP2uk^K1u>F9#IY;oE{EI zLN8ks7_am7dtZKKf`i(RgNH3#^0o@%=Nqc6Ew*A`>(3>b&I-0LkJHFg210|ui&#!B zf>&G-oXuQYmRVRcDmgH#|K`Le3f*?zI$;PFJ055bTD`k<4mLH_dtXdp_O)t$sOid> zLF>_X<(c7BXxZ1xi+n47%6dyrC?APOb0sh6Kh~xr0z=cJY*#V=pIPto)kbFW&*MetC$+K$nbzDqyW5#MnIKN1*7S;nz_)cKap?L^WEDz?n!GuVg-qr-L+s`iB%0R z@(H$mopAE1lbE#?1YfAgsTLc2XTiG?bgQA@boiG1OE{U`ygVQ6w}*dwS~|Hh^K7}n zqj9$aG(@gSK&@d)V!u5T-dD@O|KFk~3#K zL!wyH3qq0w!jC6UNfO_4*B3E=vMDQ%=Cz@gQA8JyManK55aq4inD@g-UXBsshawxl zX{=bCcWzy7&->k#%tzZiBY|-N3yeZU`0$q1nNE<;y`TFxGC5D*pHN^HC4=`P+fY^p zjZ(|Iw7}j0Y*Ca31L_K3b`{XVUlrT$R!%*=^|H$TG<;|oxK7lk=!ATdNJnHHVF9;E z0d_w_JoDB~V~V86x@OOlCc#zwc&Q69-qBq)qWFNN)_CUTp1af69*@9rJ69QRl0_d_ z3Q@?DMK6w~bWd73K^n>JKjJ1$UVQV|AzpZX$0|L(?dX{o&|fsXoLGb*F;~>Dd08K! zg;1cAOR#`MWyt}m4N0&-q@gB?inFyeh#?6=%6Oa#a>!~qy!T`>ig+|M0+k9=<4JdD z<)?upKAL_fiWP$nKA0>J>8&Xk&l{{s#YsiVi~*D`m`O8PZf5zTL|x0}-W5d90!SXA zMPTvlTjb15Fnlpz(Q-UIccKhteLY58o`ZmzH~x=}14EN`+-k$_b`~EHR;?L7)|z0u zA=L-QrC_JcGzz=OtEK>}kdKc5f76vr{=A^M=@B6re@K`dB?O<7^jafZdGb+UC(|Ed z|J9D83x8RL!ZV|-py55HppIl=va(76nLhRkA-hvpH{9Gkz`{QNFdW}ARmHRIxJtN@!mX=ylbuS40D4u(a3*L z$0~oh|KPV-ycElVL2A$DzEY{q#OfWMA@pj|5E$~}jMK5QY&pC?T??2tKE3QitBaS! zlFLk=Rp-yd<9m2L*qeo&hJZeYjqKN6%@HL*Wsvs6l!ho@!@*y{n;vk`s=lqI@e&VP zpmCX_vp{P0TBv5j?p-nTDJ-a|xvJb=u6B+)we5S<^QYxfvv$>hdpU{d&8UU2?HPj` z*b}gFe2)G>Gnxwc_e`NvYN{b1<3w-v)#3+Y1{CWa)ouBnzf3fu)NBwnisJ^-_!{tt zW95~rj)y$c)=}3oP{vGdn@z4gcB!pnOhd&1GcoWf07YfO!Ksed9xIe%m&**IHrEJp zak00P)+-EHX|v&(Z1V>q!QsQh{l!6>qrr!cmpWNJYSHw9x4vecC4zUYc7O02W%5~w zZ9fqd|BW^uk=>aO9E`s}1>aiY3FjN%0xymwB?3D^;-@c{yaPxv?=38m6&j#@!dd=1 zU5yJgQM@AY7r2wUUsA_(=f@LgvL%oK!i)s_nqUfi?~;`muR9X)2_^gp-s@Tj8ikNh zP971E#uhUg-7uG`S7{57Z=)Y>cL{|%K9j1u>s?RS?m3PFD6P~E>tYns?SP^sq42qn zZ978;5Aa{7tJ8vnnlO9|Xkj56U%`q%as(C#D<$A~02~#S-s_frFY$@R4ccsFT_BMB zNZ{lXvY~eCEd?4KIy$P0_pOsDBsTp{RSXN~Fa7z0kHJ3c0;RFefH48s9ER{I@dbz} zz}CFg#@8knKhqFhpw%RcYkxs{QjXlX3QnZ=#6+s zEA@-}dx43x7POfAW!l8I0F45as;+D>HEtYCDVq9&)<$XZ{{vJHZmgAUzIg5+91@~8 zq|~Vm@42y@ykLJ)CU%b7mKC&mnRynB;VOTdmsO~^*64eVEau=QNEwi462$BsU1|iF z{M>QP4CwX`TBQr#sy;g7e6#r~ux%3l0YQeRe%!Y> zN4#a}^XFFvYbN&sGkz467p>e-&BO|`B2#28wo8zHjoDhk_)s?#^?$qonKZSujRIxt z?L6`)ENlVE+wlo3?RY2eBIY#QCEAXY@yX;_KwedL>y3rV!{K5hV-D=t6;%jYc@{8G zmwtx~G>3&3n}i01fszb%v-Y5>5^=7-uOM!J6K;(Lq8oP&61*T)7amrXXhF$#(zJDU zQhxE16rA(XQ~+xLv<Y&?Zd0-UZq*u=>0*cTmVwwY-ZbxR`QE$yv;y$m36UbucLFWMt6e+ZqTN_! 
zOA8o3rkw8PnqybB~jRa*>g5JlJWyKu3nIuXs67=hD*BL99=){ST?EWc|VX zu_DvkHm^RDV-jM|f#aQZ4p|9q;~ly9f%q0ouPo+bzK8nHzxI>H`rxxYcg&$q+oE*l z^@jmvTR|Q&?39$!BTT`V#o)uUb`)Q~&{iW;z#+$nUmDEe!Ko*wlc!S=@$;eJSyg?B zW`%-^SMB*l?(gDn7+gU^AuX9NZTA}D{FAPwR-5L+*Q~NbI0y_rE&pJS&0!!US_R_;>f5?|?|n-SfO@vFz3SvT270hXKsN-@=&fN#LQ?hx zW&!^E_x@i-q)z+4?J%I5N;Lmhu}b3JrN$*bvVRgcSP4qjS>T`1y~_3f`LCWx7XVT! zGWZ_`1xRELHMRlNC#@?l4p{Q`A|>ZBB{b*1h`9CihyA#IXOAyFI3sd{pzWmoCuS~q zsnZH$_Q7hkE|KRUYv!GFD$fDviAG91KG-3;s<=XU88g5AT=sBj{<1yu*WTF$c*l9b zKlhUtHd$naG@R$CgIZYDU!;t~<;SgK!#1Vi^W@_A^W%)Ii!E7_+xusjF|b3ln`v>f zN~{%k8#TFe4Y7fy{-qyRkys<4cJ5_uBK(mcxA-Uy#!_J6V9AE3n=Nr@4NXio| z?5MNlEy>;TvenJ#Ks=C8S4h;4w>@{4v(;RAjvQy8ecE1nUV|tCvZRb3*E8=fsDM2m zdU4{geD$%^F3JO?Jm z9JHLl#F0L>`heg#!Lgulz-zz@`EPZ3O`1B>`Cau>mrm|hP21PAkB!FLs{h1(3=mQ2mIRi$$DzXyUMy}#%n0k-MzBV zwUR$Xp}c_KYRxG0GBi3A{pWP(H=Bj$B0}h{#z(m31^A`mu0jEg%@|eDSMHk%{%H0vJeU^xp7ZQ}m&Dcg zV-B^o;PJ<)yBaluE&{DZtpCRF5I;SU0tBlV{BgbluvkVyd zaQW^MTVP`}k;l?m#;ky>+Ab*ht3x-5uYfwwI9wp_GQF z2`8sb!T*el&|%S|er;RAL^sU^`2^={%=hw-i`L68A6ikqdGTE*zQ3Nqa%nQJeOtEW z58wGOe^8CRcJtMyBFv)B@1Yg@$~lq4lxi}&Kf9-{p;15HS}Y}R;6dN6Xbwd`Q4%b@ z0f|yds8ItjTOMF$A&WBevee`Z(Ej2LsiFYAtMe*Yr-=d_XuW})@UT3v@JKSM$^~1@ z+jqPJx6|5=FlH77wR+f^0F0UHwYR}-UR*qN?bb1`P)~gVD#at#QILdOC-MqJ7|_gD zDJTcvtjV$QNe0=$KLQ7K;Bn6sjz0rB$Y7=3Rn;CYv+`WSwmc56hBCnlwuwhVqm?Fe z(^jP{c~_Z6mBF5P>9 z9AxyX_^o!@-Mz~XE@;X}?7p!ES~Ci&p6;ZFaHoW-Q&axjieZ<_Ckz}2;GwgM&a}*Q z!w1jT+c|L22Wgdfxl0P&BBH>s4GIi?eLsLv&==& zcM{QZ{J&lJjJUao#0jas+f!q%tP>xNGj(!fNm`M#z zp?odZ2TdJr{ciUs!FkAmMx;1m+1 zkgHQ)ViBzNC}UWZc3(t6i>x1UH`xO2}YiB9#2_CEFYKFogiF|O8$JeL1k-^3`vtQ@y~ z9n0Ge;SGHYXEeXMRw1uQb`vF`o?V63KZ2R$xv}1{%?G2+AHh@X5Uov{IT$A}wX$%h zd(v83E;{9BX5iP{K~k4(ip(}eW$W9~m|D3O7C>oZbH3wXuL!N&O4z_1VCiXE0}s-9 zW{S>EZJDFrYM)h$0zyVP?l4miHPwDils%x>8-QCJrj5B)v@Z?tOTb zmIEFqF)UxM)7L{@0Y0U`BHmju@jemzuwLKwk~*H@xZIu%O122E3Qf@B_~Wc!Q#5DI zZJKK+NbfTGV2be zVjr^9Nitby(SKk059_p_8ex7?u{Jc*WJRJ}oA9!j$yb~HfRY0P8L`|Ss18PZlpq0u__C$br zV4vfj;+F(Gu({Ii>&@nXi_G(gl(sdZWUdE!Hnt7RpaZySnl_{5B3Qfb`qI@|thU%$ z+SSwob#+ZnoefGq-gsAaP$#HZE2x-X?#3ycDj24cJMdVz?~Tf|1I`U&=RcyxOV2--2eRenZsG@H7BKrqOg5_Er_x|E=X&E-_J_$5|AW5|6%?)(U8mef ziG26x)%c&YUX;!9i{bPqiKx-|k|U0L2oG{6Y3`4aN#vAK=1Tt`roIFk>h=HMR+%hi z2!kR9Gg%uIk~RCjlqFjegR&%~A{2wjHjFL%e#<_xZ#60mS(1HgZq`hcefNK+zW4n8 z$Gzut9G{u@`g*_K@8|hE&s?sGF2hc`O3KB`(6Ahl(xKpP4Mc7?qgqARQtRe*`|X?j z|6VCtyld8Y*T8RFCCyy6ni`w6yl*yYUYEYvc$ccIBkjwkOMEw3uNbBx=;`RDa{n2tubzrlD7ES71GjpR;c$|A=H$&V@ zZ4SWLo>xe)xm7Z~mu$O5ZxzR=!I4_|GU4eYxDTA~jM`|V{?EcemYS(6R5z6VjW4kQ z3tl%MFTS+)?(V&S>YQ8?k_*_r)sNs&0Un^iXvMjsQLu(G^9J;n!2{7ud-t0nN0W^~ z%0O(3mID(3LKH!ZxWHM|jqoNIlwEk1*%;*h{qV9AJlMzD@XCc_Jfk84C@$ZQD-ne7 z?vXoOJVeiuQDRKRbsVF{*AEc;?b4OjC9<=8Bpl;seP4E@uqI~=*w>=4mSym`_5EVf zZC?FRAex4mrde*k%gIa6H2Z0Ij^%bEy=~_mQTh$H7Q_5wvYH(*Rr;f=HTH*2np$H6 zM>CnCNBzxFG29r~cdBtj2wWMme=x^%*l}X#nlUgJfFFDJijA5#>%Ji{x0^7R>(z^WBs|Uq4~;I2 z%6^WBJl==!=J9kFW#H6X!pBm(MZ!M(#it^*A{{HOl zjwb}0ksQkP)~SpqI}sd{N$$5e*zl>?`rGso#)bONZ8ZOp^ra1Rh*nXN6f|+1E0gm7 zvEAl8#DBGIJ^`h%`{TIG$nKBG^}#o&zTKRSK?Vbu0Zaq9j*$uomu#}T4MeKHu!tp~ z3wY021_c)IM&eCY_l`c-%3M_oJ5}m&s4Tj7stk2im%&PsV!UZII!oy?tou@Jrwb=Y zbTeMk&*i}3kvKh=&E4aouXUKcaWLMRk-Dv|077E%DfeGb?pNHb&b)GiG&H`OlZX_; z_v8B!m`gmWeJ(T_Z8ZpQTZ;q_%M50*NzdW2dUmliN*rQmQu)Xpb(JSDz*O14N$D+d zLHmN_8DGR+zWBr4#78w#c#bS~CkmZ9!b}~BtA9pe^sRg;Q4OU>r*)5J7x;Uw^-Qn9 zG`R+tYwlL(yef<`RtMjTce(WHPMRh?;xtC`!I!}wj+Pngu9FolgpVIrXOI4EMf>cf z?Sm6)M=xt?*#etq5$^L!**+@JQ|(n0?v;B`-f^Nw&CL7K&a}*x zb0dyE$1nhjj=e(h=tSi9gG>X~6$b&A%IumDSa5SVQg5af9Gm6nqp+kjF?9Gbf6vol z{fPkSCUW@R(sK3U>p^Dq(T|d0jL;l?wplv)UY&adqZJpOQy>Bxg;m$2Khv9Oo;Z)- 
zF@O~?t0RSEz{xH)u7CXYL(n3w{jl@V#fDLHq1NTMmCcpo&4P--w;0wN z7CtI>&$wlPiw6tMZUWvg?bLq>VGgf#QXq5@Id&7LVm-Fwk$Fo*#>nFc@Zr92xy7&{ z$JbBE%G)cj$s4Zxzo%M-nb6jjj}OI&$9e-a>b|wKe*U2a3&=F}c(yp{YW=7bB}DsN zB^V_ngETc&XOvl$KZtM|Njl9O=57DLsV?ZY&v&E@WDhrd_|@LY=QK}Qaq_-r5GBvc zirnN0gER_9DU~CJ$306Fw%%RU>#%`;jG{2g_%atqr`{G|EIL|?Cbv9iL!913kqbMD z1NY0N8=joo$CZBPH`^3V=PDCKUuqC>=5Gl(cAg?Ue~g~ol;1D{E;>Bj(^512_%ps= z0$IMT5ER{Su-X^vaqej3od-llS^gh}xC6*I(Z%NpB)RH0A%P;o0QJ3?&gJ$s ze~_WAM&+L*=W}OkmgD;+#HEguTax?9w~o}0ei>ys)nZ-aItDLk=85T!2raHyW=vHQ zq0es<^(!uxToUDWavSlQJKuZlMwEy9lkNL~{CRB~)`zu4R+8jwsEQ<~W*!Dgtidwv zDwr2NDv=h-VpkG%(_`TAg_)b4W#F|g@|wdKuEh0ANX72I+mgAeF10%w?H}|cgkDDu z2;J}UT4lJ-GN}E>`pijhba};XH3!QG+k3(2C63;fu>PNd<&WF{Jmxq2>17}$gi8<| zq|Y6lw8Vq1J&XbSna&V3h#Jt*Io&sHN$Ai;oy-81ym4*2o#y+_J}>Oni+Q)2_QZ!x zh>5@(7T@z9R7NLf-j<{!b{va-^+B`LammrJH65MU?+z~qN0TyT2=6@`E@cvVmAss;| zor`%s)IEYFB*)xqdHJ|oMy@&cBpH7u_yRcRB))&aeBg59r(j(jSoGWd9=G1U7%S5tG4pVGgZjkm(Kj8E;TSxk#?Irqgfl#bf_@>e&Mv@dTf^Cr3Oj77;X(8iuHT~k$7S2;_i3vq2Im1 z3W@aOju>eN0dDOY!$Na`ukeGbdE*)_zj~WO`p^4~_Fn6}iOlOi?0iDHY+UB{9j6e! zafaQA7=^Z$27mkeD?>2a@+G@w?aQbAS%D|=uCIUVRqFI2xxB?vPSgd3i+xzl(R>JfGKp6EUnm8p|;9^!M(Z-q11V zoSK33`V!hlp)2k1!#M10(uR>+`u4U}l9-8R-Xz|&&?XIgwzuWCl2b5+@p6txrGt-k z7gx~u|NgWjXjR8o_B)La~@(_5&k;-bpGm?3*68WE^Gn+Pv=sHRqm7Yk5Ua-`vYz*5WisF?Jpqq0w>^4bW8 zIOox#3%X7m(I_E=H!bm4*pdj^hXHXSu_Dg%{rc3#qT-)VCX^qY(+tNPIP_NX--Mkk zP?!IQ1|3e`)#jeG!-wcvYFldewEXbU)G`blwr&jvKHrR(9|?DJ^9q66+^*w<@}KI_tHToqT3yx?1NNfw!vxRPqxlJE=@FI>GZf+GkwPm=%sbk3K*8bb$jH&i;4e z^yh_&5~aXb#X8E;KT}OMrO2OjM$p`o=b&@=13l`O*Vy?*_Tv)D_iaJJKwCZeA#RZn zczaAH&xQ*!93c_q!36Op9SV!X#o-5vOJzP$F-cDclMikJomSV6|NYyd31i3}m%d!C zpg)qUDs&vIEs~VLBi+I(rR75rH#N2NT6Q1ppV17@D_^E?{_i0^0#;0ixi+~2gBT`p zdIUfkk$d=xIO$}XhfhC#FQ$Ic=lcizu(OeFEUBzE2D{Ji_NEik330#Zgrhv9BFKV} zWYbL{ps*yZjrH#lY=3`z`A&Rm_kywG)qJTkk%M<{oeixFM`N)E+S}K1nodJ%k6Vuj zkxfZ~K$~viZlBU23)oqSoiW*iQ&Ft##<0jgcCs=~q8c&+JC672fO<51sIVYnkQTtP z(#aKV8-O$EwQTmcb}SgH8<&ajrcnyTb13#iFGwwNHcoiLtpVrU4a9vceczFwz&!KKsMZfhQPHii@W zzmG4Xm=W0u`C72NqGXaAIu<*Np9Nt#2Y!GsQ>$ZuZpq!sbG0$Ja%J?omD*?#R!F}d z3H88Yg`sVB5Q_qF9FRRux(HacX=5-bsXxc_%f-dIS8i^EZ~~TK1ObsQ2Qma&R*o&8 ze#3!rl6DG+kd$KG&IQI!Yx|vP)K!i${_k(%(-qs#k-SJZh56h`!CoognPlv2FvK5D zN$?T|xyS@jKru6aq&~%eI1t4e`lzHzU2uGC)SHqcUSxBV*k6c{)pG>_X5qjDp33^~ zsV`glCp5LRr*`{-J}($Q{sB);?0Ay8nYwACYo%8I2I!P(;*o^Kav{iXsNcu;!s|7# z8nD|NH#gK(wbh|yDwLBIi|SG!TwQN_-_Dzopg_ATPFaDRP-*UCR+KJYS2=HWOwRDM%) zXbO={n>l(l@JH+{44>= z`rK;(DXa%JwZ&nv0lIQ=)C}|hIHZK6YE9Mq+FE+=wf2Xr{y_`F_PcWf@1zb^q;}SW zBEzITDENVARj$<9xGP7O3DyYrK!r1_#v#mz}4gWY=r_ih?6 zo}S}@I*Gsc$5e>*%`pe-xOP;7^SK*9*~vCht!>XyOJ zPpCiu!}k=R2EwLxirJ=7Dm7X3EM1;#xGHTv&w>oMr5%0!Zogk6I@({+^((^u+Rj$% zM&iVsnAQk~*xBX#4$_zeJX-4_IM`!z-^aaJL<3x5F^CALes6xIjEGITb@$;ssa}V- z=it|+x{c*Azhs-@r=Mp|UA(IPWU{!P;JeAEhv-5k>skt;HS6v0X&T7-5>Ah|SZpyc z*==G0)K_oBU{n+%Newe}&g{+Dz`4XzEk{`g@7kJf2dx5&rS;v>dM;6Be6)GLX*(n6 zAaHZO_NY(B=rBC-)ZsUXN0i0Uo`(R2*G(6P(x_9RhkL9S5Bwl;n83SVbcgEU7u|1? 
zz2G*6(&#T?IzK*;=6-kL&pHRXoYbGb4F0RQzw=``jr(lw=C3Zf%APZCqrE%MfTZ}J z%q{q~B0L40EXM$WbP4lmOOs8hC{jqtLM8kG&px`RKf#NC&Vk=%o20>Y5m^WSu5617 zlsLMKnA-e$8FbKM)Z%^cz4dW$f!$V(OnKn=r@f=O2q5}{vcZ=v`9b~3`lAKsh6ngZ->?$-XU(8@udkd5m2hoVT9cKWwo7c5G>CVy|$_i%#3{q4O~6 z7@e9x`quA9WkLHHqIQj_S?E^a;(Ca^kJnFI%l6cv|Mmj3?Kb+AeM|0{v|@;Bnls86 zG*?Gh<4{nDH1`Vu8RXj6^$)kf@T#;H42#p;5-Rfam3=g@C=AKvnII*Oh7=6u1IV^x z1PQ*8d$Mo8-m{3HESO3VzFW#FZe8asSKjPiidFplk+AdH0(Dh=C7}0Udc(Q(l~_mp zXS{e0D0H|A8)C56Br;qRnaTv`BMjBWW9cCxfb*TB)?*``ziB{wNLD^-eeyn#JaIOQ z5^MDHc$)FsYWD%c(<-_m+y&#=yMCg#nV?&3wfGhr+-k z3mJ{W&Vm=R1S0dJ1jjECPD_ev*WEb1aL2W*MoIO#XJxyg(&4Uy5&g8+%HAlhD$6gW zpS%n9Nt%=i%wI7Bh?`EZ29=zN#FEQLSKufT=ed3BJ3yok&N4Wtih9VsqCeb0{4Gt; zO4boVYDD>(_e!A{bwb}q;<&pVIVd_}q8^9GnddW0S0_x28ypP`O{$De`A`<6T_VB!2%5ue z9JWS>@P-yd379xtqQprH0mT@hl4t>RVL^l3V&vpZQ$^go(#~nn3$rV)e#+X$$$Cbt z(+BWtlxYJ7Bmu~b9#S^gD-S6wF(F?1;5C+>5Gtkv$dE4HH9*%4MLmHN+5xhV-u$jAp+C|Ik5wHwo#OphuN-f9t!CvHEh z?4ME~lA9q$&A%UyU*$dCh=Tf#5)xb(q6Ki^w2&(E9#*k{hh~i}x7ajt*`$LRPQ!I! z(b>pcHWCewulDIt5mknqsN-=61+aENxIWW#S}@rKo%1p2Ugb-y-t225rH%#9gRiko zzeR&)E^aa|o+u1pz(BxHEnqm3diAP!Uak82TMn^Q#e@MQ*Fk5;s2ooidw)A`<*+S? zqSz$eJ49+#ezfvc2dtscmv*Z#F<1<6{q!kC$5fYwn`d1xdrR2JcT=47Zu>oTob8;g z|7xX!k!Pjy(PFtx*Lus~fJR+5vvIx~xF3cC2UbYCWd=50Q-{XDO9pHzaTV87`x&Vo z(}$_hQv*|+5gMHf?(v|UwqjJkGZAM~D_%lm63lI7IwYU`%PxiSS!kJk6vWH)TsmR# z-P;qUAOf+3IFMpc5P=hnq|b?Qpr25wsZa;$)rF*X7rk z*8dhABh(k)&loQV)d=RmxiO0s?N#4>7}C$t50)}fLTE#mBVp<;|=^VJvr{)6!%zdn@7OKT-iE6WQ7O3_zL-c3WSTJpf>*(=X7}r@-RN6qT@sj$wnU43H5fX%%9MVG zIka|4DhKVJQGT>b)w4uVbY3R~Cm1gnnXk_lq3N>Sy}tda0(CifUp=G13@y8%Cg8rQt>>AW5Smn-ey3fa$r(5ZbrvaWp+?jv(vAW+0YxqRPZQ@Y0k=O&I5IRX zd{L8s`0nh=cLAkK%}a|{TQIUz{a9+A7cyUpD10H%PZMF|sip9!%$zBbCf>iH%D(z= z>0QyDg7W3&wfe*D_M#%ChKIfIhxZ8YKSt8<P`o2ub#!$()v*aeMWHk!*?CJJj-3I*m zx9yMihrKKHPZTPiR<%;IQUe4>kgS5|Rj-%Cr-_|EPk!b`egphBa%xruR5m9l(e>zC zyZwPd79L-k?Iv;9qA=B^+T^+>-0YqW7BsE2Fc)ww7 z@}1QEYwK&ht-JMy+yW+s`Zb1zSL#89MnEHsHPZx;a*DS8fqwd?3;-B8UI>z*1cO{W z?Imad&PeED9o8Z$gRvo*aV`7Jza z$KA+ns?v%bUGea>zj=@6MToI_uC}xqTTZw06~i+oP9z!(4T}J-7bh+1We0m<3Wn0! 
z94y9R?j~~#7)^Q?8gR4l-4RYn(2$bpM*ClZsmyOvP1&~T?XvQ{K7V_nG2@;s!cpjgH}?^1fN#6B*$ny5(@~buby$4`e-XivQ*Ca>9vF-P1ML4Bw}1e;N5+z zy>3@?erNtb_oc(vw)bWGbR(~f9Dk(_bBRN}Q<0P6Ij_P@ch>i{cbR_i-Y12iGusNi zPZvdHm+5NWh>hLbZ8Vo+c|0=`8S)W3NhtO z2$0l>N=TBy@Iy+P1n7i4Kt2HnQila`r?p#t7aeX~PbfMu7GAq729$#Hbh-{Jt-TtY zp1GG{3p*3jJG%8k=MhIz(rw$k#c{VcN63{ILCV^5OVExnH1P zNv-?M_G3#4r5hd>xcX@{;KEQ)bTc!{?z7mYqMjW`(78BT!<0k;`dw0ksUY%j02__x zE>Ze`BS#C-;bCVLFV#OKOBfX6CAE4D#2<}f zsT!RFivrz9i(K$e8=^kbFOn8NxS(XMwINSotFn|Uz{l@C4n>c{m+G^9o9bMgZaG}C z8D(#anE$cuRLcDJJqJ(^m^qRLL(dRl1MjQ&7=!NW9KE$S`KW1YxJU8Vk>cJ}bd?AO zPU?PX-g^=%FC{R3JS`j4F!i`=jSt;iQ}9T74&WKhn!9x>L{^L;9ayjL2cakc7V%iJ zaqa|zHdYn|ye%DK7_bi4MEl)^F4!b|dc*&SyK#mh(ntk)$RFJ_>d#Pha>$eiWBvVq{0NS z$QWtvlzo>$hfO6H9iBfj1lBkNj=%JEvh?rj+&Y+ap4$Jq?s$KkN5pHYa)^JaP~#bj zV0}pg90Hz=OLfUy8LkN{iz%wz*Z*@_a9A@@ z5;VP(ilqxjn|ZO)4LXOjv5LD=LHj0>9is-mI-O7w6V5XLDVv--;ZusMz)OIU&tJcF zA@{&^g&KqSs%^)3%5>C* zS$2l`p9trr&E%~#9v$tzwEuiezOhr14KHS5_X5kST5rWMTj&~%6{`xR*BOxjTvvYf z4Zc`6MJMGhT4YD>XuG}s+wc|s$2CKj>5*xml6Kx^r?V#jL~M|;0k>6Kw-*@})(|${ z68^BpsC>q3gBl2Ql#`L+c~V1_yU%Z-v@OND)zL?pftb|Kw~(m`6U~H+)nff3<0KlW z_|7~~*iZC&9u&n5V@|F}yWVNn1^Q2HRv zWID6<+LhlQ%UWL9{krb)S%oBmG;^%baU2%|RO#c7?>24~84uilyx@n~KvXu9`lqkmy-QBh-&^P^3z#QERZ zMb8n?0#RUDEd)CctBYKANg=u7{k#0tm;6C8&dV7`Kkql~z20g~x4T)hB_}Q{hNu-G z8=5VnH0tMX38C4n<#5>|SZ$TK&5MO@Kaw>3gOR4jDQ8-uDIlC`D`EKzj zjjL(uaTGh=r%Phf_Ud0!zX_Zi_gJp)AH6yrHw4q^Cvh0`F~;Y1($T;97}oocYxACi zi+|u_bnTT*H~-7G^|z~%r;ik*n$e9{9=YZ*8#Bb=RW6}&acmJFvvtd>+#jXALS-Pq zMa!C+ps7$165??}5R3poE5g9u2o)>Mwv;e>(n+2eaB@(G)YJVgIq#MI-J`>y>22ky zlaA#M_dC!gj>y>|_{3R~Z60DPsN} zbY9XPy?QHEFvGx<0YR?~_tI_)ix{u2w>?FXAMyUxy?p&$vgS`yY#iU~G)XGEcR&9g!hR;3i+L&)R@e*Gu6i+yVH~2D&?Ga&9^56I( zrL;54_q!lD^Dw?EE9iTH)U^kTsWJG{yhIKI(?XYE_0%{H2t$am6oyhE8zpE0*EJHtySps8MQRG-k!Ce0 zT^Kl7(vRnpu9JFdAD-<}l=>pl!Q>N>76ZemwT(exUA;jR_w@Rr!@ZSVRssL(zTteh z_%nFxxC;F{F9eHp>htiKUcJI^KQU9IHxxPt=PM|=ZiUE621iLgYh68c)X%jk>CtHQ z7IhXB-E6n`9uc1_I=_w%-vNrsB}w?gD-}RaoBH)$R8nyOc2|Q$bC#vAgAT&-KH3N4 zUpek~Z>ztVhw{}47V%~83saVV+Ypyh^V)*wBfFYO_htD7z*QNcFc6y2Ku64YaUt6! 
z6*dEy>cFTj$pt9fmLdqk*X7bNo)=%tKIN`o;?t5BCrNZDZ!e6?dm9}r8@28{3flcV z%1cm~d4VsMP(p{+jL`&5U19DS5zvR07g`DI*Jx2JuM=e`f z%3o`;md;UJUd3pW%#E+;II~|(W7ytn{r#|HzIFf8RuWPW31~Ob66VAJ<@IoSeAn~r zcK7nE@>Jl>)St_=qg$U!dxL)O^wRU!9gg*`C^ikaFAsr~`>T_HiNLEIcNa4OT_aO< z+Hq6vQy<-!p0WxzaN#`ZF3 z8XfcXQ_F5S!@*t>@amWP=3VXqX%TP=kmhrQB8vm*ISnZS(G@DR+8PM%G_D^$oL@Pt zp|Fge^-^cRz-IHINmZe3f`}qLd0SnhdQsphI8`w*csD?t>~7PcoNnC(VFoDl+kc8- zBF)tYa0H$I6x%kx*NfZPC>OIPv|7<$&ceKWM+2{BOt>dyS z|B{X9vZkJ_e;=2ce*KwuQqy^$;{^kN&o2oskv@YYJS`jrMipunr7{?SNXZn|8-T!UzV9stYv~}lZR?u`Ch1CiU>Vjx|g}yWzhwvid z2Xfc(jZ$dCf3qL_Q3v?xG~Nn0>NS88sq=9((;HLvi!K;AaMS;3vlmiicNCuGch+?e z_vo-{*gmiju!z1gco zTV&qJIYW02WZpXb#k$~a2A$L%yuqRlLv$1X$F_SK9MS6iVA~HJ!VIdi2;q5=+mBGh zh?B%Iv~73DjF{^E=2J(mz)e#}RrdQ@%4?r&#+`7~+|nkZBG94K2)qGu%R|@lpUc*k z_QZRTDyUO6AjtTK2=>1bh@HW;?sYpYUDDuYCy8jod0@!Z^|jb*0UJ2(p|xe%Ydg!b zfpHfj)J?-KM!>lp-Qw;s(q&Ei&nz};ZQiSxe|C{ZyOH>C}=o9 z?6D}GoG48&u_-9R*>UJtAWhF?gRZLIg$AOCyY8r-2i`*f+Tc+_9cny7LVB}7yT69L zfSdMDNF9Bde(GGawQx1&g`ggCNzd~9#S%_cd0UdZsUOKUp+euarlJr#2!K%k7?fca zfjO%y^%o46$|v^|R-VRz%@Y^(J5kfWK7l*Yc(W*O631Z)c=8L*5Wekq8^biR4JT+K zenp-QztICjvL_Hy@;K zDX0Hb#0!}cf(eD#SvNA=)T;r{uCssy;txQFzI}S;Aj~I*HYApC`U!U-mh={?`}DLk zcixmrJsp%cR2NZ;3-C(N59ps2TX`~nKgq!5HA`eg%zPMECa+SRHE#ya<=Jq$KRA$M z;`Qg=WxKKEE57&g+^B|^DS6*FbQ9cA9LHwhY~}}5j}ssm#UWl1kda^=fR=$pRwLE) z5|0YNE6fK^l{Pj)I{6u)aQ~4}OPb8q^HZ%`8~lf(!?U}rinA2}jDZ3eXq%3Pv0*a7 zn5Y){#ccT>>_*+i+tt{Ie^BeRrmng15FXUg1!1GZZJ~6${zqwH+0GBcP2@cLgRodm z0~u`=)95=aSz}L1I-)%i(pELBdv)#jJQS<;ei^zmyJJW`Z&8@A%iLpOwh4?-KS7~3 z9t2pgiDQePGl6TvRQp|?p-{Qc5mmtGeKaT4O*Ubgx%!=I=oo$E%g*%uTK;{fo(cZL z@|C?Y3Z?%Hi4AF`X{7}UBzZ_3XI2O=xo+Y|%1EfVB73|f4g|gxr0P}*!b;UjiwaJC zdYSvhv7Y{aLMH$BpwUx`ZTXS5O7OSR*G2)fXy0wNkM0LZojDGG7Apa^Uw z{vch5)_39_%HtF45veW&NpV+?e)r@kh-H zTc(dZ+M|0urOds5-xCrtX?voVqV#sa-8Wwbv*e~hVlHMm3;r)oMAD=67j+s=3J75Y zbjz8u(IxpsXfU={=~LmDRly@!vez#?L)`(&c)hELC!~nyp~Zvd>8DAldu{DU?_HG+ zr-o;CS=mWwV19XXBI-SMoKv;Fi=&_@y#Z_8_ez2<5g1>dYW&mqL7*h4bbg&O_VpV; zi;H@jq1K{I&AVULR-PV+^0%%}I5kv|yxW)nKQ=SPR+iR0%K2%#bgu4dQ-wvTwlgU8 zk$9LnxKy@ctWWhOJQADPp$&t9-LO`1+QO<>P$iF z%fqbTq>}xwCl1A(mrwO7Klu$#@bB_+PlL?9z>DXD+)Vx8c`LVLG7rGNkMP8+Avch% zuK-+j>oUqF^66Qfr{c|maMtc-cv-7ch!sj|;;KBjZ|1kC{mzH>TaFQg=C2vNW1IJA zwBf{pXkLxX@x}@>UfXKH45YQQ*q58sZim;`-9;=9EFh$ur*0jV77?NnK`^Cw0k%_v ze`#ujrPtrzUp>_%My0y?h;R+uF>1)5nTjMLCs;tp z80ry!x_stz5-vfK|G`>GbVKnD$S(N$EzqyM3lF3~y&V%hZ%u7{5S`ljAZq2<`#v)y zV{G9~XG_dx-)-ljD|SD69@Y>Z7RqFVpfth;smZz7@V>9m1keUYbvneWvRj=|(uKgD ziD-RI04N~PSrIaTHgK!7r9{C#;RCz`N>r^Rf|gxceSDC*A?6CA_Wi+2o@Pg;AE=Bx)W9Y^!K3=~DxF$qjS0 z$|58u1SK1m7FnndQ_TgOIUCupg14fgAu1QG4|aBgcMcs)3h~#O)fcHTJSxi0;}KrT zbWG%EhyQRKL`2hePI%`2YpI@1N!yDBe4H9{NHI7F3MOGh5j=8&4a9|q7IE=`gNhUo zyo*};IjM2s4fT^)2b?qYQAW^QhBY6jN(`4%XGG|AyC;i3YMZ{Fk_YCjqUFh{5>6T| z{VbTV_~s03jAZU)>0W;PY-#tVXI`-g*meJhMKbbe2+`u@2}blT+-82@^$F0*7&|e@ zpDqw>7GNj609$wx_@pQ@=7`hG@kbQoft5XFA5tsk?wI zf(Lv30QfOMz?#i|dWO9$ES?};9BKt2;RV}`=fFusP(R2oGJn0krHaSNM+4)TkAK)d z`PWRvS&W#HU#xvq#wSnPx#6BhSPda@wDf(&aSamXp?mE(B99I5E3nw5dx_avI#I~*hgp~-VCZ?m;mUob)4sky88F8pyt{jp7n*-@m9j^riDP>=YSKhwg>%& zHiwZ#>NUvb9VXM;F{hgT{c-3J;>7P$~n<=9OFH24C_F>H<@4rh7$ zH1=dL?P^xj%Jgja!Hl=^(Q5C(M3=XFnmh?iLNdK>5TP{{#zYem63{Jb&D0T|I3MEV z3IH*biXytka|o|%7(wb#Rg zS5E-w0L8rr?{4W{dmJNXT3GDj+YpVu42Z3e35P0})u3|8g)}7mmjP;7T)fL|rpYX1 zQs@{)4e%4W6L48rlCd=78HV?L3Hu-TpUOHf%d8xcYaQ=*EXM((mPT zg{mSlpI>*(sd$Eq@O;4s_4qv8^t~BN!mcU;3XCN_%@LVr3~ucV&Jh**(q(_p zyfO=<{uJIDw7b%BW%v<@*FWejh|al9RD-yh8tO324&BHp5h5>ZPjG`+2U1gwErQ4B z4t)jlK3aXH-H_zj$K+>xT`E*i5vZN1)RPMAWH+*ZIEi#I~AqS 
z77Z>Bu-l#aNUk>cbet>fCsBI7(KqlPS}^kn0Er2vR4HBp-SH3>ffIjFg$iHScJ1=3L9MoK)}V(5%k(AEj6!~;h^erJgCcWg>3Vq&~3Ia0xP8EVOy;GxR= zk&Z!HKR^E@D4&fzpIPSMB|@NQt3#r7tyFOiZ$Z~XlOT3qIcWB$b%{>Q`8_(aIs>ti z*fiX_yjC7?z~$>&VG#nFGV}amd_T1@mMPgJW+^ot)H9XjC^j(7tV(8gzd-f>WaW0{ z{g@J#=-4e?NJ)UOm=U7>B6c^B_T!9J;5=2@`rTM5<)c{1TG#P)ygao{u%{x2bzyou zqplRXyTVxajd7Ar9oNLfZF4|H0CJBlDI|+%~M!0ThOgj?9e` zrBppIjcr{{wP@MYb*2Wz&Bdrf5EQRaMn~_9GMz5_oS+fM?}rgiYTa|{^qf^(kH+E-R!d4o7j1&05+_V zd_YX5#$K!0BhpXASlq7~RE=xqP19$jqaCBSy>hb64H^<&PWBTei#V+^+kit=R{v4gtf2L3i!K)u z$7|ix-owLUq{nb^?dTgG)`sR!>Re85KhqJcNSz&)Kt4`K;>kZC8pZ9@`C?sd^`Ld9 zExlgqXTZ#a_t9suv1<215PN7qa3j!_ShJ!D9X4<|l^{7#+NDMl1YOKMv^i0YbT1es zO6^UxvGY^Z)UJ_Zv(jLNEXB)TF}%?2zkb<5s(CIHr9rYeptMN^&XGHQt_>>qGhFXA zo`AEv7w2{2OC(52Fs3h6!2;YOj_R7sRRC(v?@Uc^v-M0k+dckG9ORO^wlUg1wZ3IX znv!n3{zovRRVUS-7eFl~Ww7N$Uhso#MW}j9-!_0~9fKAEyhNCh?nUx_p>hCyM*h9S zEbhW>8VcG4V~;Ga6YBq{)d#`f&Sh7|U_)<#^&3$23-Jz#-S={m z<7h-KDQyNM;~sXZk)L&66lZ7cK(W>xJ7w4O;OCDgtO{Gddg@;`N6MhPwfbYTK(+5k+q*VX;kHuVeeq_2|qebT&bVbeu z9QDj9kKG-vNiFzOEME!(LimTps>$DEdtb{6Or>)EF{3v%bp`)AnYVmwnm5dwbx;Cz{{V%YTy(>BI2>MYk+6UGO{NT7NxLE%H) zNr_{FF?l6W66{)*h7Y06%b`VCew2?~l1)Av*EG19P|%!|$KQLK`pVp8v;d?e#cNQD zcqr5fS)?W}OqDGMD-UEVNvhDN!t;fkBCtYYj_UvGr>a(6HrKYy34IM_IEu4~Sxez&PCp3bw%Yfq>GfaH7H$<`9St*pQPa}1 z)Jx1uExWaP#=NGJ*JzAgIqyAI&)QcJC(h;=LkGYEWCS}(&_R_I$F?}D5V#@bJ;z#i zNZH}F=<)u>#Q!8})~)V&bcE-C5dT#D{`;l+mM*?OHBKIWQ9KDqrbc>g>|w3@rY;R? zPNir3c<27^t}`fRr0VEO<*jhKj_el@2qz zN^}fWJhWyCZ*xP6uA8dJ-Bc(a#i;qYSGmyfICToDa++?!{=mln8L9V(6ze@FCtK#` zZZF?EvLR+ITb6!;2yddM0#U4=xO!jNS+wztU+a9B+*&}aj0yu)~Nm_^7X$x zI&$Y_Wx?;Nv7Tl?BK-d(%K1H!oE5a(Rco1bdakpQcuii@(Ox>e%!mB;Ay*^$V@%}- zH%vCo^HcNb-X~UX*!B`oF^Eq zy0`8R%T;|CXg&DxQ_PxA&rmA`HIj|4TgPQ+9X@VsetX@B3J@{ckP#vZq9vS;_qu>j z(AGIqgdlH%jxinUK!sU=5XfTCz^_Bu>BvT{WKbw&Er}gbG+t(M>gE@_PlC&B{Jc&I zayzLTe*A^=cwDI`!~2nzrOGiA&G!EZm4!~CBs^ULcX!Eqf9Odp?LF8mu45s?u>vQE z+R`Hw0Lz*6BF#(#Zgoj0KbXd7%X@`8sw5eeeR3epY+GgM3Em1LL1 z0aRrminc77#GGO6fM6k37IOMYkXj8*W*NcS@}By=$Y$nl6;?!e$2HI_UHhMvHalPZ z5vWV*Kr3CI!qmD(nslzMvk2y6JkDFb7gXj{+qE2*i2^WHBK4P>oh(SMFl|sX@h~9X z_4W>VO|_4cFc2iKq);3WDkK9BSmJuRYaf(iEqDUN&ILkQUxKh;QHQjU*NchZNek8h zmyVNOFVKK?|Mk-gN&;IO9}M>ti1Dz8Z=~L9`D3gS!5j0^4BpI$f;wu-!6{6H-cKK6 zS8R5_Ra00~jj5=#A%Q&`CQ7Q~YJY7>h zh?SSw{L(JjvfZy0H*lWT@wGr#XaKkz^1A|;a?7OM7+NhguiIn@2k_^v$#MlAG&4wY z49_L&ykyT36=FI7Ml&!TAq0EfPy^?oGWNF464&mJ{%go6= zY_Xnyc^MXC%n3mW?g1`!UI@iGb2_y{_HZEUo4udEMNjj3<;%w2%sf%r8FOp^O7Laq z!_e35L5ueqE43{2L0ZX^DnW^p(;#Kw0r?w*F2?|bhQ^%Hmb-ukKRHGEAc(5~d5%yC znMs&_kg9ND0$wZ02VEAY3L%fSb01uG-Ly>THc4XqPYOb>w4TW7^3jUucJla?_k8Bw z&*Xsw8k8VAtz*_{R*0hfX~6neI}B8Z*ypYtc1bxtO365H1{U^Wyh_04lo&t>!o8fS zeMc&`@opaxx$HZQqv3r+Ht1o@= z(F3WNQW3xDS}O`nB)htj3Sc_tx`w*zvzi}0I{4?u4_n)twv&yG zTeIDkz1KhY#RtL{e2K0jK8oo&ZT&yeKcaKw9GiRx^}w_OoXUGcO)Mvo(PRi|jVS#C_ICTO zTCH32D{HvZj8dsL1iw4p-W+?nSqVE~@(Fg!G$B;rvQ_*mw918@mq+W$WdD&o`=geD zl__v^jH8C{nJ`CXvW=`#O6y@_(Z}yxK1h1fGlCBBAM8Vj#H=7*FA=)i2O96aSgv!kDt<$7M4HQ;G6 zDpo);v@H=a8;tLYdlIn<6;6L}^*s890Qx1?oCss*%yr9k%Ynn6L7uCuZX7LK9_p04 z8+tM4W_p}_s!4_m|@-W$v(5KbI2w-UJ0uOFYQMHb=Bc-Ut6$TQIitG*Fv zReAOqz8sP7j@tLa!oE6o_6aBMkJ|6(v~2Mm^)!EJH@|8gi_0;_d1GJM#O$r` zP49hr@OwSQ^pBu0ZSKShNi-AiB~w?E`V%B{_u4Deg}4e05J+tdjM;9v>0;;E5CXI< zhx{#{5D@1XzAqP?91Z7g&~-$LheoSi7ofNVCTNSvM8YoD+pLY>n3GlQ+4y(Q-uPm= zNfPskxCaDBmyY=V$JCq0L%qN6q;E+@DY2Ohi$d2 zc3mAFiLLmAqra_#skjB*CtNJvyn7WtCDI^K*=1Gn(CQi~(0foMsE`uwl0|l>;erY; zVD}ZOt?ggmnp$@y>EF-XSD!WVokeVDt^#AugVIeS4^J0-*%)lC9Js+4u0KWi#Bki~ z(nKEv=P|zzxc*|-(R?Sr`$DMKxKp@FVy$7nd_Nw%vmom0%KnrM_9wqO`QL`+207w4 z-3x4X(Q!xZ7o>K@w+qani~TB%SvC<1$*m)=q7O(!GjMip3AmW_Jjk2KNBSyfV?AI= 
z^Pg`qaFCo(THx&ZTe`g?=el^-Su+Lqwoo*_j#!N^9O{Wtk>Xsw9+#9DzHYmV~g`GQa^gZq|m%v#^$%- zYzcVM4zCBSprPX8HR|xe{(e8|fo_9YedXND#Y4}H^mL-r^|+pOh8(k?LY!SG|KbKW zbx=)zO3Z0WY_BIPK};e25WAC1HGUnArr^%X`%yC0GB9?ovL&|lN+-mWkcPuZWV(>2y&6P)Yt zf@GHk`9XEJaFsE51@<>(8%Jla=0`3PE2t-G)z4HJt%>oD2m)1{Ek#;nQwyL! zz8f17+zj9`5-m^d14xw{-OM6vHUF%Wi`*886?BIdR!v!7=oxy$a~B9IY~s)k_!gSO zCx7&~;!4uK=#GQ+^viigMxOGv`erWXf@8lP*_oPTmFPYAxwM~Ji_>kPhg+LMI>Xj) zZT#D?^tWy*5~R=aV`-H-5gUy?O7P%bp}*NR4mD3)OMYhR?Ow0X$!lk;s-Jt49hAHi z=s?!G*hIgkzz?`cu7g$x71&w|DSIf8waFIo;jJ~XwWn}eqCzO>=QroRt+n0YQ11vi zUpggcLKBC}>$ow8I@5LfTV=r@@w?bF1Nl8`@!XVT?q4Lx`6A6xZ zkW8I!>W9=df)y`{HA_{)yeLc)#}$hkOIi1E&2DCzxRxI1R{V(Co*US#_4>^j%hZ3v zEOo0oYkqYtHLj`D1^%|SmaGdYA8_k06xIr9a(Q2e4SxW@fa;3~rJr&bvptGK7~FPl zFf66m;OG{bs*4)yY|BkyTJNcj}}EIq%aQ zB&L77+L(A^UOq*3wDZ2tFqjGfpo*e*^*?6l($2aBwLHkwH6~PgLSDfPKwkIQh=SJa zMnB3FLhs^Qq%QxeJiHaV&seLNBndecPZ#kXl;L9P@bGO25-tFZVk23y`k(2QC{JR@ zc<$c*krl}60kVU?MCh<)#RaMa-)m!%)$dfK<~|K!n$+6)yJ|RVp3kG@*P*R%I^)jI zIGyXp%M)WMUxKw4yB2n9n;fJzf^kXM3{@yS-e{^N@{n)IjK+Ic4{dM`&ZAn;`nS># za_;PUy2W`#IL!{XDiWV#$UE3(5&kuigf4;ym<%i?nxnyK?vEukO&)Pi8ugA}ak zuJHGDlRqdgDDmlY^8C{0HczUy;orOd&tEhfsOu5p&8E=Bk*+?bF72u#IYsIEAW3)7 z<}%`U3w?a&^k$j?|`N0gc|Qcc&1n&ep$=6$Q86& zBf^&Ly-n9fe(P_w;YKs+5}iwD&i<$Y98mR?Ub=V;!sEE>e;RJ=Rx{txu-cbAScDeI zfnb&7YGlQ+d&Ug%==INOJK;f_cbC?F6alTex*;%#ECNgRiE32CdCzdG1+5fDy^8$> zOrc9Rev_{Bieu4JmUvm3w@cT&=3+ruMT;k})p@d`2nq}L%rt`Gjx~1AO%d8ttzu`7 zU3B3K)Jl|`o&L8>>K0+q^!G-sVpbbVNeN+fQHnJ6H0*SL^^d9Q57(wL2&}W!C3&tW zwDmc1=BX{-2hqSU0`W&}?fPUD52nt%s^*`nn)JKBDVzweLWh46nL6ChKKcRYs{20j z>k(IJkom&r;kKGmC9d#JUZ{q*wXIO+$`@AudB|rS-JAfgo1?AiY0cPB0wMz6I9nWk z^9A-Y4cC4QoPVL}u;Ny^ZNJsx?;5$DQ%wyweW1PFt~6Flid?kH4O>5IW!sfv!Ajg_ zez8ugMQf6b?Z3A(;LIOu^ORtf2~cXqWcSsY2i492Q>DW2aKlu zSqB-h%k+@Q%nHZdk;u&7Dv?5alm*}SKVXHxrhKDxSy zj5*&I(g6toqvaT{9v(IlJs*(rX_?O@%GK9z+9MFsz<|t?2tbW4pXIvI75xjYH@b~# z)b?aLV!t8bZ3o`3EwubG^RH8N2UiDpD2PrHMW8t>j#Y7~!|%Ci|C_PYASVS0@u}>6 ziOhqvJ!#mA3kPl3&ADUA)hy!vhlW`OAL+*=qUSO;{*GSK(dkOstL9EWDR=TWW35j@ zI>N)QF67FPj%M~J)^gpMl*?yxhBry{6o`-_d|=epTk<-_+QhKK^Ks_yPJW{3d(M9w z)&6R9m0ScupC$zN4_-va?f)_0ad23C+puOn$r14o(49(!P0k)jnO$)4>PWSVw~3@R zqw5S6mBTA*e$dgL2DX| z(yodq6;eP++n%5M`g$lXWl>ApWS46IA+-PeFt}hNn$!Z)JP(ibqdVdSNuz%saSeb< zC*yYug%fO!BH7#nRuhHdo=#c%JR9SuJe;lPqI-Tf)lW+6yoMmz!4Uvi->rjs|D-@R+O^iZwFEl%NXjZij09PI!` zzH!^O7A=56NgY~A{Q>aIO%d;=1KkjF)hsySE=({0Pgmmjdc@o zfZ|#hM}UQdR^te?mYah6=$jYdS>;hNpswA%whkY6jcK_C{N#b9(HCl~uTOu3glmFv zCQ0JpvCl^5fIyl$WE(P#wk{z%K?!#G9HRDG(+WVs)Y80(e|A5*rdC<~wloXA1ti=f z<(=BPo|@}fv+|TFNwj%&oyff!p7Hk%7hU->7u;=&Jf zTidovIW*Ctz_6AG5NEW*f2l2OcIdtj-(u3J1%c6K7gOl<0c4_GY?$tJb)MJ@%|e%T~v&f5|V7GJK6Cr_XCjM}}4 zhm*KQUE6^4cmao|OeEY8lG*dR8Xf0Rjc{<+)`6}^bcECMw$|vC0mb5oXICNI{mC*P z>5gONV42werP%i%xEV8L+y!2&V z`XgcH4LArndmN;zk2%Arf&f>}IssQK;OcPb`txS-TiBRxPqIB$JoDfU%hjU!5i#Fk z&3#z~NgEqWtx09FJG7mUtmeZ6*6r)!`Y{)4XDns&8vu9A*>Q^$fM7<;Iqb~ZQ!l(% z!AIYN>BIH0rN9tb;IqPIJKTZcT5byvLkhT=E|2jjm9po%ki1?K8Poa=R!i{Bd|lta zblKV+L?B5Zcsgk8IqGnZta20>0)>TtPtXYU*)74h49(ebyh{|%xH$hOrh(${w z>QH96{_WP2>iaHE0w3vp{6MYrI~x01@=rMc8w3?P@-Zz2=Y8ppfwYsEf*l7QYcy_4 zgny-`c}t5Az18zlV>H}$u9uJtGS=gtOx>7XdfvYF$0EErY47W5w@aFg1p&vU&p5#r zVohYsX0-Qt`}~y=i-{B;OACjigish(E3_JQyREx_nG#5kC-fW77oNmr{b5{gL;wLU z2U}R53I3+=*j8t_G4Q0iW7 zzmfS#N<^qu!!q#?`>KaQ!?epbb7LFtrTdc_WQf#5`F5es=5P4`x4UQ`L zfIx?cO@frDMDewTc`?=sX>Lie-t8y~hJsz;Sak_pp0DIJjze+e2V8|n_WR$EtRjNO zcgO1CBme?R3gTK?T3P#7O-_3qx-v||DRC;fH^qH(piyEKKGI#uF1rA?2Iq$hUHNu5 zDl`2@9VHd~)2#!ydQIq6KMeAvU*@-_eqnvEd-wMh6{jy42U=*I>`f6J6n2vABE+Hn z*-gts=;UYr>)TCzzip*obp@LJ4lsZ}W77!InyW@}NxjKy1kwXMg#l4y7aT~q5r?|8O*A43s5j#gNYR2vxrv6zc9Yog`()k2a0r!1( 
zz87kDK^ilm3<#D~{tygFl+8yQ9}ppnrK8JVz4Yr_)$_4I^D3~NSAkvvP6Fp9B5g)t zDA)^b+8ayrBv){;=LF1q4!6j2O1zs!xgH{)_YWaE4)~cIvsI>``7ccw(Bs-K z%U;ZD*{K@e4~Y}NDV0mMB`GNDY`V&tBoSrt8n!lXlkH&}@pb)S2wGkKH|2?H5>n(o z`%5oWL~k({^AfS;q2+VMo70vSi`>L_ZN*Sr&TW^6=q+A$8$UPzkFi~SLiraqr_<3= zBi{#QCFcSV60B_PGg76@wO_S9pag+p_P*@h;&2RjhN0}o8f)3Y#0f2SCr>rzL57*l zLhTKy9Lg+hB49tyk5`0E5l#zPEMFe;z=_GT^vuQwlU}9HR@lZ#7wj3D_u|XrWh8svahY)B-nS! z&Qz%`AC4F-^nqB_1lrIu$Q!A8Nw`2wGG_|jz7I>jxAeT%WYP&IkQr+sb4Z9M6cD1W zfks*$Z~!fsj?eDYLo<5ewCNk2BDS%GPbi#r=Fih#?wId;_) zB2LRHBpeZkD}wH&ALN1^>}IfIB@CP?aQqWXXIV&mV9u{3ri}I0zhR~o9DhkOc7-K% z7LD0@9g}`ZgeB!Ro9PlSa9WVKm&F4MTsSxU?l$CZa4m%Fzq4{57AeRkH6?z_LJ3*wak9)B1pT7XCLD(*EM3R_6d$buq9(I^)bxa%kyT;0K3D>M5%J z{~VDO@PQul;Dq`M+N0h%uFLNH|KT$~I90@8Rxri;y=bV;rN18z+5M z1lR1(PO$z>V`+QX?hdr1)$LF^3_v;8mCPItuy#)}%t>jBAKu<5Xn!f_S*ZTO2Zpdq zY>n$vXa?J$JL(}KDFo~oo~u))oX}gSIoii88se}`b2#X2EeJwB${(g#E)D8VynbOa z{)8tme~!oEhV){MuGU_|;E{5qUOi6OmBTDSLISD*R-AB?a9y81F~F;v&JeH_NLN)KUGLm;>!A;4gajfn6;- HGm8EfU=CoY literal 0 HcmV?d00001 diff --git a/ControlNeXt-SDXL-Training/examples/vidit_depth/train.sh b/ControlNeXt-SDXL-Training/examples/vidit_depth/train.sh new file mode 100644 index 0000000..af2c9a9 --- /dev/null +++ b/ControlNeXt-SDXL-Training/examples/vidit_depth/train.sh @@ -0,0 +1,19 @@ +accelerate launch train_controlnext.py --pretrained_model_name_or_path "stabilityai/stable-diffusion-xl-base-1.0" \ +--pretrained_vae_model_name_or_path "madebyollin/sdxl-vae-fp16-fix" \ +--variant fp16 \ +--use_safetensors \ +--output_dir "train/example" \ +--logging_dir "logs" \ +--resolution 1024 \ +--gradient_checkpointing \ +--set_grads_to_none \ +--proportion_empty_prompts 0.2 \ +--controlnet_scale_factor 1.0 \ +--mixed_precision fp16 \ +--enable_xformers_memory_efficient_attention \ +--dataset_name "Nahrawy/VIDIT-Depth-ControlNet" \ +--image_column "image" \ +--conditioning_image_column "depth_map" \ +--caption_column "caption" \ +--validation_prompt "a stone tower on a rocky island" \ +--validation_image "examples/vidit_depth/condition_0.png" \ No newline at end of file diff --git a/ControlNeXt-SDXL-Training/models/controlnet.py b/ControlNeXt-SDXL-Training/models/controlnet.py new file mode 100644 index 0000000..9a505a9 --- /dev/null +++ b/ControlNeXt-SDXL-Training/models/controlnet.py @@ -0,0 +1,495 @@ +# Copyright 2023 The HuggingFace Team. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+from dataclasses import dataclass
+from typing import Any, Dict, List, Optional, Tuple, Union
+
+import torch
+from torch import nn
+
+from diffusers.configuration_utils import ConfigMixin, register_to_config
+from diffusers.utils import BaseOutput, logging
+from diffusers.models.embeddings import TimestepEmbedding, Timesteps
+from diffusers.models.modeling_utils import ModelMixin
+from diffusers.models.resnet import Downsample2D, ResnetBlock2D
+from einops import rearrange
+
+
+logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
+
+
+@dataclass
+class ControlNetOutput(BaseOutput):
+    """
+    The output of [`ControlNetModel`].
+
+    Args:
+        down_block_res_samples (`tuple[torch.Tensor]`):
+            A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should
+            be of shape `(batch_size, channel * resolution, height // resolution, width // resolution)`. Output can be
+            used to condition the original UNet's downsampling activations.
+        mid_block_res_sample (`torch.Tensor`):
+            The activation of the middle block (the lowest sample resolution). The tensor should be of shape
+            `(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution)`.
+            Output can be used to condition the original UNet's middle block activation.
+    """
+
+    down_block_res_samples: Tuple[torch.Tensor]
+    mid_block_res_sample: torch.Tensor
+
+
+# NOTE: this Block2D is shadowed by the second `class Block2D` definition further below,
+# which replaces the ResnetBlock2D layers with the lighter BasicBlock/IdentityModule variants.
+class Block2D(nn.Module):
+    def __init__(
+        self,
+        in_channels: int,
+        out_channels: int,
+        temb_channels: int,
+        dropout: float = 0.0,
+        num_layers: int = 1,
+        resnet_eps: float = 1e-6,
+        resnet_time_scale_shift: str = "default",
+        resnet_act_fn: str = "swish",
+        resnet_groups: int = 32,
+        resnet_pre_norm: bool = True,
+        output_scale_factor: float = 1.0,
+        add_downsample: bool = True,
+        downsample_padding: int = 1,
+    ):
+        super().__init__()
+        resnets = []
+
+        for i in range(num_layers):
+            in_channels = in_channels if i == 0 else out_channels
+            resnets.append(
+                ResnetBlock2D(
+                    in_channels=in_channels,
+                    out_channels=out_channels,
+                    temb_channels=temb_channels,
+                    eps=resnet_eps,
+                    groups=resnet_groups,
+                    dropout=dropout,
+                    time_embedding_norm=resnet_time_scale_shift,
+                    non_linearity=resnet_act_fn,
+                    output_scale_factor=output_scale_factor,
+                    pre_norm=resnet_pre_norm,
+                )
+            )
+
+        self.resnets = nn.ModuleList(resnets)
+
+        if add_downsample:
+            self.downsamplers = nn.ModuleList(
+                [
+                    Downsample2D(
+                        out_channels,
+                        use_conv=True,
+                        out_channels=out_channels,
+                        padding=downsample_padding,
+                        name="op",
+                    )
+                ]
+            )
+        else:
+            self.downsamplers = None
+
+        self.gradient_checkpointing = False
+
+    def forward(
+        self,
+        hidden_states: torch.FloatTensor,
+        temb: Optional[torch.FloatTensor] = None,
+    ) -> Union[torch.FloatTensor, Tuple[torch.FloatTensor, ...]]:
+        output_states = ()
+
+        for resnet in self.resnets:
+            hidden_states = resnet(hidden_states, temb)
+            output_states += (hidden_states,)
+
+        if self.downsamplers is not None:
+            for downsampler in self.downsamplers:
+                hidden_states = downsampler(hidden_states)
+
+            output_states += (hidden_states,)
+
+        return hidden_states, output_states
+
+
+class IdentityModule(nn.Module):
+    def __init__(self):
+        super(IdentityModule, self).__init__()
+
+    def forward(self, *args):
+        if len(args) > 0:
+            return args[0]
+        else:
+            return None
+
+
+class BasicBlock(nn.Module):
+    def __init__(self,
+                 in_channels: int,
+                 out_channels: Optional[int] = None,
+                 stride=1,
+                 conv_shortcut: bool = False,
+                 dropout: float = 0.0,
+                 temb_channels: int = 512,
+                 groups: int = 32,
+                 
groups_out: Optional[int] = None, + pre_norm: bool = True, + eps: float = 1e-6, + non_linearity: str = "swish", + skip_time_act: bool = False, + time_embedding_norm: str = "default", # default, scale_shift, ada_group, spatial + kernel: Optional[torch.FloatTensor] = None, + output_scale_factor: float = 1.0, + use_in_shortcut: Optional[bool] = None, + up: bool = False, + down: bool = False, + conv_shortcut_bias: bool = True, + conv_2d_out_channels: Optional[int] = None,): + super(BasicBlock, self).__init__() + self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False) + self.bn1 = nn.BatchNorm2d(out_channels) + self.relu = nn.ReLU(inplace=True) + self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False) + self.bn2 = nn.BatchNorm2d(out_channels) + + self.downsample = None + if stride != 1 or in_channels != out_channels: + self.downsample = nn.Sequential( + nn.Conv2d(in_channels, + out_channels, + kernel_size=3 if stride != 1 else 1, + stride=stride, + padding=1 if stride != 1 else 0, + bias=False), + nn.BatchNorm2d(out_channels) + ) + + def forward(self, x, *args): + residual = x + out = self.conv1(x) + out = self.bn1(out) + out = self.relu(out) + + out = self.conv2(out) + out = self.bn2(out) + + if self.downsample is not None: + residual = self.downsample(x) + + out += residual + out = self.relu(out) + + return out + + +class Block2D(nn.Module): + def __init__( + self, + in_channels: int, + out_channels: int, + temb_channels: int, + dropout: float = 0.0, + num_layers: int = 1, + resnet_eps: float = 1e-6, + resnet_time_scale_shift: str = "default", + resnet_act_fn: str = "swish", + resnet_groups: int = 32, + resnet_pre_norm: bool = True, + output_scale_factor: float = 1.0, + add_downsample: bool = True, + downsample_padding: int = 1, + ): + super().__init__() + resnets = [] + + for i in range(num_layers): + # in_channels = in_channels if i == 0 else out_channels + resnets.append( + # ResnetBlock2D( + # in_channels=in_channels, + # out_channels=out_channels, + # temb_channels=temb_channels, + # eps=resnet_eps, + # groups=resnet_groups, + # dropout=dropout, + # time_embedding_norm=resnet_time_scale_shift, + # non_linearity=resnet_act_fn, + # output_scale_factor=output_scale_factor, + # pre_norm=resnet_pre_norm, + BasicBlock( + in_channels=in_channels, + out_channels=out_channels, + temb_channels=temb_channels, + eps=resnet_eps, + groups=resnet_groups, + dropout=dropout, + time_embedding_norm=resnet_time_scale_shift, + non_linearity=resnet_act_fn, + output_scale_factor=output_scale_factor, + pre_norm=resnet_pre_norm, + ) if i == num_layers - 1 else \ + IdentityModule() + ) + + self.resnets = nn.ModuleList(resnets) + + if add_downsample: + self.downsamplers = nn.ModuleList( + [ + # Downsample2D( + # out_channels, + # use_conv=True, + # out_channels=out_channels, + # padding=downsample_padding, + # name="op", + # ) + BasicBlock( + in_channels=out_channels, + out_channels=out_channels, + temb_channels=temb_channels, + stride=2, + eps=resnet_eps, + groups=resnet_groups, + dropout=dropout, + time_embedding_norm=resnet_time_scale_shift, + non_linearity=resnet_act_fn, + output_scale_factor=output_scale_factor, + pre_norm=resnet_pre_norm, + ) + ] + ) + else: + self.downsamplers = None + + self.gradient_checkpointing = False + + def forward( + self, + hidden_states: torch.FloatTensor, + temb: Optional[torch.FloatTensor] = None, + ) -> Union[torch.FloatTensor, Tuple[torch.FloatTensor, ...]]: + output_states = () + + for 
resnet in self.resnets: + hidden_states = resnet(hidden_states, temb) + output_states += (hidden_states,) + + if self.downsamplers is not None: + for downsampler in self.downsamplers: + hidden_states = downsampler(hidden_states) + + output_states += (hidden_states,) + + return hidden_states, output_states + + +class ControlProject(nn.Module): + def __init__(self, num_channels, scale=8, is_empty=False) -> None: + super().__init__() + assert scale and scale & (scale - 1) == 0 + self.is_empty = is_empty + self.scale = scale + if not is_empty: + if scale > 1: + self.down_scale = nn.AvgPool2d(scale, scale) + else: + self.down_scale = nn.Identity() + self.out = nn.Conv2d(num_channels, num_channels, kernel_size=1, stride=1, bias=False) + for p in self.out.parameters(): + nn.init.zeros_(p) + + def forward( + self, + hidden_states: torch.FloatTensor): + if self.is_empty: + shape = list(hidden_states.shape) + shape[-2] = shape[-2] // self.scale + shape[-1] = shape[-1] // self.scale + return torch.zeros(shape).to(hidden_states) + + if len(hidden_states.shape) == 5: + B, F, C, H, W = hidden_states.shape + hidden_states = rearrange(hidden_states, "B F C H W -> (B F) C H W") + hidden_states = self.down_scale(hidden_states) + hidden_states = self.out(hidden_states) + hidden_states = rearrange(hidden_states, "(B F) C H W -> B F C H W", F=F) + else: + hidden_states = self.down_scale(hidden_states) + hidden_states = self.out(hidden_states) + return hidden_states + + +class ControlNetModel(ModelMixin, ConfigMixin): + + _supports_gradient_checkpointing = True + + @register_to_config + def __init__( + self, + in_channels: List[int] = [128, 128], + out_channels: List[int] = [128, 256], + groups: List[int] = [4, 8], + time_embed_dim: int = 256, + final_out_channels: int = 320, + ): + super().__init__() + + self.time_proj = Timesteps(128, True, downscale_freq_shift=0) + self.time_embedding = TimestepEmbedding(128, time_embed_dim) + + self.embedding = nn.Sequential( + nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), + nn.GroupNorm(2, 64), + nn.ReLU(), + nn.Conv2d(64, 64, kernel_size=3, padding=1), + nn.GroupNorm(2, 64), + nn.ReLU(), + nn.Conv2d(64, 128, kernel_size=3, padding=1), + nn.GroupNorm(2, 128), + nn.ReLU(), + ) + + self.down_res = nn.ModuleList() + self.down_sample = nn.ModuleList() + for i in range(len(in_channels)): + self.down_res.append( + ResnetBlock2D( + in_channels=in_channels[i], + out_channels=out_channels[i], + temb_channels=time_embed_dim, + groups=groups[i] + ), + ) + self.down_sample.append( + Downsample2D( + out_channels[i], + use_conv=True, + out_channels=out_channels[i], + padding=1, + name="op", + ) + ) + + self.mid_convs = nn.ModuleList() + self.mid_convs.append(nn.Sequential( + nn.Conv2d( + in_channels=out_channels[-1], + out_channels=out_channels[-1], + kernel_size=3, + stride=1, + padding=1 + ), + nn.ReLU(), + nn.GroupNorm(8, out_channels[-1]), + nn.Conv2d( + in_channels=out_channels[-1], + out_channels=out_channels[-1], + kernel_size=3, + stride=1, + padding=1 + ), + nn.GroupNorm(8, out_channels[-1]), + )) + self.mid_convs.append( + nn.Conv2d( + in_channels=out_channels[-1], + out_channels=final_out_channels, + kernel_size=1, + stride=1, + )) + self.scale = 1.0 # nn.Parameter(torch.tensor(1.)) + + def _set_gradient_checkpointing(self, module, value=False): + if hasattr(module, "gradient_checkpointing"): + module.gradient_checkpointing = value + + # Copied from diffusers.models.unet_3d_condition.UNet3DConditionModel.enable_forward_chunking + def enable_forward_chunking(self, 
chunk_size: Optional[int] = None, dim: int = 0) -> None: + """ + Sets the attention processor to use [feed forward + chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers). + + Parameters: + chunk_size (`int`, *optional*): + The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually + over each tensor of dim=`dim`. + dim (`int`, *optional*, defaults to `0`): + The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) + or dim=1 (sequence length). + """ + if dim not in [0, 1]: + raise ValueError(f"Make sure to set `dim` to either 0 or 1, not {dim}") + + # By default chunk size is 1 + chunk_size = chunk_size or 1 + + def fn_recursive_feed_forward(module: torch.nn.Module, chunk_size: int, dim: int): + if hasattr(module, "set_chunk_feed_forward"): + module.set_chunk_feed_forward(chunk_size=chunk_size, dim=dim) + + for child in module.children(): + fn_recursive_feed_forward(child, chunk_size, dim) + + for module in self.children(): + fn_recursive_feed_forward(module, chunk_size, dim) + + def forward( + self, + sample: torch.FloatTensor, + timestep: Union[torch.Tensor, float, int], + ) -> Union[ControlNetOutput, Tuple]: + + timesteps = timestep + if not torch.is_tensor(timesteps): + # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can + # This would be a good case for the `match` statement (Python 3.10+) + is_mps = sample.device.type == "mps" + if isinstance(timestep, float): + dtype = torch.float32 if is_mps else torch.float64 + else: + dtype = torch.int32 if is_mps else torch.int64 + timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device) + elif len(timesteps.shape) == 0: + timesteps = timesteps[None].to(sample.device) + + # broadcast to batch dimension in a way that's compatible with ONNX/Core ML + batch_size = sample.shape[0] + timesteps = timesteps.expand(batch_size) + t_emb = self.time_proj(timesteps) + # `Timesteps` does not contain any weights and will always return f32 tensors + # but time_embedding might actually be running in fp16. so we need to cast here. + # there might be better ways to encapsulate this. + t_emb = t_emb.to(dtype=sample.dtype) + emb_batch = self.time_embedding(t_emb) + + # Repeat the embeddings num_video_frames times + # emb: [batch, channels] -> [batch * frames, channels] + emb = emb_batch + sample = self.embedding(sample) + for res, downsample in zip(self.down_res, self.down_sample): + sample = res(sample, emb) + sample = downsample(sample, emb) + sample = self.mid_convs[0](sample) + sample + sample = self.mid_convs[1](sample) + return { + 'out': sample, + 'scale': self.scale, + } + + +def zero_module(module): + for p in module.parameters(): + nn.init.zeros_(p) + return module diff --git a/ControlNeXt-SDXL-Training/models/unet.py b/ControlNeXt-SDXL-Training/models/unet.py new file mode 100644 index 0000000..fcd5a7a --- /dev/null +++ b/ControlNeXt-SDXL-Training/models/unet.py @@ -0,0 +1,1387 @@ +# Copyright 2024 The HuggingFace Team. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +from dataclasses import dataclass +from typing import Any, Dict, List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint + +from diffusers.configuration_utils import ConfigMixin, register_to_config +from diffusers.loaders import PeftAdapterMixin, UNet2DConditionLoadersMixin +from diffusers.loaders.single_file_model import FromOriginalModelMixin +from diffusers.utils import USE_PEFT_BACKEND, BaseOutput, deprecate, logging, scale_lora_layers, unscale_lora_layers +from diffusers.models.activations import get_activation +from diffusers.models.attention_processor import ( + ADDED_KV_ATTENTION_PROCESSORS, + CROSS_ATTENTION_PROCESSORS, + Attention, + AttentionProcessor, + AttnAddedKVProcessor, + AttnProcessor, +) +from diffusers.models.embeddings import ( + GaussianFourierProjection, + GLIGENTextBoundingboxProjection, + ImageHintTimeEmbedding, + ImageProjection, + ImageTimeEmbedding, + TextImageProjection, + TextImageTimeEmbedding, + TextTimeEmbedding, + TimestepEmbedding, + Timesteps, +) +from diffusers.models.modeling_utils import ModelMixin +from diffusers.models.unets.unet_2d_blocks import ( + get_down_block, + get_mid_block, + get_up_block, +) + + +logger = logging.get_logger(__name__) # pylint: disable=invalid-name + +UNET_CONFIG = { + "_class_name": "UNet2DConditionModel", + "_diffusers_version": "0.19.0.dev0", + "act_fn": "silu", + "addition_embed_type": "text_time", + "addition_embed_type_num_heads": 64, + "addition_time_embed_dim": 256, + "attention_head_dim": [ + 5, + 10, + 20 + ], + "block_out_channels": [ + 320, + 640, + 1280 + ], + "center_input_sample": False, + "class_embed_type": None, + "class_embeddings_concat": False, + "conv_in_kernel": 3, + "conv_out_kernel": 3, + "cross_attention_dim": 2048, + "cross_attention_norm": None, + "down_block_types": [ + "DownBlock2D", + "CrossAttnDownBlock2D", + "CrossAttnDownBlock2D" + ], + "downsample_padding": 1, + "dual_cross_attention": False, + "encoder_hid_dim": None, + "encoder_hid_dim_type": None, + "flip_sin_to_cos": True, + "freq_shift": 0, + "in_channels": 4, + "layers_per_block": 2, + "mid_block_only_cross_attention": None, + "mid_block_scale_factor": 1, + "mid_block_type": "UNetMidBlock2DCrossAttn", + "norm_eps": 1e-05, + "norm_num_groups": 32, + "num_attention_heads": None, + "num_class_embeds": None, + "only_cross_attention": False, + "out_channels": 4, + "projection_class_embeddings_input_dim": 2816, + "resnet_out_scale_factor": 1.0, + "resnet_skip_time_act": False, + "resnet_time_scale_shift": "default", + "sample_size": 128, + "time_cond_proj_dim": None, + "time_embedding_act_fn": None, + "time_embedding_dim": None, + "time_embedding_type": "positional", + "timestep_post_act": None, + "transformer_layers_per_block": [ + 1, + 2, + 10 + ], + "up_block_types": [ + "CrossAttnUpBlock2D", + "CrossAttnUpBlock2D", + "UpBlock2D" + ], + "upcast_attention": None, + "use_linear_projection": True +} + + +@dataclass +class UNet2DConditionOutput(BaseOutput): + """ + The output of [`UNet2DConditionModel`]. + + Args: + sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`): + The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model. 
+ """ + + sample: torch.Tensor = None + + +class UNet2DConditionModel( + ModelMixin, ConfigMixin, FromOriginalModelMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin +): + r""" + A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample + shaped output. + + This model inherits from [`ModelMixin`]. Check the superclass documentation for it's generic methods implemented + for all models (such as downloading or saving). + + Parameters: + sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): + Height and width of input/output sample. + in_channels (`int`, *optional*, defaults to 4): Number of channels in the input sample. + out_channels (`int`, *optional*, defaults to 4): Number of channels in the output. + center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. + flip_sin_to_cos (`bool`, *optional*, defaults to `True`): + Whether to flip the sin to cos in the time embedding. + freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. + down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`): + The tuple of downsample blocks to use. + mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`): + Block type for middle of UNet, it can be one of `UNetMidBlock2DCrossAttn`, `UNetMidBlock2D`, or + `UNetMidBlock2DSimpleCrossAttn`. If `None`, the mid block layer is skipped. + up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")`): + The tuple of upsample blocks to use. + only_cross_attention(`bool` or `Tuple[bool]`, *optional*, default to `False`): + Whether to include self-attention in the basic transformer blocks, see + [`~models.attention.BasicTransformerBlock`]. + block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): + The tuple of output channels for each block. + layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. + downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. + mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. + dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. + act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. + norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. + If `None`, normalization and activation layers is skipped in post-processing. + norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. + cross_attention_dim (`int` or `Tuple[int]`, *optional*, defaults to 1280): + The dimension of the cross attention features. + transformer_layers_per_block (`int`, `Tuple[int]`, or `Tuple[Tuple]` , *optional*, defaults to 1): + The number of transformer blocks of type [`~models.attention.BasicTransformerBlock`]. Only relevant for + [`~models.unets.unet_2d_blocks.CrossAttnDownBlock2D`], [`~models.unets.unet_2d_blocks.CrossAttnUpBlock2D`], + [`~models.unets.unet_2d_blocks.UNetMidBlock2DCrossAttn`]. + reverse_transformer_layers_per_block : (`Tuple[Tuple]`, *optional*, defaults to None): + The number of transformer blocks of type [`~models.attention.BasicTransformerBlock`], in the upsampling + blocks of the U-Net. 
Only relevant if `transformer_layers_per_block` is of type `Tuple[Tuple]` and for + [`~models.unets.unet_2d_blocks.CrossAttnDownBlock2D`], [`~models.unets.unet_2d_blocks.CrossAttnUpBlock2D`], + [`~models.unets.unet_2d_blocks.UNetMidBlock2DCrossAttn`]. + encoder_hid_dim (`int`, *optional*, defaults to None): + If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim` + dimension to `cross_attention_dim`. + encoder_hid_dim_type (`str`, *optional*, defaults to `None`): + If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text + embeddings of dimension `cross_attention` according to `encoder_hid_dim_type`. + attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. + num_attention_heads (`int`, *optional*): + The number of attention heads. If not defined, defaults to `attention_head_dim` + resnet_time_scale_shift (`str`, *optional*, defaults to `"default"`): Time scale shift config + for ResNet blocks (see [`~models.resnet.ResnetBlock2D`]). Choose from `default` or `scale_shift`. + class_embed_type (`str`, *optional*, defaults to `None`): + The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`, + `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`. + addition_embed_type (`str`, *optional*, defaults to `None`): + Configures an optional embedding which will be summed with the time embeddings. Choose from `None` or + "text". "text" will use the `TextTimeEmbedding` layer. + addition_time_embed_dim: (`int`, *optional*, defaults to `None`): + Dimension for the timestep embeddings. + num_class_embeds (`int`, *optional*, defaults to `None`): + Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing + class conditioning with `class_embed_type` equal to `None`. + time_embedding_type (`str`, *optional*, defaults to `positional`): + The type of position embedding to use for timesteps. Choose from `positional` or `fourier`. + time_embedding_dim (`int`, *optional*, defaults to `None`): + An optional override for the dimension of the projected time embedding. + time_embedding_act_fn (`str`, *optional*, defaults to `None`): + Optional activation function to use only once on the time embeddings before they are passed to the rest of + the UNet. Choose from `silu`, `mish`, `gelu`, and `swish`. + timestep_post_act (`str`, *optional*, defaults to `None`): + The second activation function to use in timestep embedding. Choose from `silu`, `mish` and `gelu`. + time_cond_proj_dim (`int`, *optional*, defaults to `None`): + The dimension of `cond_proj` layer in the timestep embedding. + conv_in_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_in` layer. + conv_out_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_out` layer. + projection_class_embeddings_input_dim (`int`, *optional*): The dimension of the `class_labels` input when + `class_embed_type="projection"`. Required when `class_embed_type="projection"`. + class_embeddings_concat (`bool`, *optional*, defaults to `False`): Whether to concatenate the time + embeddings with the class embeddings. + mid_block_only_cross_attention (`bool`, *optional*, defaults to `None`): + Whether to use cross attention with the mid block when using the `UNetMidBlock2DSimpleCrossAttn`. 
If + `only_cross_attention` is given as a single boolean and `mid_block_only_cross_attention` is `None`, the + `only_cross_attention` value is used as the value for `mid_block_only_cross_attention`. Default to `False` + otherwise. + """ + + _supports_gradient_checkpointing = True + _no_split_modules = ["BasicTransformerBlock", "ResnetBlock2D", "CrossAttnUpBlock2D"] + + @register_to_config + def __init__( + self, + sample_size: Optional[int] = None, + in_channels: int = 4, + out_channels: int = 4, + center_input_sample: bool = False, + flip_sin_to_cos: bool = True, + freq_shift: int = 0, + down_block_types: Tuple[str] = ( + "CrossAttnDownBlock2D", + "CrossAttnDownBlock2D", + "CrossAttnDownBlock2D", + "DownBlock2D", + ), + mid_block_type: Optional[str] = "UNetMidBlock2DCrossAttn", + up_block_types: Tuple[str] = ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"), + only_cross_attention: Union[bool, Tuple[bool]] = False, + block_out_channels: Tuple[int] = (320, 640, 1280, 1280), + layers_per_block: Union[int, Tuple[int]] = 2, + downsample_padding: int = 1, + mid_block_scale_factor: float = 1, + dropout: float = 0.0, + act_fn: str = "silu", + norm_num_groups: Optional[int] = 32, + norm_eps: float = 1e-5, + cross_attention_dim: Union[int, Tuple[int]] = 1280, + transformer_layers_per_block: Union[int, Tuple[int], Tuple[Tuple]] = 1, + reverse_transformer_layers_per_block: Optional[Tuple[Tuple[int]]] = None, + encoder_hid_dim: Optional[int] = None, + encoder_hid_dim_type: Optional[str] = None, + attention_head_dim: Union[int, Tuple[int]] = 8, + num_attention_heads: Optional[Union[int, Tuple[int]]] = None, + dual_cross_attention: bool = False, + use_linear_projection: bool = False, + class_embed_type: Optional[str] = None, + addition_embed_type: Optional[str] = None, + addition_time_embed_dim: Optional[int] = None, + num_class_embeds: Optional[int] = None, + upcast_attention: bool = False, + resnet_time_scale_shift: str = "default", + resnet_skip_time_act: bool = False, + resnet_out_scale_factor: float = 1.0, + time_embedding_type: str = "positional", + time_embedding_dim: Optional[int] = None, + time_embedding_act_fn: Optional[str] = None, + timestep_post_act: Optional[str] = None, + time_cond_proj_dim: Optional[int] = None, + conv_in_kernel: int = 3, + conv_out_kernel: int = 3, + projection_class_embeddings_input_dim: Optional[int] = None, + attention_type: str = "default", + class_embeddings_concat: bool = False, + mid_block_only_cross_attention: Optional[bool] = None, + cross_attention_norm: Optional[str] = None, + addition_embed_type_num_heads: int = 64, + ): + super().__init__() + + self.sample_size = sample_size + + if num_attention_heads is not None: + raise ValueError( + "At the moment it is not possible to define the number of attention heads via `num_attention_heads` because of a naming issue as described in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131. Passing `num_attention_heads` will only be supported in diffusers v0.19." + ) + + # If `num_attention_heads` is not defined (which is the case for most models) + # it will default to `attention_head_dim`. This looks weird upon first reading it and it is. + # The reason for this behavior is to correct for incorrectly named variables that were introduced + # when this library was created. 
The incorrect naming was only discovered much later in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131 + # Changing `attention_head_dim` to `num_attention_heads` for 40,000+ configurations is too backwards breaking + # which is why we correct for the naming here. + num_attention_heads = num_attention_heads or attention_head_dim + + # Check inputs + self._check_config( + down_block_types=down_block_types, + up_block_types=up_block_types, + only_cross_attention=only_cross_attention, + block_out_channels=block_out_channels, + layers_per_block=layers_per_block, + cross_attention_dim=cross_attention_dim, + transformer_layers_per_block=transformer_layers_per_block, + reverse_transformer_layers_per_block=reverse_transformer_layers_per_block, + attention_head_dim=attention_head_dim, + num_attention_heads=num_attention_heads, + ) + + # input + conv_in_padding = (conv_in_kernel - 1) // 2 + self.conv_in = nn.Conv2d( + in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding + ) + + # time + time_embed_dim, timestep_input_dim = self._set_time_proj( + time_embedding_type, + block_out_channels=block_out_channels, + flip_sin_to_cos=flip_sin_to_cos, + freq_shift=freq_shift, + time_embedding_dim=time_embedding_dim, + ) + + self.time_embedding = TimestepEmbedding( + timestep_input_dim, + time_embed_dim, + act_fn=act_fn, + post_act_fn=timestep_post_act, + cond_proj_dim=time_cond_proj_dim, + ) + + self._set_encoder_hid_proj( + encoder_hid_dim_type, + cross_attention_dim=cross_attention_dim, + encoder_hid_dim=encoder_hid_dim, + ) + + # class embedding + self._set_class_embedding( + class_embed_type, + act_fn=act_fn, + num_class_embeds=num_class_embeds, + projection_class_embeddings_input_dim=projection_class_embeddings_input_dim, + time_embed_dim=time_embed_dim, + timestep_input_dim=timestep_input_dim, + ) + + self._set_add_embedding( + addition_embed_type, + addition_embed_type_num_heads=addition_embed_type_num_heads, + addition_time_embed_dim=addition_time_embed_dim, + cross_attention_dim=cross_attention_dim, + encoder_hid_dim=encoder_hid_dim, + flip_sin_to_cos=flip_sin_to_cos, + freq_shift=freq_shift, + projection_class_embeddings_input_dim=projection_class_embeddings_input_dim, + time_embed_dim=time_embed_dim, + ) + + if time_embedding_act_fn is None: + self.time_embed_act = None + else: + self.time_embed_act = get_activation(time_embedding_act_fn) + + self.down_blocks = nn.ModuleList([]) + self.up_blocks = nn.ModuleList([]) + + if isinstance(only_cross_attention, bool): + if mid_block_only_cross_attention is None: + mid_block_only_cross_attention = only_cross_attention + + only_cross_attention = [only_cross_attention] * len(down_block_types) + + if mid_block_only_cross_attention is None: + mid_block_only_cross_attention = False + + if isinstance(num_attention_heads, int): + num_attention_heads = (num_attention_heads,) * len(down_block_types) + + if isinstance(attention_head_dim, int): + attention_head_dim = (attention_head_dim,) * len(down_block_types) + + if isinstance(cross_attention_dim, int): + cross_attention_dim = (cross_attention_dim,) * len(down_block_types) + + if isinstance(layers_per_block, int): + layers_per_block = [layers_per_block] * len(down_block_types) + + if isinstance(transformer_layers_per_block, int): + transformer_layers_per_block = [transformer_layers_per_block] * len(down_block_types) + + if class_embeddings_concat: + # The time embeddings are concatenated with the class embeddings. 
The dimension of the + # time embeddings passed to the down, middle, and up blocks is twice the dimension of the + # regular time embeddings + blocks_time_embed_dim = time_embed_dim * 2 + else: + blocks_time_embed_dim = time_embed_dim + + # down + output_channel = block_out_channels[0] + for i, down_block_type in enumerate(down_block_types): + input_channel = output_channel + output_channel = block_out_channels[i] + is_final_block = i == len(block_out_channels) - 1 + + down_block = get_down_block( + down_block_type, + num_layers=layers_per_block[i], + transformer_layers_per_block=transformer_layers_per_block[i], + in_channels=input_channel, + out_channels=output_channel, + temb_channels=blocks_time_embed_dim, + add_downsample=not is_final_block, + resnet_eps=norm_eps, + resnet_act_fn=act_fn, + resnet_groups=norm_num_groups, + cross_attention_dim=cross_attention_dim[i], + num_attention_heads=num_attention_heads[i], + downsample_padding=downsample_padding, + dual_cross_attention=dual_cross_attention, + use_linear_projection=use_linear_projection, + only_cross_attention=only_cross_attention[i], + upcast_attention=upcast_attention, + resnet_time_scale_shift=resnet_time_scale_shift, + attention_type=attention_type, + resnet_skip_time_act=resnet_skip_time_act, + resnet_out_scale_factor=resnet_out_scale_factor, + cross_attention_norm=cross_attention_norm, + attention_head_dim=attention_head_dim[i] if attention_head_dim[i] is not None else output_channel, + dropout=dropout, + ) + self.down_blocks.append(down_block) + + # mid + self.mid_block = get_mid_block( + mid_block_type, + temb_channels=blocks_time_embed_dim, + in_channels=block_out_channels[-1], + resnet_eps=norm_eps, + resnet_act_fn=act_fn, + resnet_groups=norm_num_groups, + output_scale_factor=mid_block_scale_factor, + transformer_layers_per_block=transformer_layers_per_block[-1], + num_attention_heads=num_attention_heads[-1], + cross_attention_dim=cross_attention_dim[-1], + dual_cross_attention=dual_cross_attention, + use_linear_projection=use_linear_projection, + mid_block_only_cross_attention=mid_block_only_cross_attention, + upcast_attention=upcast_attention, + resnet_time_scale_shift=resnet_time_scale_shift, + attention_type=attention_type, + resnet_skip_time_act=resnet_skip_time_act, + cross_attention_norm=cross_attention_norm, + attention_head_dim=attention_head_dim[-1], + dropout=dropout, + ) + + # count how many layers upsample the images + self.num_upsamplers = 0 + + # up + reversed_block_out_channels = list(reversed(block_out_channels)) + reversed_num_attention_heads = list(reversed(num_attention_heads)) + reversed_layers_per_block = list(reversed(layers_per_block)) + reversed_cross_attention_dim = list(reversed(cross_attention_dim)) + reversed_transformer_layers_per_block = ( + list(reversed(transformer_layers_per_block)) + if reverse_transformer_layers_per_block is None + else reverse_transformer_layers_per_block + ) + only_cross_attention = list(reversed(only_cross_attention)) + + output_channel = reversed_block_out_channels[0] + for i, up_block_type in enumerate(up_block_types): + is_final_block = i == len(block_out_channels) - 1 + + prev_output_channel = output_channel + output_channel = reversed_block_out_channels[i] + input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] + + # add upsample block for all BUT final layer + if not is_final_block: + add_upsample = True + self.num_upsamplers += 1 + else: + add_upsample = False + + up_block = get_up_block( + up_block_type, + 
num_layers=reversed_layers_per_block[i] + 1, + transformer_layers_per_block=reversed_transformer_layers_per_block[i], + in_channels=input_channel, + out_channels=output_channel, + prev_output_channel=prev_output_channel, + temb_channels=blocks_time_embed_dim, + add_upsample=add_upsample, + resnet_eps=norm_eps, + resnet_act_fn=act_fn, + resolution_idx=i, + resnet_groups=norm_num_groups, + cross_attention_dim=reversed_cross_attention_dim[i], + num_attention_heads=reversed_num_attention_heads[i], + dual_cross_attention=dual_cross_attention, + use_linear_projection=use_linear_projection, + only_cross_attention=only_cross_attention[i], + upcast_attention=upcast_attention, + resnet_time_scale_shift=resnet_time_scale_shift, + attention_type=attention_type, + resnet_skip_time_act=resnet_skip_time_act, + resnet_out_scale_factor=resnet_out_scale_factor, + cross_attention_norm=cross_attention_norm, + attention_head_dim=attention_head_dim[i] if attention_head_dim[i] is not None else output_channel, + dropout=dropout, + ) + self.up_blocks.append(up_block) + prev_output_channel = output_channel + + # out + if norm_num_groups is not None: + self.conv_norm_out = nn.GroupNorm( + num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps + ) + + self.conv_act = get_activation(act_fn) + + else: + self.conv_norm_out = None + self.conv_act = None + + conv_out_padding = (conv_out_kernel - 1) // 2 + self.conv_out = nn.Conv2d( + block_out_channels[0], out_channels, kernel_size=conv_out_kernel, padding=conv_out_padding + ) + + self._set_pos_net_if_use_gligen(attention_type=attention_type, cross_attention_dim=cross_attention_dim) + + def _check_config( + self, + down_block_types: Tuple[str], + up_block_types: Tuple[str], + only_cross_attention: Union[bool, Tuple[bool]], + block_out_channels: Tuple[int], + layers_per_block: Union[int, Tuple[int]], + cross_attention_dim: Union[int, Tuple[int]], + transformer_layers_per_block: Union[int, Tuple[int], Tuple[Tuple[int]]], + reverse_transformer_layers_per_block: bool, + attention_head_dim: int, + num_attention_heads: Optional[Union[int, Tuple[int]]], + ): + if len(down_block_types) != len(up_block_types): + raise ValueError( + f"Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`: {down_block_types}. `up_block_types`: {up_block_types}." + ) + + if len(block_out_channels) != len(down_block_types): + raise ValueError( + f"Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`: {block_out_channels}. `down_block_types`: {down_block_types}." + ) + + if not isinstance(only_cross_attention, bool) and len(only_cross_attention) != len(down_block_types): + raise ValueError( + f"Must provide the same number of `only_cross_attention` as `down_block_types`. `only_cross_attention`: {only_cross_attention}. `down_block_types`: {down_block_types}." + ) + + if not isinstance(num_attention_heads, int) and len(num_attention_heads) != len(down_block_types): + raise ValueError( + f"Must provide the same number of `num_attention_heads` as `down_block_types`. `num_attention_heads`: {num_attention_heads}. `down_block_types`: {down_block_types}." + ) + + if not isinstance(attention_head_dim, int) and len(attention_head_dim) != len(down_block_types): + raise ValueError( + f"Must provide the same number of `attention_head_dim` as `down_block_types`. `attention_head_dim`: {attention_head_dim}. `down_block_types`: {down_block_types}." 
+ ) + + if isinstance(cross_attention_dim, list) and len(cross_attention_dim) != len(down_block_types): + raise ValueError( + f"Must provide the same number of `cross_attention_dim` as `down_block_types`. `cross_attention_dim`: {cross_attention_dim}. `down_block_types`: {down_block_types}." + ) + + if not isinstance(layers_per_block, int) and len(layers_per_block) != len(down_block_types): + raise ValueError( + f"Must provide the same number of `layers_per_block` as `down_block_types`. `layers_per_block`: {layers_per_block}. `down_block_types`: {down_block_types}." + ) + if isinstance(transformer_layers_per_block, list) and reverse_transformer_layers_per_block is None: + for layer_number_per_block in transformer_layers_per_block: + if isinstance(layer_number_per_block, list): + raise ValueError("Must provide 'reverse_transformer_layers_per_block` if using asymmetrical UNet.") + + def _set_time_proj( + self, + time_embedding_type: str, + block_out_channels: int, + flip_sin_to_cos: bool, + freq_shift: float, + time_embedding_dim: int, + ) -> Tuple[int, int]: + if time_embedding_type == "fourier": + time_embed_dim = time_embedding_dim or block_out_channels[0] * 2 + if time_embed_dim % 2 != 0: + raise ValueError(f"`time_embed_dim` should be divisible by 2, but is {time_embed_dim}.") + self.time_proj = GaussianFourierProjection( + time_embed_dim // 2, set_W_to_weight=False, log=False, flip_sin_to_cos=flip_sin_to_cos + ) + timestep_input_dim = time_embed_dim + elif time_embedding_type == "positional": + time_embed_dim = time_embedding_dim or block_out_channels[0] * 4 + + self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift) + timestep_input_dim = block_out_channels[0] + else: + raise ValueError( + f"{time_embedding_type} does not exist. Please make sure to use one of `fourier` or `positional`." + ) + + return time_embed_dim, timestep_input_dim + + def _set_encoder_hid_proj( + self, + encoder_hid_dim_type: Optional[str], + cross_attention_dim: Union[int, Tuple[int]], + encoder_hid_dim: Optional[int], + ): + if encoder_hid_dim_type is None and encoder_hid_dim is not None: + encoder_hid_dim_type = "text_proj" + self.register_to_config(encoder_hid_dim_type=encoder_hid_dim_type) + logger.info("encoder_hid_dim_type defaults to 'text_proj' as `encoder_hid_dim` is defined.") + + if encoder_hid_dim is None and encoder_hid_dim_type is not None: + raise ValueError( + f"`encoder_hid_dim` has to be defined when `encoder_hid_dim_type` is set to {encoder_hid_dim_type}." + ) + + if encoder_hid_dim_type == "text_proj": + self.encoder_hid_proj = nn.Linear(encoder_hid_dim, cross_attention_dim) + elif encoder_hid_dim_type == "text_image_proj": + # image_embed_dim DOESN'T have to be `cross_attention_dim`. To not clutter the __init__ too much + # they are set to `cross_attention_dim` here as this is exactly the required dimension for the currently only use + # case when `addition_embed_type == "text_image_proj"` (Kandinsky 2.1)` + self.encoder_hid_proj = TextImageProjection( + text_embed_dim=encoder_hid_dim, + image_embed_dim=cross_attention_dim, + cross_attention_dim=cross_attention_dim, + ) + elif encoder_hid_dim_type == "image_proj": + # Kandinsky 2.2 + self.encoder_hid_proj = ImageProjection( + image_embed_dim=encoder_hid_dim, + cross_attention_dim=cross_attention_dim, + ) + elif encoder_hid_dim_type is not None: + raise ValueError( + f"encoder_hid_dim_type: {encoder_hid_dim_type} must be None, 'text_proj' or 'text_image_proj'." 
+ ) + else: + self.encoder_hid_proj = None + + def _set_class_embedding( + self, + class_embed_type: Optional[str], + act_fn: str, + num_class_embeds: Optional[int], + projection_class_embeddings_input_dim: Optional[int], + time_embed_dim: int, + timestep_input_dim: int, + ): + if class_embed_type is None and num_class_embeds is not None: + self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim) + elif class_embed_type == "timestep": + self.class_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim, act_fn=act_fn) + elif class_embed_type == "identity": + self.class_embedding = nn.Identity(time_embed_dim, time_embed_dim) + elif class_embed_type == "projection": + if projection_class_embeddings_input_dim is None: + raise ValueError( + "`class_embed_type`: 'projection' requires `projection_class_embeddings_input_dim` be set" + ) + # The projection `class_embed_type` is the same as the timestep `class_embed_type` except + # 1. the `class_labels` inputs are not first converted to sinusoidal embeddings + # 2. it projects from an arbitrary input dimension. + # + # Note that `TimestepEmbedding` is quite general, being mainly linear layers and activations. + # When used for embedding actual timesteps, the timesteps are first converted to sinusoidal embeddings. + # As a result, `TimestepEmbedding` can be passed arbitrary vectors. + self.class_embedding = TimestepEmbedding(projection_class_embeddings_input_dim, time_embed_dim) + elif class_embed_type == "simple_projection": + if projection_class_embeddings_input_dim is None: + raise ValueError( + "`class_embed_type`: 'simple_projection' requires `projection_class_embeddings_input_dim` be set" + ) + self.class_embedding = nn.Linear(projection_class_embeddings_input_dim, time_embed_dim) + else: + self.class_embedding = None + + def _set_add_embedding( + self, + addition_embed_type: str, + addition_embed_type_num_heads: int, + addition_time_embed_dim: Optional[int], + flip_sin_to_cos: bool, + freq_shift: float, + cross_attention_dim: Optional[int], + encoder_hid_dim: Optional[int], + projection_class_embeddings_input_dim: Optional[int], + time_embed_dim: int, + ): + if addition_embed_type == "text": + if encoder_hid_dim is not None: + text_time_embedding_from_dim = encoder_hid_dim + else: + text_time_embedding_from_dim = cross_attention_dim + + self.add_embedding = TextTimeEmbedding( + text_time_embedding_from_dim, time_embed_dim, num_heads=addition_embed_type_num_heads + ) + elif addition_embed_type == "text_image": + # text_embed_dim and image_embed_dim DON'T have to be `cross_attention_dim`. 
To not clutter the __init__ too much + # they are set to `cross_attention_dim` here as this is exactly the required dimension for the currently only use + # case when `addition_embed_type == "text_image"` (Kandinsky 2.1)` + self.add_embedding = TextImageTimeEmbedding( + text_embed_dim=cross_attention_dim, image_embed_dim=cross_attention_dim, time_embed_dim=time_embed_dim + ) + elif addition_embed_type == "text_time": + self.add_time_proj = Timesteps(addition_time_embed_dim, flip_sin_to_cos, freq_shift) + self.add_embedding = TimestepEmbedding(projection_class_embeddings_input_dim, time_embed_dim) + elif addition_embed_type == "image": + # Kandinsky 2.2 + self.add_embedding = ImageTimeEmbedding(image_embed_dim=encoder_hid_dim, time_embed_dim=time_embed_dim) + elif addition_embed_type == "image_hint": + # Kandinsky 2.2 ControlNet + self.add_embedding = ImageHintTimeEmbedding(image_embed_dim=encoder_hid_dim, time_embed_dim=time_embed_dim) + elif addition_embed_type is not None: + raise ValueError(f"addition_embed_type: {addition_embed_type} must be None, 'text' or 'text_image'.") + + def _set_pos_net_if_use_gligen(self, attention_type: str, cross_attention_dim: int): + if attention_type in ["gated", "gated-text-image"]: + positive_len = 768 + if isinstance(cross_attention_dim, int): + positive_len = cross_attention_dim + elif isinstance(cross_attention_dim, (list, tuple)): + positive_len = cross_attention_dim[0] + + feature_type = "text-only" if attention_type == "gated" else "text-image" + self.position_net = GLIGENTextBoundingboxProjection( + positive_len=positive_len, out_dim=cross_attention_dim, feature_type=feature_type + ) + + @property + def attn_processors(self) -> Dict[str, AttentionProcessor]: + r""" + Returns: + `dict` of attention processors: A dictionary containing all attention processors used in the model with + indexed by its weight name. + """ + # set recursively + processors = {} + + def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): + if hasattr(module, "get_processor"): + processors[f"{name}.processor"] = module.get_processor(return_deprecated_lora=True) + + for sub_name, child in module.named_children(): + fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) + + return processors + + for name, module in self.named_children(): + fn_recursive_add_processors(name, module, processors) + + return processors + + def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): + r""" + Sets the attention processor to use to compute attention. + + Parameters: + processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): + The instantiated processor class or a dictionary of processor classes that will be set as the processor + for **all** `Attention` layers. + + If `processor` is a dict, the key needs to define the path to the corresponding cross attention + processor. This is strongly recommended when setting trainable attention processors. + + """ + count = len(self.attn_processors.keys()) + + if isinstance(processor, dict) and len(processor) != count: + raise ValueError( + f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" + f" number of attention layers: {count}. Please make sure to pass {count} processor classes." 
+ ) + + def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): + if hasattr(module, "set_processor"): + if not isinstance(processor, dict): + module.set_processor(processor) + else: + module.set_processor(processor.pop(f"{name}.processor")) + + for sub_name, child in module.named_children(): + fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) + + for name, module in self.named_children(): + fn_recursive_attn_processor(name, module, processor) + + def set_default_attn_processor(self): + """ + Disables custom attention processors and sets the default attention implementation. + """ + if all(proc.__class__ in ADDED_KV_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): + processor = AttnAddedKVProcessor() + elif all(proc.__class__ in CROSS_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): + processor = AttnProcessor() + else: + raise ValueError( + f"Cannot call `set_default_attn_processor` when attention processors are of type {next(iter(self.attn_processors.values()))}" + ) + + self.set_attn_processor(processor) + + def set_attention_slice(self, slice_size: Union[str, int, List[int]] = "auto"): + r""" + Enable sliced attention computation. + + When this option is enabled, the attention module splits the input tensor in slices to compute attention in + several steps. This is useful for saving some memory in exchange for a small decrease in speed. + + Args: + slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`): + When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If + `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is + provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim` + must be a multiple of `slice_size`. + """ + sliceable_head_dims = [] + + def fn_recursive_retrieve_sliceable_dims(module: torch.nn.Module): + if hasattr(module, "set_attention_slice"): + sliceable_head_dims.append(module.sliceable_head_dim) + + for child in module.children(): + fn_recursive_retrieve_sliceable_dims(child) + + # retrieve number of attention layers + for module in self.children(): + fn_recursive_retrieve_sliceable_dims(module) + + num_sliceable_layers = len(sliceable_head_dims) + + if slice_size == "auto": + # half the attention head size is usually a good trade-off between + # speed and memory + slice_size = [dim // 2 for dim in sliceable_head_dims] + elif slice_size == "max": + # make smallest slice possible + slice_size = num_sliceable_layers * [1] + + slice_size = num_sliceable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size + + if len(slice_size) != len(sliceable_head_dims): + raise ValueError( + f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different" + f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}." + ) + + for i in range(len(slice_size)): + size = slice_size[i] + dim = sliceable_head_dims[i] + if size is not None and size > dim: + raise ValueError(f"size {size} has to be smaller or equal to {dim}.") + + # Recursively walk through all the children. 
+ # Any children which exposes the set_attention_slice method + # gets the message + def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]): + if hasattr(module, "set_attention_slice"): + module.set_attention_slice(slice_size.pop()) + + for child in module.children(): + fn_recursive_set_attention_slice(child, slice_size) + + reversed_slice_size = list(reversed(slice_size)) + for module in self.children(): + fn_recursive_set_attention_slice(module, reversed_slice_size) + + def _set_gradient_checkpointing(self, module, value=False): + if hasattr(module, "gradient_checkpointing"): + module.gradient_checkpointing = value + + def enable_freeu(self, s1: float, s2: float, b1: float, b2: float): + r"""Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. + + The suffixes after the scaling factors represent the stage blocks where they are being applied. + + Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of values that + are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. + + Args: + s1 (`float`): + Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to + mitigate the "oversmoothing effect" in the enhanced denoising process. + s2 (`float`): + Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to + mitigate the "oversmoothing effect" in the enhanced denoising process. + b1 (`float`): Scaling factor for stage 1 to amplify the contributions of backbone features. + b2 (`float`): Scaling factor for stage 2 to amplify the contributions of backbone features. + """ + for i, upsample_block in enumerate(self.up_blocks): + setattr(upsample_block, "s1", s1) + setattr(upsample_block, "s2", s2) + setattr(upsample_block, "b1", b1) + setattr(upsample_block, "b2", b2) + + def disable_freeu(self): + """Disables the FreeU mechanism.""" + freeu_keys = {"s1", "s2", "b1", "b2"} + for i, upsample_block in enumerate(self.up_blocks): + for k in freeu_keys: + if hasattr(upsample_block, k) or getattr(upsample_block, k, None) is not None: + setattr(upsample_block, k, None) + + def fuse_qkv_projections(self): + """ + Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) + are fused. For cross-attention modules, key and value projection matrices are fused. + + + + This API is 🧪 experimental. + + + """ + self.original_attn_processors = None + + for _, attn_processor in self.attn_processors.items(): + if "Added" in str(attn_processor.__class__.__name__): + raise ValueError("`fuse_qkv_projections()` is not supported for models having added KV projections.") + + self.original_attn_processors = self.attn_processors + + for module in self.modules(): + if isinstance(module, Attention): + module.fuse_projections(fuse=True) + + def unfuse_qkv_projections(self): + """Disables the fused QKV projection if enabled. + + + + This API is 🧪 experimental. + + + + """ + if self.original_attn_processors is not None: + self.set_attn_processor(self.original_attn_processors) + + def get_time_embed( + self, sample: torch.Tensor, timestep: Union[torch.Tensor, float, int] + ) -> Optional[torch.Tensor]: + timesteps = timestep + if not torch.is_tensor(timesteps): + # TODO: this requires sync between CPU and GPU. 
So try to pass timesteps as tensors if you can + # This would be a good case for the `match` statement (Python 3.10+) + is_mps = sample.device.type == "mps" + if isinstance(timestep, float): + dtype = torch.float32 if is_mps else torch.float64 + else: + dtype = torch.int32 if is_mps else torch.int64 + timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device) + elif len(timesteps.shape) == 0: + timesteps = timesteps[None].to(sample.device) + + # broadcast to batch dimension in a way that's compatible with ONNX/Core ML + timesteps = timesteps.expand(sample.shape[0]) + + t_emb = self.time_proj(timesteps) + # `Timesteps` does not contain any weights and will always return f32 tensors + # but time_embedding might actually be running in fp16. so we need to cast here. + # there might be better ways to encapsulate this. + t_emb = t_emb.to(dtype=sample.dtype) + return t_emb + + def get_class_embed(self, sample: torch.Tensor, class_labels: Optional[torch.Tensor]) -> Optional[torch.Tensor]: + class_emb = None + if self.class_embedding is not None: + if class_labels is None: + raise ValueError("class_labels should be provided when num_class_embeds > 0") + + if self.config.class_embed_type == "timestep": + class_labels = self.time_proj(class_labels) + + # `Timesteps` does not contain any weights and will always return f32 tensors + # there might be better ways to encapsulate this. + class_labels = class_labels.to(dtype=sample.dtype) + + class_emb = self.class_embedding(class_labels).to(dtype=sample.dtype) + return class_emb + + def get_aug_embed( + self, emb: torch.Tensor, encoder_hidden_states: torch.Tensor, added_cond_kwargs: Dict[str, Any] + ) -> Optional[torch.Tensor]: + aug_emb = None + if self.config.addition_embed_type == "text": + aug_emb = self.add_embedding(encoder_hidden_states) + elif self.config.addition_embed_type == "text_image": + # Kandinsky 2.1 - style + if "image_embeds" not in added_cond_kwargs: + raise ValueError( + f"{self.__class__} has the config param `addition_embed_type` set to 'text_image' which requires the keyword argument `image_embeds` to be passed in `added_cond_kwargs`" + ) + + image_embs = added_cond_kwargs.get("image_embeds") + text_embs = added_cond_kwargs.get("text_embeds", encoder_hidden_states) + aug_emb = self.add_embedding(text_embs, image_embs) + elif self.config.addition_embed_type == "text_time": + # SDXL - style + if "text_embeds" not in added_cond_kwargs: + raise ValueError( + f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `text_embeds` to be passed in `added_cond_kwargs`" + ) + text_embeds = added_cond_kwargs.get("text_embeds") + if "time_ids" not in added_cond_kwargs: + raise ValueError( + f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `time_ids` to be passed in `added_cond_kwargs`" + ) + time_ids = added_cond_kwargs.get("time_ids") + time_embeds = self.add_time_proj(time_ids.flatten()) + time_embeds = time_embeds.reshape((text_embeds.shape[0], -1)) + add_embeds = torch.concat([text_embeds, time_embeds], dim=-1) + add_embeds = add_embeds.to(emb.dtype) + aug_emb = self.add_embedding(add_embeds) + elif self.config.addition_embed_type == "image": + # Kandinsky 2.2 - style + if "image_embeds" not in added_cond_kwargs: + raise ValueError( + f"{self.__class__} has the config param `addition_embed_type` set to 'image' which requires the keyword argument `image_embeds` to be passed in `added_cond_kwargs`" + ) 
+ image_embs = added_cond_kwargs.get("image_embeds") + aug_emb = self.add_embedding(image_embs) + elif self.config.addition_embed_type == "image_hint": + # Kandinsky 2.2 - style + if "image_embeds" not in added_cond_kwargs or "hint" not in added_cond_kwargs: + raise ValueError( + f"{self.__class__} has the config param `addition_embed_type` set to 'image_hint' which requires the keyword arguments `image_embeds` and `hint` to be passed in `added_cond_kwargs`" + ) + image_embs = added_cond_kwargs.get("image_embeds") + hint = added_cond_kwargs.get("hint") + aug_emb = self.add_embedding(image_embs, hint) + return aug_emb + + def process_encoder_hidden_states( + self, encoder_hidden_states: torch.Tensor, added_cond_kwargs: Dict[str, Any] + ) -> torch.Tensor: + if self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "text_proj": + encoder_hidden_states = self.encoder_hid_proj(encoder_hidden_states) + elif self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "text_image_proj": + # Kandinsky 2.1 - style + if "image_embeds" not in added_cond_kwargs: + raise ValueError( + f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'text_image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`" + ) + + image_embeds = added_cond_kwargs.get("image_embeds") + encoder_hidden_states = self.encoder_hid_proj(encoder_hidden_states, image_embeds) + elif self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "image_proj": + # Kandinsky 2.2 - style + if "image_embeds" not in added_cond_kwargs: + raise ValueError( + f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`" + ) + image_embeds = added_cond_kwargs.get("image_embeds") + encoder_hidden_states = self.encoder_hid_proj(image_embeds) + elif self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "ip_image_proj": + if "image_embeds" not in added_cond_kwargs: + raise ValueError( + f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'ip_image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`" + ) + image_embeds = added_cond_kwargs.get("image_embeds") + image_embeds = self.encoder_hid_proj(image_embeds) + encoder_hidden_states = (encoder_hidden_states, image_embeds) + return encoder_hidden_states + + def forward( + self, + sample: torch.Tensor, + timestep: Union[torch.Tensor, float, int], + encoder_hidden_states: torch.Tensor, + class_labels: Optional[torch.Tensor] = None, + timestep_cond: Optional[torch.Tensor] = None, + attention_mask: Optional[torch.Tensor] = None, + cross_attention_kwargs: Optional[Dict[str, Any]] = None, + added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None, + down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None, + mid_block_additional_residual: Optional[torch.Tensor] = None, + down_intrablock_additional_residuals: Optional[Tuple[torch.Tensor]] = None, + encoder_attention_mask: Optional[torch.Tensor] = None, + controls: Optional[Dict[str, torch.Tensor]] = None, + return_dict: bool = True, + ) -> Union[UNet2DConditionOutput, Tuple]: + r""" + The [`UNet2DConditionModel`] forward method. + + Args: + sample (`torch.Tensor`): + The noisy input tensor with the following shape `(batch, channel, height, width)`. + timestep (`torch.Tensor` or `float` or `int`): The number of timesteps to denoise an input. 
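+            controls (`dict`, *optional*, defaults to `None`):
+                ControlNeXt control signal. Expected to hold the control feature map under the key `"out"` and the
+                control strength under `"scale"`. When provided, the control features are renormalized to the mean
+                and std of the latent sample, resized to the latent resolution, and added to the sample right after
+                `conv_in` (see the `is_controlnext` branch in the forward body).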
+ encoder_hidden_states (`torch.Tensor`): + The encoder hidden states with shape `(batch, sequence_length, feature_dim)`. + class_labels (`torch.Tensor`, *optional*, defaults to `None`): + Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. + timestep_cond: (`torch.Tensor`, *optional*, defaults to `None`): + Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed + through the `self.time_embedding` layer to obtain the timestep embeddings. + attention_mask (`torch.Tensor`, *optional*, defaults to `None`): + An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask + is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large + negative values to the attention scores corresponding to "discard" tokens. + cross_attention_kwargs (`dict`, *optional*): + A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under + `self.processor` in + [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). + added_cond_kwargs: (`dict`, *optional*): + A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that + are passed along to the UNet blocks. + down_block_additional_residuals: (`tuple` of `torch.Tensor`, *optional*): + A tuple of tensors that if specified are added to the residuals of down unet blocks. + mid_block_additional_residual: (`torch.Tensor`, *optional*): + A tensor that if specified is added to the residual of the middle unet block. + down_intrablock_additional_residuals (`tuple` of `torch.Tensor`, *optional*): + additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) + encoder_attention_mask (`torch.Tensor`): + A cross-attention mask of shape `(batch, sequence_length)` is applied to `encoder_hidden_states`. If + `True` the mask is kept, otherwise if `False` it is discarded. Mask will be converted into a bias, + which adds large negative values to the attention scores corresponding to "discard" tokens. + return_dict (`bool`, *optional*, defaults to `True`): + Whether or not to return a [`~models.unets.unet_2d_condition.UNet2DConditionOutput`] instead of a plain + tuple. + + Returns: + [`~models.unets.unet_2d_condition.UNet2DConditionOutput`] or `tuple`: + If `return_dict` is True, an [`~models.unets.unet_2d_condition.UNet2DConditionOutput`] is returned, + otherwise a `tuple` is returned where the first element is the sample tensor. + """ + # By default samples have to be AT least a multiple of the overall upsampling factor. + # The overall upsampling factor is equal to 2 ** (# num of upsampling layers). + # However, the upsampling interpolation output size can be forced to fit any upsampling size + # on the fly if necessary. + default_overall_up_factor = 2**self.num_upsamplers + + # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor` + forward_upsample_size = False + upsample_size = None + + for dim in sample.shape[-2:]: + if dim % default_overall_up_factor != 0: + # Forward upsample size to force interpolation output size. 
+ forward_upsample_size = True + break + + # ensure attention_mask is a bias, and give it a singleton query_tokens dimension + # expects mask of shape: + # [batch, key_tokens] + # adds singleton query_tokens dimension: + # [batch, 1, key_tokens] + # this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes: + # [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn) + # [batch * heads, query_tokens, key_tokens] (e.g. xformers or classic attn) + if attention_mask is not None: + # assume that mask is expressed as: + # (1 = keep, 0 = discard) + # convert mask into a bias that can be added to attention scores: + # (keep = +0, discard = -10000.0) + attention_mask = (1 - attention_mask.to(sample.dtype)) * -10000.0 + attention_mask = attention_mask.unsqueeze(1) + + # convert encoder_attention_mask to a bias the same way we do for attention_mask + if encoder_attention_mask is not None: + encoder_attention_mask = (1 - encoder_attention_mask.to(sample.dtype)) * -10000.0 + encoder_attention_mask = encoder_attention_mask.unsqueeze(1) + + # 0. center input if necessary + if self.config.center_input_sample: + sample = 2 * sample - 1.0 + + # 1. time + t_emb = self.get_time_embed(sample=sample, timestep=timestep) + emb = self.time_embedding(t_emb, timestep_cond) + aug_emb = None + + class_emb = self.get_class_embed(sample=sample, class_labels=class_labels) + if class_emb is not None: + if self.config.class_embeddings_concat: + emb = torch.cat([emb, class_emb], dim=-1) + else: + emb = emb + class_emb + + aug_emb = self.get_aug_embed( + emb=emb, encoder_hidden_states=encoder_hidden_states, added_cond_kwargs=added_cond_kwargs + ) + if self.config.addition_embed_type == "image_hint": + aug_emb, hint = aug_emb + sample = torch.cat([sample, hint], dim=1) + + emb = emb + aug_emb if aug_emb is not None else emb + + if self.time_embed_act is not None: + emb = self.time_embed_act(emb) + + encoder_hidden_states = self.process_encoder_hidden_states( + encoder_hidden_states=encoder_hidden_states, added_cond_kwargs=added_cond_kwargs + ) + + # 2. pre-process + sample = self.conv_in(sample) + + # 2.5 GLIGEN position net + if cross_attention_kwargs is not None and cross_attention_kwargs.get("gligen", None) is not None: + cross_attention_kwargs = cross_attention_kwargs.copy() + gligen_args = cross_attention_kwargs.pop("gligen") + cross_attention_kwargs["gligen"] = {"objs": self.position_net(**gligen_args)} + + # 3. down + # we're popping the `scale` instead of getting it because otherwise `scale` will be propagated + # to the internal blocks and will raise deprecation warnings. this will be confusing for our users. 
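+        # ControlNeXt: the `is_controlnext` branch below injects the control features by first matching their
+        # per-sample mean/std to those of `sample` (so their scale is aligned with the latents), then resizing
+        # them to the latent resolution and adding them to `sample`, weighted by `controls["scale"]`.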
+ if cross_attention_kwargs is not None: + cross_attention_kwargs = cross_attention_kwargs.copy() + lora_scale = cross_attention_kwargs.pop("scale", 1.0) + else: + lora_scale = 1.0 + + if USE_PEFT_BACKEND: + # weight the lora layers by setting `lora_scale` for each PEFT layer + scale_lora_layers(self, lora_scale) + + is_controlnet = mid_block_additional_residual is not None and down_block_additional_residuals is not None + is_controlnext = controls is not None + # using new arg down_intrablock_additional_residuals for T2I-Adapters, to distinguish from controlnets + is_adapter = down_intrablock_additional_residuals is not None + # maintain backward compatibility for legacy usage, where + # T2I-Adapter and ControlNet both use down_block_additional_residuals arg + # but can only use one or the other + if not is_adapter and mid_block_additional_residual is None and down_block_additional_residuals is not None: + deprecate( + "T2I should not use down_block_additional_residuals", + "1.3.0", + "Passing intrablock residual connections with `down_block_additional_residuals` is deprecated \ + and will be removed in diffusers 1.3.0. `down_block_additional_residuals` should only be used \ + for ControlNet. Please make sure use `down_intrablock_additional_residuals` instead. ", + standard_warn=False, + ) + down_intrablock_additional_residuals = down_block_additional_residuals + is_adapter = True + + down_block_res_samples = (sample,) + + if is_controlnext: + scale = controls['scale'] + controls = controls['out'].to(sample) + mean_latents, std_latents = torch.mean(sample, dim=(1, 2, 3), keepdim=True), torch.std(sample, dim=(1, 2, 3), keepdim=True) + mean_control, std_control = torch.mean(controls, dim=(1, 2, 3), keepdim=True), torch.std(controls, dim=(1, 2, 3), keepdim=True) + controls = (controls - mean_control) * (std_latents / (std_control + 1e-12)) + mean_latents + controls = nn.functional.adaptive_avg_pool2d(controls, sample.shape[-2:]) + sample = sample + controls * scale + + for i, downsample_block in enumerate(self.down_blocks): + if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention: + # For t2i-adapter CrossAttnDownBlock2D + additional_residuals = {} + if is_adapter and len(down_intrablock_additional_residuals) > 0: + additional_residuals["additional_residuals"] = down_intrablock_additional_residuals.pop(0) + + sample, res_samples = downsample_block( + hidden_states=sample, + temb=emb, + encoder_hidden_states=encoder_hidden_states, + attention_mask=attention_mask, + cross_attention_kwargs=cross_attention_kwargs, + encoder_attention_mask=encoder_attention_mask, + **additional_residuals, + ) + else: + sample, res_samples = downsample_block(hidden_states=sample, temb=emb) + if is_adapter and len(down_intrablock_additional_residuals) > 0: + sample += down_intrablock_additional_residuals.pop(0) + + down_block_res_samples += res_samples + + if is_controlnet: + new_down_block_res_samples = () + + for down_block_res_sample, down_block_additional_residual in zip( + down_block_res_samples, down_block_additional_residuals + ): + down_block_res_sample = down_block_res_sample + down_block_additional_residual + new_down_block_res_samples = new_down_block_res_samples + (down_block_res_sample,) + + down_block_res_samples = new_down_block_res_samples + + # 4. 
mid + if self.mid_block is not None: + if hasattr(self.mid_block, "has_cross_attention") and self.mid_block.has_cross_attention: + sample = self.mid_block( + sample, + emb, + encoder_hidden_states=encoder_hidden_states, + attention_mask=attention_mask, + cross_attention_kwargs=cross_attention_kwargs, + encoder_attention_mask=encoder_attention_mask, + ) + else: + sample = self.mid_block(sample, emb) + + # To support T2I-Adapter-XL + if ( + is_adapter + and len(down_intrablock_additional_residuals) > 0 + and sample.shape == down_intrablock_additional_residuals[0].shape + ): + sample += down_intrablock_additional_residuals.pop(0) + + if is_controlnet: + sample = sample + mid_block_additional_residual + + # 5. up + for i, upsample_block in enumerate(self.up_blocks): + is_final_block = i == len(self.up_blocks) - 1 + + res_samples = down_block_res_samples[-len(upsample_block.resnets):] + down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] + + # if we have not reached the final block and need to forward the + # upsample size, we do it here + if not is_final_block and forward_upsample_size: + upsample_size = down_block_res_samples[-1].shape[2:] + + if hasattr(upsample_block, "has_cross_attention") and upsample_block.has_cross_attention: + sample = upsample_block( + hidden_states=sample, + temb=emb, + res_hidden_states_tuple=res_samples, + encoder_hidden_states=encoder_hidden_states, + cross_attention_kwargs=cross_attention_kwargs, + upsample_size=upsample_size, + attention_mask=attention_mask, + encoder_attention_mask=encoder_attention_mask, + ) + else: + sample = upsample_block( + hidden_states=sample, + temb=emb, + res_hidden_states_tuple=res_samples, + upsample_size=upsample_size, + ) + + # 6. post-process + if self.conv_norm_out: + sample = self.conv_norm_out(sample) + sample = self.conv_act(sample) + sample = self.conv_out(sample) + + if USE_PEFT_BACKEND: + # remove `lora_scale` from each PEFT layer + unscale_lora_layers(self, lora_scale) + + if not return_dict: + return (sample,) + + return UNet2DConditionOutput(sample=sample) diff --git a/ControlNeXt-SDXL-Training/pipeline/pipeline_controlnext.py b/ControlNeXt-SDXL-Training/pipeline/pipeline_controlnext.py new file mode 100644 index 0000000..fa6d2cc --- /dev/null +++ b/ControlNeXt-SDXL-Training/pipeline/pipeline_controlnext.py @@ -0,0 +1,1378 @@ +# Copyright 2024 The HuggingFace Team. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
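+
+# Illustrative usage sketch (comments only, not executed and not part of the pipeline implementation below).
+# It assumes an SDXL base checkpoint plus a ControlNeXt `unet`/`controlnet` pair loaded elsewhere; `unet`,
+# `controlnet`, and `condition_image` are placeholder names:
+#
+#     import torch
+#     from pipeline.pipeline_controlnext import StableDiffusionXLControlNeXtPipeline
+#
+#     pipe = StableDiffusionXLControlNeXtPipeline.from_pretrained(
+#         "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, controlnet=controlnet, torch_dtype=torch.float16
+#     ).to("cuda")
+#     image = pipe("a photo of a cat", controlnet_image=condition_image, controlnet_scale=1.0).images[0]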
+ +import inspect +from typing import Any, Callable, Dict, List, Optional, Tuple, Union +from packaging import version +import torch +from transformers import ( + CLIPImageProcessor, + CLIPTextModel, + CLIPTextModelWithProjection, + CLIPTokenizer, + CLIPVisionModelWithProjection, +) + +from diffusers.callbacks import MultiPipelineCallbacks, PipelineCallback +from diffusers.image_processor import PipelineImageInput, VaeImageProcessor +from diffusers.loaders import ( + FromSingleFileMixin, + IPAdapterMixin, + StableDiffusionXLLoraLoaderMixin, + TextualInversionLoaderMixin, +) +from diffusers.models import AutoencoderKL, ImageProjection, UNet2DConditionModel +from diffusers.models.attention_processor import ( + AttnProcessor2_0, + FusedAttnProcessor2_0, + LoRAAttnProcessor2_0, + LoRAXFormersAttnProcessor, + XFormersAttnProcessor, +) +from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel +from models.controlnet import ControlNetModel +from diffusers.models.lora import adjust_lora_scale_text_encoder +from diffusers.schedulers import KarrasDiffusionSchedulers +from diffusers.utils import ( + USE_PEFT_BACKEND, + deprecate, + is_invisible_watermark_available, + is_torch_xla_available, + logging, + replace_example_docstring, + scale_lora_layers, + unscale_lora_layers, +) +from diffusers.utils.torch_utils import randn_tensor +from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin +from diffusers.pipelines.stable_diffusion_xl.pipeline_output import StableDiffusionXLPipelineOutput + +if is_invisible_watermark_available(): + from diffusers.pipelines.stable_diffusion_xl.watermark import StableDiffusionXLWatermarker + +if is_torch_xla_available(): + import torch_xla.core.xla_model as xm + + XLA_AVAILABLE = True +else: + XLA_AVAILABLE = False + + +logger = logging.get_logger(__name__) # pylint: disable=invalid-name + +EXAMPLE_DOC_STRING = """ + Examples: + ```py + >>> import torch + >>> from diffusers import StableDiffusionXLPipeline + + >>> pipe = StableDiffusionXLPipeline.from_pretrained( + ... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 + ... ) + >>> pipe = pipe.to("cuda") + + >>> prompt = "a photo of an astronaut riding a horse on mars" + >>> image = pipe(prompt).images[0] + ``` +""" + + +# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg +def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0): + """ + Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and + Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). 
See Section 3.4 + """ + std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True) + std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True) + # rescale the results from guidance (fixes overexposure) + noise_pred_rescaled = noise_cfg * (std_text / std_cfg) + # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images + noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg + return noise_cfg + + +# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps +def retrieve_timesteps( + scheduler, + num_inference_steps: Optional[int] = None, + device: Optional[Union[str, torch.device]] = None, + timesteps: Optional[List[int]] = None, + sigmas: Optional[List[float]] = None, + **kwargs, +): + """ + Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles + custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`. + + Args: + scheduler (`SchedulerMixin`): + The scheduler to get timesteps from. + num_inference_steps (`int`): + The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps` + must be `None`. + device (`str` or `torch.device`, *optional*): + The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. + timesteps (`List[int]`, *optional*): + Custom timesteps used to override the timestep spacing strategy of the scheduler. If `timesteps` is passed, + `num_inference_steps` and `sigmas` must be `None`. + sigmas (`List[float]`, *optional*): + Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed, + `num_inference_steps` and `timesteps` must be `None`. + + Returns: + `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the + second element is the number of inference steps. + """ + if timesteps is not None and sigmas is not None: + raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values") + if timesteps is not None: + accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) + if not accepts_timesteps: + raise ValueError( + f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" + f" timestep schedules. Please check whether you are using the correct scheduler." + ) + scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) + timesteps = scheduler.timesteps + num_inference_steps = len(timesteps) + elif sigmas is not None: + accept_sigmas = "sigmas" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) + if not accept_sigmas: + raise ValueError( + f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" + f" sigmas schedules. Please check whether you are using the correct scheduler." 
+ ) + scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs) + timesteps = scheduler.timesteps + num_inference_steps = len(timesteps) + else: + scheduler.set_timesteps(num_inference_steps, device=device, **kwargs) + timesteps = scheduler.timesteps + return timesteps, num_inference_steps + + +class StableDiffusionXLControlNeXtPipeline( + DiffusionPipeline, + StableDiffusionMixin, + FromSingleFileMixin, + StableDiffusionXLLoraLoaderMixin, + TextualInversionLoaderMixin, + IPAdapterMixin, +): + r""" + Pipeline for text-to-image generation using Stable Diffusion XL. + + This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the + library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + + The pipeline also inherits the following loading methods: + - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings + - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files + - [`~loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights`] for loading LoRA weights + - [`~loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights`] for saving LoRA weights + - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters + + Args: + vae ([`AutoencoderKL`]): + Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + text_encoder ([`CLIPTextModel`]): + Frozen text-encoder. Stable Diffusion XL uses the text portion of + [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically + the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. + text_encoder_2 ([` CLIPTextModelWithProjection`]): + Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of + [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), + specifically the + [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) + variant. + tokenizer (`CLIPTokenizer`): + Tokenizer of class + [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). + tokenizer_2 (`CLIPTokenizer`): + Second Tokenizer of class + [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). + unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. + scheduler ([`SchedulerMixin`]): + A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of + [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. + force_zeros_for_empty_prompt (`bool`, *optional*, defaults to `"True"`): + Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of + `stabilityai/stable-diffusion-xl-base-1-0`. + add_watermarker (`bool`, *optional*): + Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to + watermark output images. If not defined, it will default to True if the package is installed, otherwise no + watermarker will be used. 
+ """ + + model_cpu_offload_seq = "text_encoder->text_encoder_2->image_encoder->unet->vae" + _optional_components = [ + "tokenizer", + "tokenizer_2", + "text_encoder", + "text_encoder_2", + "image_encoder", + "feature_extractor", + ] + _callback_tensor_inputs = [ + "latents", + "prompt_embeds", + "negative_prompt_embeds", + "add_text_embeds", + "add_time_ids", + "negative_pooled_prompt_embeds", + "negative_add_time_ids", + ] + + def __init__( + self, + vae: AutoencoderKL, + text_encoder: CLIPTextModel, + text_encoder_2: CLIPTextModelWithProjection, + tokenizer: CLIPTokenizer, + tokenizer_2: CLIPTokenizer, + unet: UNet2DConditionModel, + scheduler: KarrasDiffusionSchedulers, + controlnet: Optional[Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel]] = None, + image_encoder: CLIPVisionModelWithProjection = None, + feature_extractor: CLIPImageProcessor = None, + force_zeros_for_empty_prompt: bool = True, + add_watermarker: Optional[bool] = None, + ): + super().__init__() + + self.register_modules( + vae=vae, + text_encoder=text_encoder, + text_encoder_2=text_encoder_2, + tokenizer=tokenizer, + tokenizer_2=tokenizer_2, + unet=unet, + scheduler=scheduler, + image_encoder=image_encoder, + feature_extractor=feature_extractor, + controlnet=controlnet, + ) + self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt) + self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) + self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) + self.control_image_processor = VaeImageProcessor( + vae_scale_factor=self.vae_scale_factor, do_convert_rgb=True, do_normalize=False + ) + + self.default_sample_size = self.unet.config.sample_size + + add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available() + + if add_watermarker: + self.watermark = StableDiffusionXLWatermarker() + else: + self.watermark = None + + def prepare_image( + self, + image, + width, + height, + batch_size, + num_images_per_prompt, + device, + dtype, + do_classifier_free_guidance=False, + guess_mode=False, + ): + image = self.control_image_processor.preprocess(image, height=height, width=width).to(dtype=torch.float32) + image_batch_size = image.shape[0] + + if image_batch_size == 1: + repeat_by = batch_size + else: + # image batch size is the same as prompt batch size + repeat_by = num_images_per_prompt + + image = image.repeat_interleave(repeat_by, dim=0) + + image = image.to(device=device, dtype=dtype) + + if do_classifier_free_guidance and not guess_mode: + image = torch.cat([image] * 2) + + return image + + def encode_prompt( + self, + prompt: str, + prompt_2: Optional[str] = None, + device: Optional[torch.device] = None, + num_images_per_prompt: int = 1, + do_classifier_free_guidance: bool = True, + negative_prompt: Optional[str] = None, + negative_prompt_2: Optional[str] = None, + prompt_embeds: Optional[torch.Tensor] = None, + negative_prompt_embeds: Optional[torch.Tensor] = None, + pooled_prompt_embeds: Optional[torch.Tensor] = None, + negative_pooled_prompt_embeds: Optional[torch.Tensor] = None, + lora_scale: Optional[float] = None, + clip_skip: Optional[int] = None, + ): + r""" + Encodes the prompt into text encoder hidden states. + + Args: + prompt (`str` or `List[str]`, *optional*): + prompt to be encoded + prompt_2 (`str` or `List[str]`, *optional*): + The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. 
If not defined, `prompt` is + used in both text-encoders + device: (`torch.device`): + torch device + num_images_per_prompt (`int`): + number of images that should be generated per prompt + do_classifier_free_guidance (`bool`): + whether to use classifier free guidance or not + negative_prompt (`str` or `List[str]`, *optional*): + The prompt or prompts not to guide the image generation. If not defined, one has to pass + `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is + less than `1`). + negative_prompt_2 (`str` or `List[str]`, *optional*): + The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and + `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders + prompt_embeds (`torch.Tensor`, *optional*): + Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not + provided, text embeddings will be generated from `prompt` input argument. + negative_prompt_embeds (`torch.Tensor`, *optional*): + Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt + weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input + argument. + pooled_prompt_embeds (`torch.Tensor`, *optional*): + Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. + If not provided, pooled text embeddings will be generated from `prompt` input argument. + negative_pooled_prompt_embeds (`torch.Tensor`, *optional*): + Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt + weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` + input argument. + lora_scale (`float`, *optional*): + A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. + clip_skip (`int`, *optional*): + Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that + the output of the pre-final layer will be used for computing the prompt embeddings. 
+ """ + device = device or self._execution_device + + # set lora scale so that monkey patched LoRA + # function of text encoder can correctly access it + if lora_scale is not None and isinstance(self, StableDiffusionXLLoraLoaderMixin): + self._lora_scale = lora_scale + + # dynamically adjust the LoRA scale + if self.text_encoder is not None: + if not USE_PEFT_BACKEND: + adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) + else: + scale_lora_layers(self.text_encoder, lora_scale) + + if self.text_encoder_2 is not None: + if not USE_PEFT_BACKEND: + adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale) + else: + scale_lora_layers(self.text_encoder_2, lora_scale) + + prompt = [prompt] if isinstance(prompt, str) else prompt + + if prompt is not None: + batch_size = len(prompt) + else: + batch_size = prompt_embeds.shape[0] + + # Define tokenizers and text encoders + tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2] + text_encoders = ( + [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2] + ) + + if prompt_embeds is None: + prompt_2 = prompt_2 or prompt + prompt_2 = [prompt_2] if isinstance(prompt_2, str) else prompt_2 + + # textual inversion: process multi-vector tokens if necessary + prompt_embeds_list = [] + prompts = [prompt, prompt_2] + for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders): + if isinstance(self, TextualInversionLoaderMixin): + prompt = self.maybe_convert_prompt(prompt, tokenizer) + + text_inputs = tokenizer( + prompt, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + + text_input_ids = text_inputs.input_ids + untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids + + if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( + text_input_ids, untruncated_ids + ): + removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1: -1]) + logger.warning( + "The following part of your input was truncated because CLIP can only handle sequences up to" + f" {tokenizer.model_max_length} tokens: {removed_text}" + ) + + prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True) + + # We are only ALWAYS interested in the pooled output of the final text encoder + pooled_prompt_embeds = prompt_embeds[0] + if clip_skip is None: + prompt_embeds = prompt_embeds.hidden_states[-2] + else: + # "2" because SDXL always indexes from the penultimate layer. 
+ prompt_embeds = prompt_embeds.hidden_states[-(clip_skip + 2)] + + prompt_embeds_list.append(prompt_embeds) + + prompt_embeds = torch.concat(prompt_embeds_list, dim=-1) + + # get unconditional embeddings for classifier free guidance + zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt + if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt: + negative_prompt_embeds = torch.zeros_like(prompt_embeds) + negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds) + elif do_classifier_free_guidance and negative_prompt_embeds is None: + negative_prompt = negative_prompt or "" + negative_prompt_2 = negative_prompt_2 or negative_prompt + + # normalize str to list + negative_prompt = batch_size * [negative_prompt] if isinstance(negative_prompt, str) else negative_prompt + negative_prompt_2 = ( + batch_size * [negative_prompt_2] if isinstance(negative_prompt_2, str) else negative_prompt_2 + ) + + uncond_tokens: List[str] + if prompt is not None and type(prompt) is not type(negative_prompt): + raise TypeError( + f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" + f" {type(prompt)}." + ) + elif batch_size != len(negative_prompt): + raise ValueError( + f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" + f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" + " the batch size of `prompt`." + ) + else: + uncond_tokens = [negative_prompt, negative_prompt_2] + + negative_prompt_embeds_list = [] + for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders): + if isinstance(self, TextualInversionLoaderMixin): + negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer) + + max_length = prompt_embeds.shape[1] + uncond_input = tokenizer( + negative_prompt, + padding="max_length", + max_length=max_length, + truncation=True, + return_tensors="pt", + ) + + negative_prompt_embeds = text_encoder( + uncond_input.input_ids.to(device), + output_hidden_states=True, + ) + # We are only ALWAYS interested in the pooled output of the final text encoder + negative_pooled_prompt_embeds = negative_prompt_embeds[0] + negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2] + + negative_prompt_embeds_list.append(negative_prompt_embeds) + + negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1) + + if self.text_encoder_2 is not None: + prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device) + else: + prompt_embeds = prompt_embeds.to(dtype=self.unet.dtype, device=device) + + bs_embed, seq_len, _ = prompt_embeds.shape + # duplicate text embeddings for each generation per prompt, using mps friendly method + prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) + prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) + + if do_classifier_free_guidance: + # duplicate unconditional embeddings for each generation per prompt, using mps friendly method + seq_len = negative_prompt_embeds.shape[1] + + if self.text_encoder_2 is not None: + negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device) + else: + negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.unet.dtype, device=device) + + negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) + negative_prompt_embeds = 
negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) + + pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( + bs_embed * num_images_per_prompt, -1 + ) + if do_classifier_free_guidance: + negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( + bs_embed * num_images_per_prompt, -1 + ) + + if self.text_encoder is not None: + if isinstance(self, StableDiffusionXLLoraLoaderMixin) and USE_PEFT_BACKEND: + # Retrieve the original scale by scaling back the LoRA layers + unscale_lora_layers(self.text_encoder, lora_scale) + + if self.text_encoder_2 is not None: + if isinstance(self, StableDiffusionXLLoraLoaderMixin) and USE_PEFT_BACKEND: + # Retrieve the original scale by scaling back the LoRA layers + unscale_lora_layers(self.text_encoder_2, lora_scale) + + return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds + + # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_image + def encode_image(self, image, device, num_images_per_prompt, output_hidden_states=None): + dtype = next(self.image_encoder.parameters()).dtype + + if not isinstance(image, torch.Tensor): + image = self.feature_extractor(image, return_tensors="pt").pixel_values + + image = image.to(device=device, dtype=dtype) + if output_hidden_states: + image_enc_hidden_states = self.image_encoder(image, output_hidden_states=True).hidden_states[-2] + image_enc_hidden_states = image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) + uncond_image_enc_hidden_states = self.image_encoder( + torch.zeros_like(image), output_hidden_states=True + ).hidden_states[-2] + uncond_image_enc_hidden_states = uncond_image_enc_hidden_states.repeat_interleave( + num_images_per_prompt, dim=0 + ) + return image_enc_hidden_states, uncond_image_enc_hidden_states + else: + image_embeds = self.image_encoder(image).image_embeds + image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) + uncond_image_embeds = torch.zeros_like(image_embeds) + + return image_embeds, uncond_image_embeds + + # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_ip_adapter_image_embeds + def prepare_ip_adapter_image_embeds( + self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt, do_classifier_free_guidance + ): + if ip_adapter_image_embeds is None: + if not isinstance(ip_adapter_image, list): + ip_adapter_image = [ip_adapter_image] + + if len(ip_adapter_image) != len(self.unet.encoder_hid_proj.image_projection_layers): + raise ValueError( + f"`ip_adapter_image` must have same length as the number of IP Adapters. Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters." 
+ ) + + image_embeds = [] + for single_ip_adapter_image, image_proj_layer in zip( + ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers + ): + output_hidden_state = not isinstance(image_proj_layer, ImageProjection) + single_image_embeds, single_negative_image_embeds = self.encode_image( + single_ip_adapter_image, device, 1, output_hidden_state + ) + single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0) + single_negative_image_embeds = torch.stack( + [single_negative_image_embeds] * num_images_per_prompt, dim=0 + ) + + if do_classifier_free_guidance: + single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds]) + single_image_embeds = single_image_embeds.to(device) + + image_embeds.append(single_image_embeds) + else: + repeat_dims = [1] + image_embeds = [] + for single_image_embeds in ip_adapter_image_embeds: + if do_classifier_free_guidance: + single_negative_image_embeds, single_image_embeds = single_image_embeds.chunk(2) + single_image_embeds = single_image_embeds.repeat( + num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:])) + ) + single_negative_image_embeds = single_negative_image_embeds.repeat( + num_images_per_prompt, *(repeat_dims * len(single_negative_image_embeds.shape[1:])) + ) + single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds]) + else: + single_image_embeds = single_image_embeds.repeat( + num_images_per_prompt, *(repeat_dims * len(single_image_embeds.shape[1:])) + ) + image_embeds.append(single_image_embeds) + + return image_embeds + + # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs + def prepare_extra_step_kwargs(self, generator, eta): + # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature + # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. + # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 + # and should be between [0, 1] + + accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) + extra_step_kwargs = {} + if accepts_eta: + extra_step_kwargs["eta"] = eta + + # check if the scheduler accepts generator + accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) + if accepts_generator: + extra_step_kwargs["generator"] = generator + return extra_step_kwargs + + def check_inputs( + self, + prompt, + prompt_2, + height, + width, + callback_steps, + negative_prompt=None, + negative_prompt_2=None, + prompt_embeds=None, + negative_prompt_embeds=None, + pooled_prompt_embeds=None, + negative_pooled_prompt_embeds=None, + ip_adapter_image=None, + ip_adapter_image_embeds=None, + callback_on_step_end_tensor_inputs=None, + ): + if height % 8 != 0 or width % 8 != 0: + raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") + + if callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0): + raise ValueError( + f"`callback_steps` has to be a positive integer but is {callback_steps} of type" + f" {type(callback_steps)}." 
+ ) + + if callback_on_step_end_tensor_inputs is not None and not all( + k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs + ): + raise ValueError( + f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" + ) + + if prompt is not None and prompt_embeds is not None: + raise ValueError( + f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" + " only forward one of the two." + ) + elif prompt_2 is not None and prompt_embeds is not None: + raise ValueError( + f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to" + " only forward one of the two." + ) + elif prompt is None and prompt_embeds is None: + raise ValueError( + "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." + ) + elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): + raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") + elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)): + raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}") + + if negative_prompt is not None and negative_prompt_embeds is not None: + raise ValueError( + f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" + f" {negative_prompt_embeds}. Please make sure to only forward one of the two." + ) + elif negative_prompt_2 is not None and negative_prompt_embeds is not None: + raise ValueError( + f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:" + f" {negative_prompt_embeds}. Please make sure to only forward one of the two." + ) + + if prompt_embeds is not None and negative_prompt_embeds is not None: + if prompt_embeds.shape != negative_prompt_embeds.shape: + raise ValueError( + "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" + f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" + f" {negative_prompt_embeds.shape}." + ) + + if prompt_embeds is not None and pooled_prompt_embeds is None: + raise ValueError( + "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`." + ) + + if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None: + raise ValueError( + "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`." + ) + + if ip_adapter_image is not None and ip_adapter_image_embeds is not None: + raise ValueError( + "Provide either `ip_adapter_image` or `ip_adapter_image_embeds`. Cannot leave both `ip_adapter_image` and `ip_adapter_image_embeds` defined." 
+ ) + + if ip_adapter_image_embeds is not None: + if not isinstance(ip_adapter_image_embeds, list): + raise ValueError( + f"`ip_adapter_image_embeds` has to be of type `list` but is {type(ip_adapter_image_embeds)}" + ) + elif ip_adapter_image_embeds[0].ndim not in [3, 4]: + raise ValueError( + f"`ip_adapter_image_embeds` has to be a list of 3D or 4D tensors but is {ip_adapter_image_embeds[0].ndim}D" + ) + + # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents + def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): + shape = ( + batch_size, + num_channels_latents, + int(height) // self.vae_scale_factor, + int(width) // self.vae_scale_factor, + ) + if isinstance(generator, list) and len(generator) != batch_size: + raise ValueError( + f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" + f" size of {batch_size}. Make sure the batch size matches the length of the generators." + ) + + if latents is None: + latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) + else: + latents = latents.to(device) + + # scale the initial noise by the standard deviation required by the scheduler + latents = latents * self.scheduler.init_noise_sigma + return latents + + def _get_add_time_ids( + self, original_size, crops_coords_top_left, target_size, dtype, text_encoder_projection_dim=None + ): + add_time_ids = list(original_size + crops_coords_top_left + target_size) + + passed_add_embed_dim = ( + self.unet.config.addition_time_embed_dim * len(add_time_ids) + text_encoder_projection_dim + ) + expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features + + if expected_add_embed_dim != passed_add_embed_dim: + raise ValueError( + f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`." + ) + + add_time_ids = torch.tensor([add_time_ids], dtype=dtype) + return add_time_ids + + def upcast_vae(self): + dtype = self.vae.dtype + self.vae.to(dtype=torch.float32) + use_torch_2_0_or_xformers = isinstance( + self.vae.decoder.mid_block.attentions[0].processor, + ( + AttnProcessor2_0, + XFormersAttnProcessor, + LoRAXFormersAttnProcessor, + LoRAAttnProcessor2_0, + FusedAttnProcessor2_0, + ), + ) + # if xformers or torch_2_0 is used attention block does not need + # to be in float32 which can save lots of memory + if use_torch_2_0_or_xformers: + self.vae.post_quant_conv.to(dtype) + self.vae.decoder.conv_in.to(dtype) + self.vae.decoder.mid_block.to(dtype) + + # Copied from diffusers.pipelines.latent_consistency_models.pipeline_latent_consistency_text2img.LatentConsistencyModelPipeline.get_guidance_scale_embedding + def get_guidance_scale_embedding( + self, w: torch.Tensor, embedding_dim: int = 512, dtype: torch.dtype = torch.float32 + ) -> torch.Tensor: + """ + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 + + Args: + w (`torch.Tensor`): + Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings. + embedding_dim (`int`, *optional*, defaults to 512): + Dimension of the embeddings to generate. + dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): + Data type of the generated embeddings. 
+ + Returns: + `torch.Tensor`: Embedding vectors with shape `(len(w), embedding_dim)`. + """ + assert len(w.shape) == 1 + w = w * 1000.0 + + half_dim = embedding_dim // 2 + emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1) + emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb) + emb = w.to(dtype)[:, None] * emb[None, :] + emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) + if embedding_dim % 2 == 1: # zero pad + emb = torch.nn.functional.pad(emb, (0, 1)) + assert emb.shape == (w.shape[0], embedding_dim) + return emb + + @property + def guidance_scale(self): + return self._guidance_scale + + @property + def guidance_rescale(self): + return self._guidance_rescale + + @property + def clip_skip(self): + return self._clip_skip + + # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) + # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` + # corresponds to doing no classifier free guidance. + @property + def do_classifier_free_guidance(self): + return self._guidance_scale > 1 and self.unet.config.time_cond_proj_dim is None + + @property + def cross_attention_kwargs(self): + return self._cross_attention_kwargs + + @property + def denoising_end(self): + return self._denoising_end + + @property + def num_timesteps(self): + return self._num_timesteps + + @property + def interrupt(self): + return self._interrupt + + @torch.no_grad() + @replace_example_docstring(EXAMPLE_DOC_STRING) + def __call__( + self, + prompt: Union[str, List[str]] = None, + prompt_2: Optional[Union[str, List[str]]] = None, + controlnet_image: Optional[PipelineImageInput] = None, + controlnet_scale: Optional[float] = 1.0, + height: Optional[int] = None, + width: Optional[int] = None, + num_inference_steps: int = 50, + timesteps: List[int] = None, + sigmas: List[float] = None, + denoising_end: Optional[float] = None, + guidance_scale: float = 5.0, + negative_prompt: Optional[Union[str, List[str]]] = None, + negative_prompt_2: Optional[Union[str, List[str]]] = None, + num_images_per_prompt: Optional[int] = 1, + eta: float = 0.0, + generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, + latents: Optional[torch.Tensor] = None, + prompt_embeds: Optional[torch.Tensor] = None, + negative_prompt_embeds: Optional[torch.Tensor] = None, + pooled_prompt_embeds: Optional[torch.Tensor] = None, + negative_pooled_prompt_embeds: Optional[torch.Tensor] = None, + ip_adapter_image: Optional[PipelineImageInput] = None, + ip_adapter_image_embeds: Optional[List[torch.Tensor]] = None, + output_type: Optional[str] = "pil", + return_dict: bool = True, + cross_attention_kwargs: Optional[Dict[str, Any]] = None, + guidance_rescale: float = 0.0, + original_size: Optional[Tuple[int, int]] = None, + crops_coords_top_left: Tuple[int, int] = (0, 0), + target_size: Optional[Tuple[int, int]] = None, + negative_original_size: Optional[Tuple[int, int]] = None, + negative_crops_coords_top_left: Tuple[int, int] = (0, 0), + negative_target_size: Optional[Tuple[int, int]] = None, + clip_skip: Optional[int] = None, + callback_on_step_end: Optional[ + Union[Callable[[int, int, Dict], None], PipelineCallback, MultiPipelineCallbacks] + ] = None, + callback_on_step_end_tensor_inputs: List[str] = ["latents"], + **kwargs, + ): + r""" + Function invoked when calling the pipeline for generation. + + Args: + prompt (`str` or `List[str]`, *optional*): + The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. + instead. 
+ prompt_2 (`str` or `List[str]`, *optional*): + The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is + used in both text-encoders + height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): + The height in pixels of the generated image. This is set to 1024 by default for the best results. + Anything below 512 pixels won't work well for + [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) + and checkpoints that are not specifically fine-tuned on low resolutions. + width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): + The width in pixels of the generated image. This is set to 1024 by default for the best results. + Anything below 512 pixels won't work well for + [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) + and checkpoints that are not specifically fine-tuned on low resolutions. + num_inference_steps (`int`, *optional*, defaults to 50): + The number of denoising steps. More denoising steps usually lead to a higher quality image at the + expense of slower inference. + timesteps (`List[int]`, *optional*): + Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument + in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is + passed will be used. Must be in descending order. + sigmas (`List[float]`, *optional*): + Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in + their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed + will be used. + denoising_end (`float`, *optional*): + When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be + completed before it is intentionally prematurely terminated. As a result, the returned sample will + still retain a substantial amount of noise as determined by the discrete timesteps selected by the + scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a + "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image + Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output) + guidance_scale (`float`, *optional*, defaults to 5.0): + Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). + `guidance_scale` is defined as `w` of equation 2. of [Imagen + Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > + 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, + usually at the expense of lower image quality. + negative_prompt (`str` or `List[str]`, *optional*): + The prompt or prompts not to guide the image generation. If not defined, one has to pass + `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is + less than `1`). + negative_prompt_2 (`str` or `List[str]`, *optional*): + The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and + `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders + num_images_per_prompt (`int`, *optional*, defaults to 1): + The number of images to generate per prompt. 
+ eta (`float`, *optional*, defaults to 0.0):
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
+ [`schedulers.DDIMScheduler`], will be ignored for others.
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
+ to make generation deterministic.
+ latents (`torch.Tensor`, *optional*):
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
+ tensor will be generated by sampling using the supplied random `generator`.
+ prompt_embeds (`torch.Tensor`, *optional*):
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
+ provided, text embeddings will be generated from `prompt` input argument.
+ negative_prompt_embeds (`torch.Tensor`, *optional*):
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
+ argument.
+ pooled_prompt_embeds (`torch.Tensor`, *optional*):
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
+ negative_pooled_prompt_embeds (`torch.Tensor`, *optional*):
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
+ input argument.
+ ip_adapter_image (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters.
+ ip_adapter_image_embeds (`List[torch.Tensor]`, *optional*):
+ Pre-generated image embeddings for IP-Adapter. It should be a list of the same length as the number of
+ IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
+ contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
+ provided, embeddings are computed from the `ip_adapter_image` input argument.
+ output_type (`str`, *optional*, defaults to `"pil"`):
+ The output format of the generated image. Choose between
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
+ return_dict (`bool`, *optional*, defaults to `True`):
+ Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead
+ of a plain tuple.
+ cross_attention_kwargs (`dict`, *optional*):
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
+ `self.processor` in
+ [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
+ guidance_rescale (`float`, *optional*, defaults to 0.0):
+ Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
+ Flawed](https://arxiv.org/pdf/2305.08891.pdf), where `guidance_scale` is defined as `φ` in equation 16 of
+ [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
+ Guidance rescale factor should fix overexposure when using zero terminal SNR.
+ original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
+ `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
+ explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
+ `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
+ `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
+ `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ For most cases, `target_size` should be set to the desired height and width of the generated image. If
+ not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
+ section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ negative_original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ To negatively condition the generation process based on a specific image resolution. Part of SDXL's
+ micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ negative_crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
+ To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
+ micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ negative_target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ To negatively condition the generation process based on a target image resolution. It should be the same
+ as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ callback_on_step_end (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*):
+ A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
+ each denoising step during inference, with the following arguments: `callback_on_step_end(self:
+ DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
+ list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
+ callback_on_step_end_tensor_inputs (`List`, *optional*):
+ The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
+ will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
+ `._callback_tensor_inputs` attribute of your pipeline class.
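+ controlnet_image (`PipelineImageInput`, *optional*):
+ The conditioning image (e.g. a depth or canny map) that is passed to the ControlNeXt module.
+ controlnet_scale (`float`, *optional*, defaults to 1.0):
+ Multiplier applied to the ControlNeXt output scale before the control signal is injected into the
+ UNet; lower values weaken the control.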
+ + Examples: + + Returns: + [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] or `tuple`: + [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] if `return_dict` is True, otherwise a + `tuple`. When returning a tuple, the first element is a list with the generated images. + """ + + callback = kwargs.pop("callback", None) + callback_steps = kwargs.pop("callback_steps", None) + + if callback is not None: + deprecate( + "callback", + "1.0.0", + "Passing `callback` as an input argument to `__call__` is deprecated, consider use `callback_on_step_end`", + ) + if callback_steps is not None: + deprecate( + "callback_steps", + "1.0.0", + "Passing `callback_steps` as an input argument to `__call__` is deprecated, consider use `callback_on_step_end`", + ) + + if isinstance(callback_on_step_end, (PipelineCallback, MultiPipelineCallbacks)): + callback_on_step_end_tensor_inputs = callback_on_step_end.tensor_inputs + + # 0. Default height and width to unet + height = height or self.default_sample_size * self.vae_scale_factor + width = width or self.default_sample_size * self.vae_scale_factor + + original_size = original_size or (height, width) + target_size = target_size or (height, width) + + # 1. Check inputs. Raise error if not correct + self.check_inputs( + prompt, + prompt_2, + height, + width, + callback_steps, + negative_prompt, + negative_prompt_2, + prompt_embeds, + negative_prompt_embeds, + pooled_prompt_embeds, + negative_pooled_prompt_embeds, + ip_adapter_image, + ip_adapter_image_embeds, + callback_on_step_end_tensor_inputs, + ) + + self._guidance_scale = guidance_scale + self._guidance_rescale = guidance_rescale + self._clip_skip = clip_skip + self._cross_attention_kwargs = cross_attention_kwargs + self._denoising_end = denoising_end + self._interrupt = False + + # 2. Define call parameters + if prompt is not None and isinstance(prompt, str): + batch_size = 1 + elif prompt is not None and isinstance(prompt, list): + batch_size = len(prompt) + else: + batch_size = prompt_embeds.shape[0] + + device = self._execution_device + + # 3. Encode input prompt + lora_scale = ( + self.cross_attention_kwargs.get("scale", None) if self.cross_attention_kwargs is not None else None + ) + + ( + prompt_embeds, + negative_prompt_embeds, + pooled_prompt_embeds, + negative_pooled_prompt_embeds, + ) = self.encode_prompt( + prompt=prompt, + prompt_2=prompt_2, + device=device, + num_images_per_prompt=num_images_per_prompt, + do_classifier_free_guidance=self.do_classifier_free_guidance, + negative_prompt=negative_prompt, + negative_prompt_2=negative_prompt_2, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_prompt_embeds, + pooled_prompt_embeds=pooled_prompt_embeds, + negative_pooled_prompt_embeds=negative_pooled_prompt_embeds, + lora_scale=lora_scale, + clip_skip=self.clip_skip, + ) + + # 4. Prepare timesteps + timesteps, num_inference_steps = retrieve_timesteps( + self.scheduler, num_inference_steps, device, timesteps, sigmas + ) + + # 5. Prepare latent variables + num_channels_latents = self.unet.config.in_channels + latents = self.prepare_latents( + batch_size * num_images_per_prompt, + num_channels_latents, + height, + width, + prompt_embeds.dtype, + device, + generator, + latents, + ) + + # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline + extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) + + # 7. 
Prepare added time ids & embeddings + add_text_embeds = pooled_prompt_embeds + if self.text_encoder_2 is None: + text_encoder_projection_dim = int(pooled_prompt_embeds.shape[-1]) + else: + text_encoder_projection_dim = self.text_encoder_2.config.projection_dim + + add_time_ids = self._get_add_time_ids( + original_size, + crops_coords_top_left, + target_size, + dtype=prompt_embeds.dtype, + text_encoder_projection_dim=text_encoder_projection_dim, + ) + if negative_original_size is not None and negative_target_size is not None: + negative_add_time_ids = self._get_add_time_ids( + negative_original_size, + negative_crops_coords_top_left, + negative_target_size, + dtype=prompt_embeds.dtype, + text_encoder_projection_dim=text_encoder_projection_dim, + ) + else: + negative_add_time_ids = add_time_ids + + if self.do_classifier_free_guidance: + prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0) + add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0) + add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0) + + prompt_embeds = prompt_embeds.to(device) + add_text_embeds = add_text_embeds.to(device) + add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1) + + if ip_adapter_image is not None or ip_adapter_image_embeds is not None: + image_embeds = self.prepare_ip_adapter_image_embeds( + ip_adapter_image, + ip_adapter_image_embeds, + device, + batch_size * num_images_per_prompt, + self.do_classifier_free_guidance, + ) + + if controlnet_image is not None and self.controlnet is not None: + controlnet_image = self.prepare_image( + controlnet_image, + width, + height, + batch_size, + num_images_per_prompt, + device, + self.controlnet.dtype, + do_classifier_free_guidance=self.do_classifier_free_guidance, + ) + # 8. Denoising loop + num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0) + + # 8.1 Apply denoising_end + if ( + self.denoising_end is not None + and isinstance(self.denoising_end, float) + and self.denoising_end > 0 + and self.denoising_end < 1 + ): + discrete_timestep_cutoff = int( + round( + self.scheduler.config.num_train_timesteps + - (self.denoising_end * self.scheduler.config.num_train_timesteps) + ) + ) + num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps))) + timesteps = timesteps[:num_inference_steps] + + # 9. 
Optionally get Guidance Scale Embedding
+ timestep_cond = None
+ if self.unet.config.time_cond_proj_dim is not None:
+ guidance_scale_tensor = torch.tensor(self.guidance_scale - 1).repeat(batch_size * num_images_per_prompt)
+ timestep_cond = self.get_guidance_scale_embedding(
+ guidance_scale_tensor, embedding_dim=self.unet.config.time_cond_proj_dim
+ ).to(device=device, dtype=latents.dtype)
+
+ self._num_timesteps = len(timesteps)
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
+ for i, t in enumerate(timesteps):
+ if self.interrupt:
+ continue
+
+ # expand the latents if we are doing classifier free guidance
+ latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents
+
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ # predict the noise residual
+ added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
+ if ip_adapter_image is not None or ip_adapter_image_embeds is not None:
+ added_cond_kwargs["image_embeds"] = image_embeds
+
+ unet_additional_args = {}
+ if self.controlnet is not None:
+ controls = self.controlnet(
+ controlnet_image,
+ t,
+ )
+
+ # Enabling the block below makes the ControlNeXt effect much stronger
+ # if do_classifier_free_guidance:
+ # scale = controlnet_output['scale']
+ # scale = scale.repeat(batch_size*2)[:, None, None, None]
+ # scale[:batch_size] *= 0
+ # controlnet_output['scale'] = scale
+
+ controls['scale'] *= controlnet_scale
+ unet_additional_args["controls"] = controls
+
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds,
+ timestep_cond=timestep_cond,
+ cross_attention_kwargs=self.cross_attention_kwargs,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ **unet_additional_args,
+ )[0]
+
+ # perform guidance
+ if self.do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
+ noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ if self.do_classifier_free_guidance and self.guidance_rescale > 0.0:
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
+ noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=self.guidance_rescale)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ latents_dtype = latents.dtype
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
+ if latents.dtype != latents_dtype:
+ if torch.backends.mps.is_available():
+ # some platforms (eg. 
apple mps) misbehave due to a pytorch bug: https://github.com/pytorch/pytorch/pull/99272 + latents = latents.to(latents_dtype) + + if callback_on_step_end is not None: + callback_kwargs = {} + for k in callback_on_step_end_tensor_inputs: + callback_kwargs[k] = locals()[k] + callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) + + latents = callback_outputs.pop("latents", latents) + prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) + negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds) + add_text_embeds = callback_outputs.pop("add_text_embeds", add_text_embeds) + negative_pooled_prompt_embeds = callback_outputs.pop( + "negative_pooled_prompt_embeds", negative_pooled_prompt_embeds + ) + add_time_ids = callback_outputs.pop("add_time_ids", add_time_ids) + negative_add_time_ids = callback_outputs.pop("negative_add_time_ids", negative_add_time_ids) + + # call the callback, if provided + if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): + progress_bar.update() + if callback is not None and i % callback_steps == 0: + step_idx = i // getattr(self.scheduler, "order", 1) + callback(step_idx, t, latents) + + if XLA_AVAILABLE: + xm.mark_step() + + if not output_type == "latent": + # make sure the VAE is in float32 mode, as it overflows in float16 + needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast + + if needs_upcasting: + self.upcast_vae() + latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) + elif latents.dtype != self.vae.dtype: + if torch.backends.mps.is_available(): + # some platforms (eg. apple mps) misbehave due to a pytorch bug: https://github.com/pytorch/pytorch/pull/99272 + self.vae = self.vae.to(latents.dtype) + + # unscale/denormalize the latents + # denormalize with the mean and std if available and not None + has_latents_mean = hasattr(self.vae.config, "latents_mean") and self.vae.config.latents_mean is not None + has_latents_std = hasattr(self.vae.config, "latents_std") and self.vae.config.latents_std is not None + if has_latents_mean and has_latents_std: + latents_mean = ( + torch.tensor(self.vae.config.latents_mean).view(1, 4, 1, 1).to(latents.device, latents.dtype) + ) + latents_std = ( + torch.tensor(self.vae.config.latents_std).view(1, 4, 1, 1).to(latents.device, latents.dtype) + ) + latents = latents * latents_std / self.vae.config.scaling_factor + latents_mean + else: + latents = latents / self.vae.config.scaling_factor + + image = self.vae.decode(latents, return_dict=False)[0] + + # cast back to fp16 if needed + if needs_upcasting: + self.vae.to(dtype=torch.float16) + else: + image = latents + + if not output_type == "latent": + # apply watermark if available + if self.watermark is not None: + image = self.watermark.apply_watermark(image) + + image = self.image_processor.postprocess(image, output_type=output_type) + + # Offload all models + self.maybe_free_model_hooks() + + if not return_dict: + return (image,) + + return StableDiffusionXLPipelineOutput(images=image) diff --git a/ControlNeXt-SDXL-Training/requirements.txt b/ControlNeXt-SDXL-Training/requirements.txt new file mode 100644 index 0000000..f408768 --- /dev/null +++ b/ControlNeXt-SDXL-Training/requirements.txt @@ -0,0 +1,11 @@ +torch +torchvision +accelerate +opencv-python +pillow +numpy +transformers +diffusers +safetensors +peft +xformers \ No newline at end of file diff --git a/ControlNeXt-SDXL-Training/train_controlnext.py 
b/ControlNeXt-SDXL-Training/train_controlnext.py new file mode 100644 index 0000000..284f1b4 --- /dev/null +++ b/ControlNeXt-SDXL-Training/train_controlnext.py @@ -0,0 +1,1449 @@ +#!/usr/bin/env python +# coding=utf-8 +# Copyright 2024 The HuggingFace Inc. team. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and + +import argparse +import functools +import gc +import re +import logging +import math +import os +import random +import shutil +from contextlib import nullcontext +from pathlib import Path + +import accelerate +import numpy as np +import torch +import torch.nn.functional as F +import torch.utils.checkpoint +import transformers +from accelerate import Accelerator +from accelerate.logging import get_logger +from accelerate.utils import DistributedType, ProjectConfiguration, set_seed +from datasets import load_dataset +from huggingface_hub import create_repo, upload_folder +from packaging import version +from PIL import Image +from torchvision import transforms +from tqdm.auto import tqdm +from transformers import AutoTokenizer, PretrainedConfig + +import diffusers +from diffusers import ( + AutoencoderKL, + DDPMScheduler, + UniPCMultistepScheduler, +) +from diffusers.optimization import get_scheduler +from diffusers.utils import check_min_version, is_wandb_available, make_image_grid +from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card +from diffusers.utils.import_utils import is_torch_npu_available, is_xformers_available +from diffusers.utils.torch_utils import is_compiled_module + +from safetensors.torch import load_file, save_file +from pipeline.pipeline_controlnext import StableDiffusionXLControlNeXtPipeline +from models.controlnet import ControlNetModel +from models.unet import UNet2DConditionModel + +if is_wandb_available(): + import wandb + +# Will error if the minimal version of diffusers is not installed. Remove at your own risks. +# check_min_version("0.31.0.dev0") + +logger = get_logger(__name__) +if is_torch_npu_available(): + torch.npu.config.allow_internal_format = False + + +def log_validation(vae, unet, controlnet, args, accelerator, weight_dtype, step, is_final_validation=False): + logger.info("Running validation... 
") + + pipeline = StableDiffusionXLControlNeXtPipeline.from_pretrained( + args.pretrained_model_name_or_path, + vae=vae, + unet=unet, + controlnet=controlnet, + safety_checker=None, + revision=args.revision, + variant=args.variant, + torch_dtype=weight_dtype, + ) + + pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config) + pipeline = pipeline.to(accelerator.device) + pipeline.set_progress_bar_config(disable=True) + + if args.enable_xformers_memory_efficient_attention: + pipeline.enable_xformers_memory_efficient_attention() + + if args.seed is None: + generator = None + else: + generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) + + if len(args.validation_image) == len(args.validation_prompt): + validation_images = args.validation_image + validation_prompts = args.validation_prompt + elif len(args.validation_image) == 1: + validation_images = args.validation_image * len(args.validation_prompt) + validation_prompts = args.validation_prompt + elif len(args.validation_prompt) == 1: + validation_images = args.validation_image + validation_prompts = args.validation_prompt * len(args.validation_image) + else: + raise ValueError( + "number of `args.validation_image` and `args.validation_prompt` should be checked in `parse_args`" + ) + + image_logs = [] + if is_final_validation or torch.backends.mps.is_available(): + autocast_ctx = nullcontext() + else: + autocast_ctx = torch.autocast(accelerator.device.type) + + for validation_prompt, validation_image in zip(validation_prompts, validation_images): + validation_image = Image.open(validation_image).convert("RGB") + validation_image = validation_image.resize((args.resolution, args.resolution)) + + images = [] + + for _ in range(args.num_validation_images): + with autocast_ctx: + image = pipeline( + prompt=validation_prompt, + controlnet_image=validation_image, + controlnet_scale_factor=args.controlnet_scale_factor, + num_inference_steps=20, + generator=generator, + width=args.resolution, + height=args.resolution, + ).images[0] + images.append(image) + + image_logs.append( + {"validation_image": validation_image, "images": images, "validation_prompt": validation_prompt} + ) + + sample_dir = os.path.join(args.output_dir, "samples", f"sample-{step}") + os.makedirs(sample_dir, exist_ok=True) + tracker_key = "test" if is_final_validation else "validation" + for tracker in accelerator.trackers: + if tracker.name == "tensorboard": + for log in image_logs: + images = log["images"] + validation_prompt = log["validation_prompt"] + validation_image = log["validation_image"] + + formatted_images = [] + + formatted_images.append(np.asarray(validation_image)) + + for image in images: + formatted_images.append(np.asarray(image)) + + formatted_images = np.stack(formatted_images) + + tracker.writer.add_images(validation_prompt, formatted_images, step, dataformats="NHWC") + elif tracker.name == "wandb": + formatted_images = [] + + for log in image_logs: + images = log["images"] + validation_prompt = log["validation_prompt"] + validation_image = log["validation_image"] + + formatted_images.append(wandb.Image(validation_image, caption="Controlnet conditioning")) + + for image in images: + image = wandb.Image(image, caption=validation_prompt) + formatted_images.append(image) + + tracker.log({tracker_key: formatted_images}) + else: + logger.warning(f"image logging not implemented for {tracker.name}") + + formatted_images = [] + formatted_images.append(validation_image) + for i, image in enumerate(images): + 
formatted_images.append(image) + image.save(os.path.join(sample_dir, f"image-{i}_{step}.png")) + image_grid = make_image_grid(formatted_images, 1, len(formatted_images)) + image_grid.save(os.path.join(sample_dir, f"grid_{step}.png")) + + del pipeline + gc.collect() + torch.cuda.empty_cache() + + return image_logs + + +def save_models(unet, controlnet, output_dir, args): + os.makedirs(output_dir, exist_ok=True) + unet_sd = unet.state_dict() + pattern = re.compile(args.unet_trainable_param_pattern) + unet_sd = {k: v for k, v in unet_sd.items() if pattern.match(k)} + save_file(unet_sd, os.path.join(output_dir, "unet.safetensors")) + save_file(controlnet.state_dict(), os.path.join(output_dir, "controlnet.safetensors")) + + +def import_model_class_from_model_name_or_path( + pretrained_model_name_or_path: str, revision: str, subfolder: str = "text_encoder" +): + text_encoder_config = PretrainedConfig.from_pretrained( + pretrained_model_name_or_path, subfolder=subfolder, revision=revision + ) + model_class = text_encoder_config.architectures[0] + + if model_class == "CLIPTextModel": + from transformers import CLIPTextModel + + return CLIPTextModel + elif model_class == "CLIPTextModelWithProjection": + from transformers import CLIPTextModelWithProjection + + return CLIPTextModelWithProjection + else: + raise ValueError(f"{model_class} is not supported.") + + +def save_model_card(repo_id: str, image_logs=None, base_model=str, repo_folder=None): + img_str = "" + if image_logs is not None: + img_str = "You can find some example images below.\n\n" + for i, log in enumerate(image_logs): + images = log["images"] + validation_prompt = log["validation_prompt"] + validation_image = log["validation_image"] + validation_image.save(os.path.join(repo_folder, "image_control.png")) + img_str += f"prompt: {validation_prompt}\n" + images = [validation_image] + images + make_image_grid(images, 1, len(images)).save(os.path.join(repo_folder, f"images_{i}.png")) + img_str += f"![images_{i})](./images_{i}.png)\n" + + model_description = f""" +# controlnet-{repo_id} + +These are controlnet weights trained on {base_model} with new type of conditioning. +{img_str} +""" + + model_card = load_or_create_model_card( + repo_id_or_path=repo_id, + from_training=True, + license="openrail++", + base_model=base_model, + model_description=model_description, + inference=True, + ) + + tags = [ + "stable-diffusion-xl", + "stable-diffusion-xl-diffusers", + "text-to-image", + "diffusers", + "controlnet", + "diffusers-training", + ] + model_card = populate_model_card(model_card, tags=tags) + + model_card.save(os.path.join(repo_folder, "README.md")) + + +class LossRecorder: + r""" + Class to record better losses. 
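+ Keeps a sliding window of recent loss values plus a bias-corrected exponential moving average of the loss.
+ Illustrative usage: call `recorder.add(loss=loss.item())` every step and read `recorder.moving_average(window=100)` for logging.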
+ """ + + def __init__(self, gamma=0.9, max_window=None): + self.losses = [] + self.gamma = gamma + self.ema = 0 + self.t = 0 + self.max_window = max_window + + def add(self, *, loss: float) -> None: + self.losses.append(loss) + if self.max_window is not None and len(self.losses) > self.max_window: + self.losses.pop(0) + self.t += 1 + ema = self.ema * self.gamma + loss * (1 - self.gamma) + ema_hat = ema / (1 - self.gamma ** self.t) if self.t < 500 else ema + self.ema = ema_hat + + def moving_average(self, *, window: int) -> float: + if len(self.losses) < window: + window = len(self.losses) + return sum(self.losses[-window:]) / window + + +def parse_args(input_args=None): + parser = argparse.ArgumentParser(description="Simple example of a ControlNet training script.") + parser.add_argument( + "--pretrained_model_name_or_path", + type=str, + default=None, + required=True, + help="Path to pretrained model or model identifier from huggingface.co/models.", + ) + parser.add_argument( + "--pretrained_vae_model_name_or_path", + type=str, + default=None, + help="Path to an improved VAE to stabilize training. For more details check out: https://github.com/huggingface/diffusers/pull/4038.", + ) + parser.add_argument( + "--controlnet_model_name_or_path", + type=str, + default=None, + help="Path to pretrained controlnet model or model identifier from huggingface.co/models." + " If not specified controlnet weights are initialized from unet.", + ) + parser.add_argument( + "--variant", + type=str, + default=None, + help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16", + ) + parser.add_argument( + "--revision", + type=str, + default=None, + required=False, + help="Revision of pretrained model identifier from huggingface.co/models.", + ) + parser.add_argument( + "--use_safetensors", + action="store_true", + help="Whether or not to set use_safetensors=True for loading the pretrained model.", + ) + parser.add_argument( + "--tokenizer_name", + type=str, + default=None, + help="Pretrained tokenizer name or path if not the same as model_name", + ) + parser.add_argument( + "--output_dir", + type=str, + default="controlnet-model", + help="The output directory where the model predictions and checkpoints will be written.", + ) + parser.add_argument( + "--cache_dir", + type=str, + default=None, + help="The directory where the downloaded models and datasets will be stored.", + ) + parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") + parser.add_argument( + "--resolution", + type=int, + default=512, + help=( + "The resolution for input images, all the images in the train/validation dataset will be resized to this" + " resolution" + ), + ) + parser.add_argument( + "--controlnet_scale_factor", + type=float, + default=1.0, + help=( + "The scale factor for the controlnet. This is used to scale the controlnet output before adding it to the unet output." + " For depth control, we recommend setting this to 1.0." + " For canny control, we recommend setting this to 0.35." 
+ ) + ) + parser.add_argument( + "--crops_coords_top_left_h", + type=int, + default=0, + help=("Coordinate for (the height) to be included in the crop coordinate embeddings needed by SDXL UNet."), + ) + parser.add_argument( + "--crops_coords_top_left_w", + type=int, + default=0, + help=("Coordinate for (the height) to be included in the crop coordinate embeddings needed by SDXL UNet."), + ) + parser.add_argument( + "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." + ) + parser.add_argument("--num_train_epochs", type=int, default=25) + parser.add_argument( + "--max_train_steps", + type=int, + default=None, + help="Total number of training steps to perform. If provided, overrides num_train_epochs.", + ) + parser.add_argument( + "--checkpointing_steps", + type=int, + default=500, + help=( + "Save a checkpoint of the training state every X updates. Checkpoints can be used for resuming training via `--resume_from_checkpoint`. " + "In the case that the checkpoint is better than the final trained model, the checkpoint can also be used for inference." + "Using a checkpoint for inference requires separate loading of the original pipeline and the individual checkpointed model components." + "See https://huggingface.co/docs/diffusers/main/en/training/dreambooth#performing-inference-using-a-saved-checkpoint for step by step" + "instructions." + ), + ) + parser.add_argument( + "--checkpoints_total_limit", + type=int, + default=None, + help=("Max number of checkpoints to store."), + ) + parser.add_argument( + "--resume_from_checkpoint", + type=str, + default=None, + help=( + "Whether training should be resumed from a previous checkpoint. Use a path saved by" + ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' + ), + ) + parser.add_argument( + "--gradient_accumulation_steps", + type=int, + default=1, + help="Number of updates steps to accumulate before performing a backward/update pass.", + ) + parser.add_argument( + "--gradient_checkpointing", + action="store_true", + help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", + ) + parser.add_argument( + "--learning_rate", + type=float, + default=1e-5, + help="Initial learning rate (after the potential warmup period) to use.", + ) + parser.add_argument( + "--unet_trainable_param_pattern", + type=str, + default=r".*attn2.*to_out.*", + help="Regex pattern to match the name of trainable parameters of the UNet.", + ) + parser.add_argument( + "--learning_rate_controlnet", + type=float, + default=1e-4, + help="Initial learning rate (after the potential warmup period) to use for the controlnet.", + ) + parser.add_argument( + "--scale_lr", + action="store_true", + default=False, + help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", + ) + parser.add_argument( + "--lr_scheduler", + type=str, + default="constant_with_warmup", + help=( + 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' + ' "constant", "constant_with_warmup"]' + ), + ) + parser.add_argument( + "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." 
+ ) + parser.add_argument( + "--lr_num_cycles", + type=int, + default=1, + help="Number of hard resets of the lr in cosine_with_restarts scheduler.", + ) + parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") + parser.add_argument( + "--dataloader_num_workers", + type=int, + default=0, + help=( + "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." + ), + ) + parser.add_argument( + "--optimizer_type", + type=str, + default="adafactor", + help="The optimizer type to use. Choose between ['adamw', 'adafactor']", + ) + parser.add_argument( + "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." + ) + parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") + parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") + parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") + parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") + parser.add_argument("--adafactor_relative_step", type=bool, default=False, help="Relative step size for Adafactor.") + parser.add_argument("--adafactor_scale_parameter", type=bool, default=False, help="Scale the initial parameter.") + parser.add_argument("--adafactor_warmup_init", type=bool, default=False, help="Warmup the initial parameter.") + parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") + parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") + parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") + parser.add_argument( + "--hub_model_id", + type=str, + default=None, + help="The name of the repository to keep in sync with the local `output_dir`.", + ) + parser.add_argument( + "--logging_dir", + type=str, + default="logs", + help=( + "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" + " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." + ), + ) + parser.add_argument( + "--allow_tf32", + action="store_true", + help=( + "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" + " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" + ), + ) + parser.add_argument( + "--report_to", + type=str, + default="tensorboard", + help=( + 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' + ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' + ), + ) + parser.add_argument( + "--mixed_precision", + type=str, + default=None, + choices=["no", "fp16", "bf16"], + help=( + "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" + " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" + " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." + ), + ) + parser.add_argument( + "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." + ) + parser.add_argument( + "--enable_npu_flash_attention", action="store_true", help="Whether or not to use npu flash attention." 
+ ) + parser.add_argument( + "--set_grads_to_none", + action="store_true", + help=( + "Save more memory by using setting grads to None instead of zero. Be aware, that this changes certain" + " behaviors, so disable this argument if it causes any problems. More info:" + " https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html" + ), + ) + parser.add_argument( + "--dataset_name", + type=str, + default=None, + help=( + "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," + " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," + " or to a folder containing files that 🤗 Datasets can understand." + ), + ) + parser.add_argument( + "--dataset_config_name", + type=str, + default=None, + help="The config of the Dataset, leave as None if there's only one config.", + ) + parser.add_argument( + "--train_data_dir", + type=str, + default=None, + help=( + "A folder containing the training data. Folder contents must follow the structure described in" + " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" + " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." + ), + ) + parser.add_argument( + "--image_column", type=str, default="image", help="The column of the dataset containing the target image." + ) + parser.add_argument( + "--conditioning_image_column", + type=str, + default="conditioning_image", + help="The column of the dataset containing the controlnet conditioning image.", + ) + parser.add_argument( + "--caption_column", + type=str, + default="text", + help="The column of the dataset containing a caption or a list of captions.", + ) + parser.add_argument( + "--max_train_samples", + type=int, + default=None, + help=( + "For debugging purposes or quicker training, truncate the number of training examples to this " + "value if set." + ), + ) + parser.add_argument( + "--proportion_empty_prompts", + type=float, + default=0, + help="Proportion of image prompts to be replaced with empty strings. Defaults to 0 (no prompt replacement).", + ) + parser.add_argument( + "--validation_prompt", + type=str, + default=None, + nargs="+", + help=( + "A set of prompts evaluated every `--validation_steps` and logged to `--report_to`." + " Provide either a matching number of `--validation_image`s, a single `--validation_image`" + " to be used with all prompts, or a single prompt that will be used with all `--validation_image`s." + ), + ) + parser.add_argument( + "--validation_image", + type=str, + default=None, + nargs="+", + help=( + "A set of paths to the controlnet conditioning image be evaluated every `--validation_steps`" + " and logged to `--report_to`. Provide either a matching number of `--validation_prompt`s, a" + " a single `--validation_prompt` to be used with all `--validation_image`s, or a single" + " `--validation_image` that will be used with all `--validation_prompt`s." + ), + ) + parser.add_argument( + "--num_validation_images", + type=int, + default=4, + help="Number of images to be generated for each `--validation_image`, `--validation_prompt` pair", + ) + parser.add_argument( + "--validation_steps", + type=int, + default=100, + help=( + "Run validation every X steps. Validation consists of running the prompt" + " `args.validation_prompt` multiple times: `args.num_validation_images`" + " and logging the images." 
+ ), + ) + parser.add_argument( + "--tracker_project_name", + type=str, + default="sd_xl_train_controlnet", + help=( + "The `project_name` argument passed to Accelerator.init_trackers for" + " more information see https://huggingface.co/docs/accelerate/v0.17.0/en/package_reference/accelerator#accelerate.Accelerator" + ), + ) + + if input_args is not None: + args = parser.parse_args(input_args) + else: + args = parser.parse_args() + + if args.dataset_name is None and args.train_data_dir is None: + raise ValueError("Specify either `--dataset_name` or `--train_data_dir`") + + if args.dataset_name is not None and args.train_data_dir is not None: + raise ValueError("Specify only one of `--dataset_name` or `--train_data_dir`") + + if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1: + raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].") + + if args.validation_prompt is not None and args.validation_image is None: + raise ValueError("`--validation_image` must be set if `--validation_prompt` is set") + + if args.validation_prompt is None and args.validation_image is not None: + raise ValueError("`--validation_prompt` must be set if `--validation_image` is set") + + if ( + args.validation_image is not None + and args.validation_prompt is not None + and len(args.validation_image) != 1 + and len(args.validation_prompt) != 1 + and len(args.validation_image) != len(args.validation_prompt) + ): + raise ValueError( + "Must provide either 1 `--validation_image`, 1 `--validation_prompt`," + " or the same number of `--validation_prompt`s and `--validation_image`s" + ) + + if args.resolution % 8 != 0: + raise ValueError( + "`--resolution` must be divisible by 8 for consistently sized encoded images between the VAE and the controlnet encoder." + ) + + return args + + +def get_train_dataset(args, accelerator): + # Get the datasets: you can either provide your own training and evaluation files (see below) + # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). + + # In distributed training, the load_dataset function guarantees that only one local process can concurrently + # download the dataset. + while True: + try: + if args.dataset_name is not None: + # Downloading and loading a dataset from the hub. + dataset = load_dataset( + args.dataset_name, + args.dataset_config_name, + cache_dir=args.cache_dir, + ) + else: + if args.train_data_dir is not None: + dataset = load_dataset( + args.train_data_dir, + cache_dir=args.cache_dir, + ) + # See more about loading custom images at + # https://huggingface.co/docs/datasets/v2.0.0/en/dataset_script + break + except Exception as e: + logger.error(f"Error loading dataset: {e}") + logger.error("Retry...") + continue + + # Preprocessing the datasets. + # We need to tokenize inputs and targets. + column_names = dataset["train"].column_names + + # 6. Get the column names for input/target. + if args.image_column is None: + image_column = column_names[0] + logger.info(f"image column defaulting to {image_column}") + else: + image_column = args.image_column + if image_column not in column_names: + raise ValueError( + f"`--image_column` value '{args.image_column}' not found in dataset columns. 
Dataset columns are: {', '.join(column_names)}" + ) + + if args.caption_column is None: + caption_column = column_names[1] + logger.info(f"caption column defaulting to {caption_column}") + else: + caption_column = args.caption_column + if caption_column not in column_names: + raise ValueError( + f"`--caption_column` value '{args.caption_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}" + ) + + if args.conditioning_image_column is None: + conditioning_image_column = column_names[2] + logger.info(f"conditioning image column defaulting to {conditioning_image_column}") + else: + conditioning_image_column = args.conditioning_image_column + if conditioning_image_column not in column_names: + raise ValueError( + f"`--conditioning_image_column` value '{args.conditioning_image_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}" + ) + + with accelerator.main_process_first(): + train_dataset = dataset["train"].shuffle(seed=args.seed) + if args.max_train_samples is not None: + train_dataset = train_dataset.select(range(args.max_train_samples)) + return train_dataset + + +# Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt +def encode_prompt(prompt_batch, text_encoders, tokenizers, proportion_empty_prompts, is_train=True): + prompt_embeds_list = [] + + captions = [] + for caption in prompt_batch: + if random.random() < proportion_empty_prompts: + captions.append("") + elif isinstance(caption, str): + captions.append(caption) + elif isinstance(caption, (list, np.ndarray)): + # take a random caption if there are multiple + captions.append(random.choice(caption) if is_train else caption[0]) + + with torch.no_grad(): + for tokenizer, text_encoder in zip(tokenizers, text_encoders): + text_inputs = tokenizer( + captions, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder( + text_input_ids.to(text_encoder.device), + output_hidden_states=True, + ) + + # We are only ALWAYS interested in the pooled output of the final text encoder + pooled_prompt_embeds = prompt_embeds[0] + prompt_embeds = prompt_embeds.hidden_states[-2] + bs_embed, seq_len, _ = prompt_embeds.shape + prompt_embeds = prompt_embeds.view(bs_embed, seq_len, -1) + prompt_embeds_list.append(prompt_embeds) + + prompt_embeds = torch.concat(prompt_embeds_list, dim=-1) + pooled_prompt_embeds = pooled_prompt_embeds.view(bs_embed, -1) + return prompt_embeds, pooled_prompt_embeds + + +def prepare_train_dataset(dataset, accelerator): + image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] + ) + + conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] + ) + + def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[args.image_column]] + images = [image_transforms(image) for image in images] + + conditioning_images = [image.convert("RGB") for image in examples[args.conditioning_image_column]] + conditioning_images = [conditioning_image_transforms(image) for image in conditioning_images] + + examples["pixel_values"] = images + examples["conditioning_pixel_values"] = 
conditioning_images + + return examples + + with accelerator.main_process_first(): + dataset = dataset.with_transform(preprocess_train) + + return dataset + + +def collate_fn(examples): + pixel_values = torch.stack([example["pixel_values"] for example in examples]) + pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() + + conditioning_pixel_values = torch.stack([example["conditioning_pixel_values"] for example in examples]) + conditioning_pixel_values = conditioning_pixel_values.to(memory_format=torch.contiguous_format).float() + + prompt_ids = torch.stack([torch.tensor(example["prompt_embeds"]) for example in examples]) + + add_text_embeds = torch.stack([torch.tensor(example["text_embeds"]) for example in examples]) + add_time_ids = torch.stack([torch.tensor(example["time_ids"]) for example in examples]) + + return { + "pixel_values": pixel_values, + "conditioning_pixel_values": conditioning_pixel_values, + "prompt_ids": prompt_ids, + "unet_added_conditions": {"text_embeds": add_text_embeds, "time_ids": add_time_ids}, + } + + +def patch_accelerator_for_fp16_training(accelerator): + org_unscale_grads = accelerator.scaler._unscale_grads_ + + def _unscale_grads_replacer(optimizer, inv_scale, found_inf, allow_fp16): + return org_unscale_grads(optimizer, inv_scale, found_inf, True) + + accelerator.scaler._unscale_grads_ = _unscale_grads_replacer + + +def main(args): + if args.report_to == "wandb" and args.hub_token is not None: + raise ValueError( + "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token." + " Please use `huggingface-cli login` to authenticate with the Hub." + ) + + logging_dir = Path(args.output_dir, args.logging_dir) + + if torch.backends.mps.is_available() and args.mixed_precision == "bf16": + # due to pytorch#99272, MPS does not yet support bfloat16. + raise ValueError( + "Mixed precision training with bfloat16 is not supported on MPS. Please use fp16 (recommended) or fp32 instead." + ) + + accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) + + accelerator = Accelerator( + gradient_accumulation_steps=args.gradient_accumulation_steps, + mixed_precision=args.mixed_precision, + log_with=args.report_to, + project_config=accelerator_project_config, + ) + + # Disable AMP for MPS. + if torch.backends.mps.is_available(): + accelerator.native_amp = False + + # Make one log on every process with the configuration for debugging. + logging.basicConfig( + format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", + datefmt="%m/%d/%Y %H:%M:%S", + level=logging.INFO, + ) + logger.info(accelerator.state, main_process_only=False) + if accelerator.is_local_main_process: + transformers.utils.logging.set_verbosity_warning() + diffusers.utils.logging.set_verbosity_info() + else: + transformers.utils.logging.set_verbosity_error() + diffusers.utils.logging.set_verbosity_error() + + # If passed along, set the training seed now. 
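+ # (accelerate's `set_seed` below seeds Python's `random`, NumPy and PyTorch RNGs on every process.)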
+ if args.seed is not None: + set_seed(args.seed) + + # Handle the repository creation + if accelerator.is_main_process: + if args.output_dir is not None: + os.makedirs(args.output_dir, exist_ok=True) + + if args.push_to_hub: + repo_id = create_repo( + repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token + ).repo_id + + # Load the tokenizers + tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer_2", + revision=args.revision, + use_fast=False, + ) + + # import correct text encoder classes + text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision + ) + text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" + ) + + # Load scheduler and models + noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") + text_encoder_one = text_encoder_cls_one.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, variant=args.variant + ) + text_encoder_two = text_encoder_cls_two.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision, variant=args.variant + ) + vae_path = ( + args.pretrained_model_name_or_path + if args.pretrained_vae_model_name_or_path is None + else args.pretrained_vae_model_name_or_path + ) + vae = AutoencoderKL.from_pretrained( + vae_path, + subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None, + revision=args.revision if args.pretrained_vae_model_name_or_path is None else None, + variant=args.variant if args.pretrained_vae_model_name_or_path is None else None, + ) + unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, variant=args.variant, use_safetensors=args.use_safetensors, + ) + + if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel() + controlnet.load_state_dict(load_file(args.controlnet_model_name_or_path)) + else: + logger.info("Initializing controlnet weights from scratch") + controlnet = ControlNetModel() + + def unwrap_model(model): + model = accelerator.unwrap_model(model) + model = model._orig_mod if is_compiled_module(model) else model + return model + + vae.requires_grad_(False) + text_encoder_one.requires_grad_(False) + text_encoder_two.requires_grad_(False) + + if args.enable_npu_flash_attention: + if is_torch_npu_available(): + logger.info("npu flash attention enabled.") + unet.enable_npu_flash_attention() + else: + raise ValueError("npu flash attention requires torch_npu extensions and is supported only on npu devices.") + + if args.enable_xformers_memory_efficient_attention: + if is_xformers_available(): + import xformers + + xformers_version = version.parse(xformers.__version__) + if xformers_version == version.parse("0.0.16"): + logger.warning( + "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." 
+ ) + unet.enable_xformers_memory_efficient_attention() + controlnet.enable_xformers_memory_efficient_attention() + else: + raise ValueError("xformers is not available. Make sure it is installed correctly") + + if args.gradient_checkpointing: + unet.enable_gradient_checkpointing() + controlnet.enable_gradient_checkpointing() + + # Check that all trainable models are in full precision + low_precision_error_string = ( + " Please make sure to always have all model weights in full float32 precision when starting training - even if" + " doing mixed precision training, copy of the weights should still be float32." + ) + + if unwrap_model(controlnet).dtype != torch.float32: + raise ValueError( + f"Controlnet loaded as datatype {unwrap_model(controlnet).dtype}. {low_precision_error_string}" + ) + + # Enable TF32 for faster training on Ampere GPUs, + # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices + if args.allow_tf32: + torch.backends.cuda.matmul.allow_tf32 = True + + if args.scale_lr: + args.learning_rate = ( + args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes + ) + args.learning_rate_controlnet = ( + args.learning_rate_controlnet * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes + ) + + # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs + if args.optimizer_type.lower() == "adamw": + if args.use_8bit_adam: + try: + import bitsandbytes as bnb + except ImportError: + raise ImportError( + "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." + ) + + optimizer_class = bnb.optim.AdamW8bit + else: + optimizer_class = torch.optim.AdamW + optimizer_kwargs = dict( + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, + ) + elif args.optimizer_type.lower() == "adafactor": + optimizer_class = transformers.optimization.Adafactor + optimizer_kwargs = dict( + relative_step=args.adafactor_relative_step, + scale_parameter=args.adafactor_scale_parameter, + warmup_init=args.adafactor_warmup_init, + ) + else: + raise ValueError(f"Optimizer type {args.optimizer_type} not supported.") + + # Optimizer creation + controlnet.train() + controlnet.requires_grad_(True) + params_to_optimize = [{'params': list(controlnet.parameters()), 'lr': args.learning_rate_controlnet}] + logger.info(f"Number of trainable parameters in controlnet: {sum(p.numel() for p in controlnet.parameters() if p.requires_grad)}") + + unet.train() + unet.requires_grad_(True) + unet_params = [] + pattern = re.compile(args.unet_trainable_param_pattern) + for name, param in unet.named_parameters(): + if pattern.match(name): + param.requires_grad = True + unet_params.append(param) + else: + param.requires_grad = False + logger.info(f"Number of trainable parameters in unet: {sum(p.numel() for p in unet.parameters() if p.requires_grad)}") + params_to_optimize.append({'params': unet_params, 'lr': args.learning_rate}) + optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + **optimizer_kwargs, + ) + + # For mixed precision training we cast the text_encoder and vae weights to half-precision + # as these models are only used for inference, keeping weights in full precision is not required. 
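+    # Note: the ControlNeXt module below is kept in float32 so its trainable weights are
+    # updated in full precision, while the UNet (including its trainable subset) and the
+    # text encoders follow `weight_dtype`; this is also why `patch_accelerator_for_fp16_training`
+    # is needed for fp16 runs.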
+ weight_dtype = torch.float32 + if accelerator.mixed_precision == "fp16": + weight_dtype = torch.float16 + elif accelerator.mixed_precision == "bf16": + weight_dtype = torch.bfloat16 + + # Move vae, unet and text_encoder to device and cast to weight_dtype + # The VAE is in float32 to avoid NaN losses. + if args.pretrained_vae_model_name_or_path is not None: + vae.to(accelerator.device, dtype=weight_dtype) + else: + vae.to(accelerator.device, dtype=torch.float32) + unet.to(accelerator.device, dtype=weight_dtype) + controlnet = controlnet.to(accelerator.device, dtype=torch.float32) + text_encoder_one.to(accelerator.device, dtype=weight_dtype) + text_encoder_two.to(accelerator.device, dtype=weight_dtype) + + # Here, we compute not just the text embeddings but also the additional embeddings + # needed for the SD XL UNet to operate. + def compute_embeddings(batch, proportion_empty_prompts, text_encoders, tokenizers, is_train=True): + original_size = (args.resolution, args.resolution) + target_size = (args.resolution, args.resolution) + crops_coords_top_left = (args.crops_coords_top_left_h, args.crops_coords_top_left_w) + prompt_batch = batch[args.caption_column] + + prompt_embeds, pooled_prompt_embeds = encode_prompt( + prompt_batch, text_encoders, tokenizers, proportion_empty_prompts, is_train + ) + add_text_embeds = pooled_prompt_embeds + + # Adapted from pipeline.StableDiffusionXLPipeline._get_add_time_ids + add_time_ids = list(original_size + crops_coords_top_left + target_size) + add_time_ids = torch.tensor([add_time_ids]) + + prompt_embeds = prompt_embeds.to(accelerator.device) + add_text_embeds = add_text_embeds.to(accelerator.device) + add_time_ids = add_time_ids.repeat(len(prompt_batch), 1) + add_time_ids = add_time_ids.to(accelerator.device, dtype=prompt_embeds.dtype) + unet_added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids} + + return {"prompt_embeds": prompt_embeds, **unet_added_cond_kwargs} + + # Let's first compute all the embeddings so that we can free up the text encoders + # from memory. + text_encoders = [text_encoder_one, text_encoder_two] + tokenizers = [tokenizer_one, tokenizer_two] + train_dataset = get_train_dataset(args, accelerator) + compute_embeddings_fn = functools.partial( + compute_embeddings, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + ) + with accelerator.main_process_first(): + from datasets.fingerprint import Hasher + + # fingerprint used by the cache for the other processes to load the result + # details: https://github.com/huggingface/diffusers/pull/4038#discussion_r1266078401 + new_fingerprint = Hasher.hash(args) + train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) + + del text_encoders, tokenizers + gc.collect() + torch.cuda.empty_cache() + + # Then get the training dataset ready to be passed to the dataloader. + train_dataset = prepare_train_dataset(train_dataset, accelerator) + + train_dataloader = torch.utils.data.DataLoader( + train_dataset, + shuffle=True, + collate_fn=collate_fn, + batch_size=args.train_batch_size, + num_workers=args.dataloader_num_workers, + ) + + # Scheduler and math around the number of training steps. + # Check the PR https://github.com/huggingface/diffusers/pull/8312 for detailed explanation. 
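+    # `accelerate` steps the LR scheduler once per process for every optimizer step, so the
+    # warmup and total step counts passed to `get_scheduler` are scaled by `num_processes`.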
+ num_warmup_steps_for_scheduler = args.lr_warmup_steps * accelerator.num_processes + if args.max_train_steps is None: + len_train_dataloader_after_sharding = math.ceil(len(train_dataloader) / accelerator.num_processes) + num_update_steps_per_epoch = math.ceil(len_train_dataloader_after_sharding / args.gradient_accumulation_steps) + num_training_steps_for_scheduler = ( + args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes + ) + else: + num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes + + lr_scheduler = get_scheduler( + args.lr_scheduler, + optimizer=optimizer, + num_warmup_steps=num_warmup_steps_for_scheduler, + num_training_steps=num_training_steps_for_scheduler, + num_cycles=args.lr_num_cycles, + power=args.lr_power, + ) + + # Prepare everything with our `accelerator`. + unet, controlnet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( + unet, controlnet, optimizer, train_dataloader, lr_scheduler + ) + + patch_accelerator_for_fp16_training(accelerator) + + # We need to recalculate our total training steps as the size of the training dataloader may have changed. + num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) + if args.max_train_steps is None: + args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch + if num_training_steps_for_scheduler != args.max_train_steps * accelerator.num_processes: + logger.warning( + f"The length of the 'train_dataloader' after 'accelerator.prepare' ({len(train_dataloader)}) does not match " + f"the expected length ({len_train_dataloader_after_sharding}) when the learning rate scheduler was created. " + f"This inconsistency may result in the learning rate scheduler not functioning properly." + ) + # Afterwards we recalculate our number of training epochs + args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) + + # We need to initialize the trackers we use, and also store our configuration. + # The trackers initializes automatically on the main process. + if accelerator.is_main_process: + tracker_config = dict(vars(args)) + + # tensorboard cannot handle list types for config + tracker_config.pop("validation_prompt") + tracker_config.pop("validation_image") + + accelerator.init_trackers(args.tracker_project_name, config=tracker_config) + + # Train! + total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps + + logger.info("***** Running training *****") + logger.info(f" Num examples = {len(train_dataset)}") + logger.info(f" Num batches each epoch = {len(train_dataloader)}") + logger.info(f" Num Epochs = {args.num_train_epochs}") + logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") + logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") + logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") + logger.info(f" Total optimization steps = {args.max_train_steps}") + global_step = 0 + first_epoch = 0 + + # Potentially load in the weights and states from a previous save + if args.resume_from_checkpoint: + if args.resume_from_checkpoint != "latest": + path = os.path.basename(args.resume_from_checkpoint) + else: + # Get the most recent checkpoint + dirs = os.listdir(args.output_dir) + dirs = [d for d in dirs if d.startswith("checkpoint")] + dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) + path = dirs[-1] if len(dirs) > 0 else None + + if path is None: + accelerator.print( + f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." + ) + args.resume_from_checkpoint = None + initial_global_step = 0 + else: + accelerator.print(f"Resuming from checkpoint {path}") + accelerator.load_state(os.path.join(args.output_dir, path)) + global_step = int(path.split("-")[1]) + + initial_global_step = global_step + first_epoch = global_step // num_update_steps_per_epoch + else: + initial_global_step = 0 + + progress_bar = tqdm( + range(0, args.max_train_steps), + initial=initial_global_step, + desc="Steps", + # Only show the progress bar once on each machine. + disable=not accelerator.is_local_main_process, + ) + loss_recorder = LossRecorder(gamma=0.9) + + image_logs = None + for epoch in range(first_epoch, args.num_train_epochs): + for step, batch in enumerate(train_dataloader): + with accelerator.accumulate(unet, controlnet): + # Convert images to latent space + if args.pretrained_vae_model_name_or_path is not None: + pixel_values = batch["pixel_values"].to(dtype=weight_dtype) + else: + pixel_values = batch["pixel_values"] + latents = vae.encode(pixel_values).latent_dist.sample() + latents = latents * vae.config.scaling_factor + if args.pretrained_vae_model_name_or_path is None: + latents = latents.to(weight_dtype) + + # Sample noise that we'll add to the latents + noise = torch.randn_like(latents) + bsz = latents.shape[0] + + # Sample a random timestep for each image + timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) + timesteps = timesteps.long() + + # Add noise to the latents according to the noise magnitude at each timestep + # (this is the forward diffusion process) + noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) + + # ControlNet conditioning. 
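+                # ControlNeXt's control module takes only the conditioning image and the
+                # timestep and returns {'out': feature map, 'scale': strength}. The scale is
+                # multiplied by --controlnet_scale_factor here, and the UNet adds
+                # out * scale onto its early latent features after matching the latents'
+                # mean/std (see models/unet.py).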
+ controlnet_image = batch["conditioning_pixel_values"].to(accelerator.device, dtype=controlnet.dtype) + controls = controlnet( + controlnet_image, + timesteps, + ) + controls['scale'] *= args.controlnet_scale_factor + + # Predict the noise residual + with accelerator.autocast(): + model_pred = unet( + noisy_latents, + timesteps, + encoder_hidden_states=batch["prompt_ids"], + added_cond_kwargs=batch["unet_added_conditions"], + controls=controls, + return_dict=False, + )[0] + + # Get the target for loss depending on the prediction type + if noise_scheduler.config.prediction_type == "epsilon": + target = noise + elif noise_scheduler.config.prediction_type == "v_prediction": + target = noise_scheduler.get_velocity(latents, noise, timesteps) + else: + raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") + + accelerator.backward(loss) + if accelerator.sync_gradients: + params_to_clip = [] + for p in params_to_optimize: + params_to_clip.extend(p["params"]) + accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) + optimizer.step() + lr_scheduler.step() + optimizer.zero_grad(set_to_none=args.set_grads_to_none) + + # Checks if the accelerator has performed an optimization step behind the scenes + if accelerator.sync_gradients: + progress_bar.update(1) + global_step += 1 + + # DeepSpeed requires saving weights on every device; saving weights only on the main process would cause issues. + if accelerator.distributed_type == DistributedType.DEEPSPEED or accelerator.is_main_process: + if global_step % args.checkpointing_steps == 0: + # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` + if args.checkpoints_total_limit is not None: + checkpoints = os.listdir(args.output_dir) + checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] + checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) + + # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints + if len(checkpoints) >= args.checkpoints_total_limit: + num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 + removing_checkpoints = checkpoints[0:num_to_remove] + + logger.info( + f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" + ) + logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") + + for removing_checkpoint in removing_checkpoints: + removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) + shutil.rmtree(removing_checkpoint) + + save_path = os.path.join(args.output_dir, "checkpoints", f"checkpoint-{global_step}") + save_models( + accelerator.unwrap_model(unet), + accelerator.unwrap_model(controlnet), + save_path, + args, + ) + logger.info(f"Saved state to {save_path}") + + if args.validation_prompt is not None and global_step % args.validation_steps == 0: + image_logs = log_validation( + vae=vae, + unet=accelerator.unwrap_model(unet), + controlnet=accelerator.unwrap_model(controlnet), + args=args, + accelerator=accelerator, + weight_dtype=weight_dtype, + step=global_step, + ) + + loss = loss.detach().item() + loss_recorder.add(loss=loss) + loss_avr: float = loss_recorder.moving_average(window=1000) + loss_ema: float = loss_recorder.ema + logs = {"loss/step": loss, 'loss_avr/step': loss_avr, 'loss_ema/step': loss_ema, 'lr/step': lr_scheduler.get_last_lr()[0]} + progress_bar.set_postfix(**logs) + accelerator.log(logs, 
step=global_step) + + if global_step >= args.max_train_steps: + break + + # Create the pipeline using using the trained modules and save it. + accelerator.wait_for_everyone() + if accelerator.is_main_process: + save_path = os.path.join(args.output_dir, "checkpoints", "final") + save_models( + accelerator.unwrap_model(unet), + accelerator.unwrap_model(controlnet), + save_path, + args, + ) + + # Run a final round of validation. + # Setting `vae`, `unet`, and `controlnet` to None to load automatically from `args.output_dir`. + image_logs = None + if args.validation_prompt is not None: + image_logs = log_validation( + vae=vae, + unet=accelerator.unwrap_model(unet), + controlnet=accelerator.unwrap_model(controlnet), + args=args, + accelerator=accelerator, + weight_dtype=weight_dtype, + step=global_step, + is_final_validation=True, + ) + + if args.push_to_hub: + save_model_card( + repo_id, + image_logs=image_logs, + base_model=args.pretrained_model_name_or_path, + repo_folder=args.output_dir, + ) + upload_folder( + repo_id=repo_id, + folder_path=args.output_dir, + commit_message="End of training", + ignore_patterns=["step_*", "epoch_*"], + ) + + accelerator.end_training() + + +if __name__ == "__main__": + args = parse_args() + main(args) diff --git a/ControlNeXt-SDXL-Training/utils/preprocess.py b/ControlNeXt-SDXL-Training/utils/preprocess.py new file mode 100644 index 0000000..e79b084 --- /dev/null +++ b/ControlNeXt-SDXL-Training/utils/preprocess.py @@ -0,0 +1,38 @@ +import cv2 +import numpy as np +from PIL import Image + + +def get_extractor(extractor_name): + if extractor_name is None: + return None + if extractor_name not in EXTRACTORS: + raise ValueError(f"Extractor {extractor_name} is not supported.") + return EXTRACTORS[extractor_name] + + +def canny_extractor(image: Image.Image, threshold1=None, threshold2=None) -> Image.Image: + image = np.array(image) + gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) + v = np.median(gray) + + sigma = 0.33 + threshold1 = threshold1 or int(max(0, (1.0 - sigma) * v)) + threshold2 = threshold2 or int(min(255, (1.0 + sigma) * v)) + + edges = cv2.Canny(gray, threshold1, threshold2) + edges = Image.fromarray(edges).convert("RGB") + return edges + + +def depth_extractor(image: Image.Image): + raise NotImplementedError("Depth extractor is not implemented yet.") + + +def pose_extractor(image: Image.Image): + raise NotImplementedError("Pose extractor is not implemented yet.") + + +EXTRACTORS = { + "canny": canny_extractor, +} diff --git a/ControlNeXt-SDXL-Training/utils/tools.py b/ControlNeXt-SDXL-Training/utils/tools.py new file mode 100644 index 0000000..8b44d83 --- /dev/null +++ b/ControlNeXt-SDXL-Training/utils/tools.py @@ -0,0 +1,151 @@ +import os +import gc +import torch +from torch import nn +from diffusers import UniPCMultistepScheduler, AutoencoderKL +from safetensors.torch import load_file +from pipeline.pipeline_controlnext import StableDiffusionXLControlNeXtPipeline +from models.unet import UNet2DConditionModel, UNET_CONFIG +from models.controlnet import ControlNetModel +from . 
import utils + + +def get_pipeline( + pretrained_model_name_or_path, + unet_model_name_or_path, + controlnet_model_name_or_path, + vae_model_name_or_path=None, + lora_path=None, + load_weight_increasement=False, + enable_xformers_memory_efficient_attention=False, + revision=None, + variant=None, + hf_cache_dir=None, + use_safetensors=True, + device=None, +): + pipeline_init_kwargs = {} + + if controlnet_model_name_or_path is not None: + print(f"loading controlnet from {controlnet_model_name_or_path}") + controlnet = ControlNetModel() + if controlnet_model_name_or_path is not None: + utils.load_safetensors(controlnet, controlnet_model_name_or_path) + else: + controlnet.scale = nn.Parameter(torch.tensor(0.), requires_grad=False) + controlnet.to(device, dtype=torch.float32) + pipeline_init_kwargs["controlnet"] = controlnet + + utils.log_model_info(controlnet, "controlnext") + else: + print(f"no controlnet") + + print(f"loading unet from {pretrained_model_name_or_path}") + if os.path.isfile(pretrained_model_name_or_path): + # load unet from local checkpoint + unet_sd = load_file(pretrained_model_name_or_path) if pretrained_model_name_or_path.endswith(".safetensors") else torch.load(pretrained_model_name_or_path) + unet_sd = utils.extract_unet_state_dict(unet_sd) + unet_sd = utils.convert_sdxl_unet_state_dict_to_diffusers(unet_sd) + unet = UNet2DConditionModel.from_config(UNET_CONFIG) + unet.load_state_dict(unet_sd, strict=True) + else: + from huggingface_hub import hf_hub_download + filename = "diffusion_pytorch_model" + if variant == "fp16": + filename += ".fp16" + if use_safetensors: + filename += ".safetensors" + else: + filename += ".pt" + unet_file = hf_hub_download( + repo_id=pretrained_model_name_or_path, + filename="unet" + '/' + filename, + cache_dir=hf_cache_dir, + ) + unet_sd = load_file(unet_file) if unet_file.endswith(".safetensors") else torch.load(pretrained_model_name_or_path) + unet_sd = utils.extract_unet_state_dict(unet_sd) + unet_sd = utils.convert_sdxl_unet_state_dict_to_diffusers(unet_sd) + unet = UNet2DConditionModel.from_config(UNET_CONFIG) + unet.load_state_dict(unet_sd, strict=True) + unet = unet.to(dtype=torch.float16) + utils.log_model_info(unet, "unet") + + if unet_model_name_or_path is not None: + print(f"loading controlnext unet from {unet_model_name_or_path}") + controlnext_unet_sd = load_file(unet_model_name_or_path) + controlnext_unet_sd = utils.convert_to_controlnext_unet_state_dict(controlnext_unet_sd) + unet_sd = unet.state_dict() + assert all( + k in unet_sd for k in controlnext_unet_sd), \ + f"controlnext unet state dict is not compatible with unet state dict, missing keys: {set(controlnext_unet_sd.keys()) - set(unet_sd.keys())}, extra keys: {set(unet_sd.keys()) - set(controlnext_unet_sd.keys())}" + if load_weight_increasement: + print("loading weight increasement") + for k in controlnext_unet_sd.keys(): + controlnext_unet_sd[k] = controlnext_unet_sd[k] + unet_sd[k] + unet.load_state_dict(controlnext_unet_sd, strict=False) + utils.log_model_info(controlnext_unet_sd, "controlnext unet") + + pipeline_init_kwargs["unet"] = unet + + if vae_model_name_or_path is not None: + print(f"loading vae from {vae_model_name_or_path}") + vae = AutoencoderKL.from_pretrained(vae_model_name_or_path, cache_dir=hf_cache_dir, torch_dtype=torch.float16).to(device) + pipeline_init_kwargs["vae"] = vae + + print(f"loading pipeline from {pretrained_model_name_or_path}") + if os.path.isfile(pretrained_model_name_or_path): + pipeline: StableDiffusionXLControlNeXtPipeline = 
StableDiffusionXLControlNeXtPipeline.from_single_file( + pretrained_model_name_or_path, + use_safetensors=pretrained_model_name_or_path.endswith(".safetensors"), + local_files_only=True, + cache_dir=hf_cache_dir, + **pipeline_init_kwargs, + ) + else: + pipeline: StableDiffusionXLControlNeXtPipeline = StableDiffusionXLControlNeXtPipeline.from_pretrained( + pretrained_model_name_or_path, + revision=revision, + variant=variant, + use_safetensors=use_safetensors, + cache_dir=hf_cache_dir, + **pipeline_init_kwargs, + ) + + pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config) + pipeline.set_progress_bar_config() + pipeline = pipeline.to(device, dtype=torch.float16) + + if lora_path is not None: + pipeline.load_lora_weights(lora_path) + if enable_xformers_memory_efficient_attention: + pipeline.enable_xformers_memory_efficient_attention() + + gc.collect() + if device.type == 'cuda' and torch.cuda.is_available(): + torch.cuda.empty_cache() + + return pipeline + + +def get_scheduler( + scheduler_name, + scheduler_config, +): + if scheduler_name == 'Euler A': + from diffusers.schedulers import EulerAncestralDiscreteScheduler + scheduler = EulerAncestralDiscreteScheduler.from_config(scheduler_config) + elif scheduler_name == 'UniPC': + from diffusers.schedulers import UniPCMultistepScheduler + scheduler = UniPCMultistepScheduler.from_config(scheduler_config) + elif scheduler_name == 'Euler': + from diffusers.schedulers import EulerDiscreteScheduler + scheduler = EulerDiscreteScheduler.from_config(scheduler_config) + elif scheduler_name == 'DDIM': + from diffusers.schedulers import DDIMScheduler + scheduler = DDIMScheduler.from_config(scheduler_config) + elif scheduler_name == 'DDPM': + from diffusers.schedulers import DDPMScheduler + scheduler = DDPMScheduler.from_config(scheduler_config) + else: + raise ValueError(f"Unknown scheduler: {scheduler_name}") + return scheduler diff --git a/ControlNeXt-SDXL-Training/utils/utils.py b/ControlNeXt-SDXL-Training/utils/utils.py new file mode 100644 index 0000000..b96a85e --- /dev/null +++ b/ControlNeXt-SDXL-Training/utils/utils.py @@ -0,0 +1,225 @@ +import math +from typing import Tuple, Union, Optional +from safetensors.torch import load_file +from transformers import PretrainedConfig + + +def count_num_parameters_of_safetensors_model(safetensors_path): + state_dict = load_file(safetensors_path) + return sum(p.numel() for p in state_dict.values()) + + +def import_model_class_from_model_name_or_path( + pretrained_model_name_or_path: str, revision: str, subfolder: str = None +): + text_encoder_config = PretrainedConfig.from_pretrained( + pretrained_model_name_or_path, revision=revision, subfolder=subfolder + ) + model_class = text_encoder_config.architectures[0] + if model_class == "CLIPTextModel": + from transformers import CLIPTextModel + return CLIPTextModel + elif model_class == "CLIPTextModelWithProjection": + from transformers import CLIPTextModelWithProjection + return CLIPTextModelWithProjection + else: + raise ValueError(f"{model_class} is not supported.") + + +def fix_clip_text_encoder_position_ids(text_encoder): + if hasattr(text_encoder.text_model.embeddings, "position_ids"): + text_encoder.text_model.embeddings.position_ids = text_encoder.text_model.embeddings.position_ids.long() + + +def load_controlnext_unet_state_dict(unet_sd, controlnext_unet_sd): + assert all( + k in unet_sd for k in controlnext_unet_sd), f"controlnext unet state dict is not compatible with unet state dict, missing keys: 
{set(controlnext_unet_sd.keys()) - set(unet_sd.keys())}, extra keys: {set(unet_sd.keys()) - set(controlnext_unet_sd.keys())}" + for k in controlnext_unet_sd.keys(): + unet_sd[k] = controlnext_unet_sd[k] + return unet_sd + + +def convert_to_controlnext_unet_state_dict(state_dict): + import re + pattern = re.compile(r'.*attn2.*to_out.*') + state_dict = {k: v for k, v in state_dict.items() if pattern.match(k)} + # state_dict = extract_unet_state_dict(state_dict) + if is_sdxl_state_dict(state_dict): + state_dict = convert_sdxl_unet_state_dict_to_diffusers(state_dict) + return state_dict + + +def make_unet_conversion_map(): + unet_conversion_map_layer = [] + + for i in range(3): # num_blocks is 3 in sdxl + # loop over downblocks/upblocks + for j in range(2): + # loop over resnets/attentions for downblocks + hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." + sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." + unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) + + if i < 3: + # no attention layers in down_blocks.3 + hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." + sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." + unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) + + for j in range(3): + # loop over resnets/attentions for upblocks + hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." + sd_up_res_prefix = f"output_blocks.{3*i + j}.0." + unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) + + # if i > 0: commentout for sdxl + # no attention layers in up_blocks.0 + hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." + sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." + unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) + + if i < 3: + # no downsample in down_blocks.3 + hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." + sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." + unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) + + # no upsample in up_blocks.3 + hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." + sd_upsample_prefix = f"output_blocks.{3*i + 2}.{2}." # change for sdxl + unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) + + hf_mid_atn_prefix = "mid_block.attentions.0." + sd_mid_atn_prefix = "middle_block.1." + unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) + + for j in range(2): + hf_mid_res_prefix = f"mid_block.resnets.{j}." + sd_mid_res_prefix = f"middle_block.{2*j}." + unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) + + unet_conversion_map_resnet = [ + # (stable-diffusion, HF Diffusers) + ("in_layers.0.", "norm1."), + ("in_layers.2.", "conv1."), + ("out_layers.0.", "norm2."), + ("out_layers.3.", "conv2."), + ("emb_layers.1.", "time_emb_proj."), + ("skip_connection.", "conv_shortcut."), + ] + + unet_conversion_map = [] + for sd, hf in unet_conversion_map_layer: + if "resnets" in hf: + for sd_res, hf_res in unet_conversion_map_resnet: + unet_conversion_map.append((sd + sd_res, hf + hf_res)) + else: + unet_conversion_map.append((sd, hf)) + + for j in range(2): + hf_time_embed_prefix = f"time_embedding.linear_{j+1}." + sd_time_embed_prefix = f"time_embed.{j*2}." + unet_conversion_map.append((sd_time_embed_prefix, hf_time_embed_prefix)) + + for j in range(2): + hf_label_embed_prefix = f"add_embedding.linear_{j+1}." + sd_label_embed_prefix = f"label_emb.0.{j*2}." 
+ unet_conversion_map.append((sd_label_embed_prefix, hf_label_embed_prefix)) + + unet_conversion_map.append(("input_blocks.0.0.", "conv_in.")) + unet_conversion_map.append(("out.0.", "conv_norm_out.")) + unet_conversion_map.append(("out.2.", "conv_out.")) + + return unet_conversion_map + + +def convert_unet_state_dict(src_sd, conversion_map): + converted_sd = {} + for src_key, value in src_sd.items(): + src_key_fragments = src_key.split(".")[:-1] # remove weight/bias + while len(src_key_fragments) > 0: + src_key_prefix = ".".join(src_key_fragments) + "." + if src_key_prefix in conversion_map: + converted_prefix = conversion_map[src_key_prefix] + converted_key = converted_prefix + src_key[len(src_key_prefix):] + converted_sd[converted_key] = value + break + src_key_fragments.pop(-1) + assert len(src_key_fragments) > 0, f"key {src_key} not found in conversion map" + + return converted_sd + + +def convert_sdxl_unet_state_dict_to_diffusers(sd): + unet_conversion_map = make_unet_conversion_map() + + conversion_dict = {sd: hf for sd, hf in unet_conversion_map} + return convert_unet_state_dict(sd, conversion_dict) + + +def extract_unet_state_dict(state_dict): + unet_sd = {} + UNET_KEY_PREFIX = "model.diffusion_model." + for k, v in state_dict.items(): + if k.startswith(UNET_KEY_PREFIX): + unet_sd[k[len(UNET_KEY_PREFIX):]] = v + return unet_sd + + +def is_sdxl_state_dict(state_dict): + return any(key.startswith('input_blocks') for key in state_dict.keys()) + + +def contains_unet_keys(state_dict): + UNET_KEY_PREFIX = "model.diffusion_model." + return any(k.startswith(UNET_KEY_PREFIX) for k in state_dict.keys()) + + +def load_safetensors(model, safetensors_path, strict=True, load_weight_increasement=False): + if not load_weight_increasement: + state_dict = load_file(safetensors_path) + model.load_state_dict(state_dict, strict=strict) + else: + state_dict = load_file(safetensors_path) + pretrained_state_dict = model.state_dict() + for k in state_dict.keys(): + state_dict[k] = state_dict[k] + pretrained_state_dict[k] + model.load_state_dict(state_dict, strict=False) + + +def log_model_info(model, name): + sd = model.state_dict() if hasattr(model, "state_dict") else model + print( + f"{name}:", + f" number of parameters: {sum(p.numel() for p in sd.values())}", + f" dtype: {sd[next(iter(sd))].dtype}", + sep='\n' + ) + + +def around_reso(img_w, img_h, reso: Union[Tuple[int, int], int], divisible: Optional[int] = None, max_width=None, max_height=None) -> Tuple[int, int]: + r""" + w*h = reso*reso + w/h = img_w/img_h + => w = img_ar*h + => img_ar*h^2 = reso + => h = sqrt(reso / img_ar) + """ + reso = reso if isinstance(reso, tuple) else (reso, reso) + divisible = divisible or 1 + if img_w * img_h <= reso[0] * reso[1] and (not max_width or img_w <= max_width) and (not max_height or img_h <= max_height) and img_w % divisible == 0 and img_h % divisible == 0: + return (img_w, img_h) + img_ar = img_w / img_h + around_h = math.sqrt(reso[0]*reso[1] / img_ar) + around_w = img_ar * around_h // divisible * divisible + if max_width and around_w > max_width: + around_h = around_h * max_width // around_w + around_w = max_width + elif max_height and around_h > max_height: + around_w = around_w * max_height // around_h + around_h = max_height + around_h = min(around_h, max_height) if max_height else around_h + around_w = min(around_w, max_width) if max_width else around_w + around_h = int(around_h // divisible * divisible) + around_w = int(around_w // divisible * divisible) + return (around_w, around_h) diff --git 
a/ControlNeXt-SDXL/README.md b/ControlNeXt-SDXL/README.md index 9f48704..e96acda 100644 --- a/ControlNeXt-SDXL/README.md +++ b/ControlNeXt-SDXL/README.md @@ -93,11 +93,13 @@ mv pretrained/ControlAny-SDXL/* pretrained/ Run the example: ```bash -bash examples/anime_canny/script.sh +bash examples/anime_canny/run.sh ``` ## Usage +### Canny Condition + ```python python run_controlnext.py --pretrained_model_name_or_path "Lykon/AAM_XL_AnimeMix" \ --unet_model_name_or_path "pretrained/anime_canny/unet.safetensors" \ @@ -119,6 +121,8 @@ python run_controlnext.py --pretrained_model_name_or_path "Lykon/AAM_XL_AnimeMix > --lora_path : downloaded other LoRA weight \ > --validation_image : the control condition image \ +### Depth Condition + ```python python run_controlnext.py --pretrained_model_name_or_path "stabilityai/stable-diffusion-xl-base-1.0" \ --unet_model_name_or_path "pretrained/vidit_depth/unet.safetensors" \ @@ -136,7 +140,7 @@ python run_controlnext.py --pretrained_model_name_or_path "stabilityai/stable-d > --controlnet_scale : the strength of the controlnet output. For depth, we recommend 1.0 \ -## Image Processor +## Run with Image Processor We also provide a simple image processor to help you automatically convert the image to the control condition, such as canny. @@ -158,4 +162,57 @@ python run_controlnext.py --pretrained_model_name_or_path "Lykon/AAM_XL_AnimeMix > --validation_image : the image to be processed to the control condition. \ > --validation_image_processor : the processor to apply to the validation image. We support `canny` now. -# TODO +# Training + +Hardware requirement: A single GPU with at least 20GB memory. + +## Quick Start + +Clone the repository: + +```bash +git clone https://github.com/dvlab-research/ControlNeXt +cd ControlNeXt/ControlNeXt-SDXL +``` + +Install the required packages: + +```bash +pip install -r requirements.txt +pip install accelerate datasets torchvision +``` + +Run the training script: + +```bash +bash examples/anime_canny/train.sh +``` + +The output will be saved in `train/example`. + +## Usage + +```python +accelerate launch train_controlnext.py --pretrained_model_name_or_path "stabilityai/stable-diffusion-xl-base-1.0" \ +--pretrained_vae_model_name_or_path "madebyollin/sdxl-vae-fp16-fix" \ +--variant fp16 \ +--use_safetensors \ +--output_dir "train/example" \ +--logging_dir "logs" \ +--resolution 1024 \ +--gradient_checkpointing \ +--set_grads_to_none \ +--proportion_empty_prompts 0.2 \ +--controlnet_scale_factor 1.0 \ +--mixed_precision fp16 \ +--enable_xformers_memory_efficient_attention \ +--dataset_name "Nahrawy/VIDIT-Depth-ControlNet" \ +--image_column "image" \ +--conditioning_image_column "depth_map" \ +--caption_column "caption" \ +--validation_prompt "a stone tower on a rocky island" \ +--validation_image "examples/vidit_depth/condition_0.png" +``` + +> --pretrained_model_name_or_path : pretrained base model \ +> --controlnet_scale_factor : the strength of the controlnet output. 
For depth, we recommend 1.0, and for canny, we recommend 0.35 \ diff --git a/ControlNeXt-SDXL/examples/anime_canny/script.sh b/ControlNeXt-SDXL/examples/anime_canny/run.sh similarity index 100% rename from ControlNeXt-SDXL/examples/anime_canny/script.sh rename to ControlNeXt-SDXL/examples/anime_canny/run.sh diff --git a/ControlNeXt-SDXL/examples/anime_canny/script_pp.sh b/ControlNeXt-SDXL/examples/anime_canny/run_with_pp.sh similarity index 100% rename from ControlNeXt-SDXL/examples/anime_canny/script_pp.sh rename to ControlNeXt-SDXL/examples/anime_canny/run_with_pp.sh diff --git a/ControlNeXt-SDXL/examples/vidit_depth/script.sh b/ControlNeXt-SDXL/examples/vidit_depth/run.sh similarity index 100% rename from ControlNeXt-SDXL/examples/vidit_depth/script.sh rename to ControlNeXt-SDXL/examples/vidit_depth/run.sh diff --git a/ControlNeXt-SDXL/models/controlnet.py b/ControlNeXt-SDXL/models/controlnet.py index e97bfba..9a505a9 100644 --- a/ControlNeXt-SDXL/models/controlnet.py +++ b/ControlNeXt-SDXL/models/controlnet.py @@ -338,36 +338,17 @@ class ControlNetModel(ModelMixin, ConfigMixin): @register_to_config def __init__( self, - sample_size: Optional[int] = None, - in_channels: int = 3, - down_block_types: Tuple[str] = ( - "Block2D", - "Block2D", - "Block2D", - "Block2D", - ), - block_out_channels: Tuple[int] = (320, 640, 1280, 1280), - addition_time_embed_dim: int = 256, - projection_class_embeddings_input_dim: int = 768, - layers_per_block: Union[int, Tuple[int]] = 2, - cross_attention_dim: Union[int, Tuple[int]] = 1024, - transformer_layers_per_block: Union[int, Tuple[int], Tuple[Tuple]] = 1, - num_attention_heads: Union[int, Tuple[int]] = (5, 10, 10, 20), - num_frames: int = 25, - conditioning_channels: int = 3, - conditioning_embedding_out_channels: Optional[Tuple[int, ...]] = (16, 32, 96, 256), + in_channels: List[int] = [128, 128], + out_channels: List[int] = [128, 256], + groups: List[int] = [4, 8], + time_embed_dim: int = 256, + final_out_channels: int = 320, ): super().__init__() self.time_proj = Timesteps(128, True, downscale_freq_shift=0) - timestep_input_dim = block_out_channels[0] - time_embed_dim = 256 self.time_embedding = TimestepEmbedding(128, time_embed_dim) - in_channels = [128, 128] - out_channels = [128, 256] - groups = [4, 8] - self.embedding = nn.Sequential( nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.GroupNorm(2, 64), @@ -424,20 +405,11 @@ def __init__( self.mid_convs.append( nn.Conv2d( in_channels=out_channels[-1], - out_channels=320, + out_channels=final_out_channels, kernel_size=1, stride=1, )) - - # self.scale_linear = nn.Linear(time_embed_dim, time_embed_dim) - # self.time_out_scale = nn.Linear(time_embed_dim, 1, bias=False) - # nn.init.zeros_(self.time_out_scale.weight) - # self.out = nn.Conv2d(4, 4, kernel_size=1, stride=1, padding=0) - # for p in self.out.parameters(): - # nn.init.zeros_(p) - # self.scale = nn.Parameter(torch.tensor(1.)) - self.scale = 1 # nn.Parameter(torch.tensor(1.)) - # self.scale = nn.Parameter(torch.tensor(0.8766)) + self.scale = 1.0 # nn.Parameter(torch.tensor(1.)) def _set_gradient_checkpointing(self, module, value=False): if hasattr(module, "gradient_checkpointing"): @@ -477,8 +449,6 @@ def forward( self, sample: torch.FloatTensor, timestep: Union[torch.Tensor, float, int], - *args, - **kwargs ) -> Union[ControlNetOutput, Tuple]: timesteps = timestep @@ -497,29 +467,22 @@ def forward( # broadcast to batch dimension in a way that's compatible with ONNX/Core ML batch_size = sample.shape[0] timesteps = 
timesteps.expand(batch_size) - t_emb = self.time_proj(timesteps) - # `Timesteps` does not contain any weights and will always return f32 tensors # but time_embedding might actually be running in fp16. so we need to cast here. # there might be better ways to encapsulate this. t_emb = t_emb.to(dtype=sample.dtype) - emb_batch = self.time_embedding(t_emb) # Repeat the embeddings num_video_frames times # emb: [batch, channels] -> [batch * frames, channels] emb = emb_batch - sample = self.embedding(sample) - for res, downsample in zip(self.down_res, self.down_sample): sample = res(sample, emb) sample = downsample(sample, emb) - sample = self.mid_convs[0](sample) + sample sample = self.mid_convs[1](sample) - return { 'out': sample, 'scale': self.scale, diff --git a/ControlNeXt-SDXL/models/unet.py b/ControlNeXt-SDXL/models/unet.py index 6d2314c..fcd5a7a 100644 --- a/ControlNeXt-SDXL/models/unet.py +++ b/ControlNeXt-SDXL/models/unet.py @@ -1252,6 +1252,7 @@ def forward( scale_lora_layers(self, lora_scale) is_controlnet = mid_block_additional_residual is not None and down_block_additional_residuals is not None + is_controlnext = controls is not None # using new arg down_intrablock_additional_residuals for T2I-Adapters, to distinguish from controlnets is_adapter = down_intrablock_additional_residuals is not None # maintain backward compatibility for legacy usage, where @@ -1269,7 +1270,9 @@ def forward( down_intrablock_additional_residuals = down_block_additional_residuals is_adapter = True - if controls is not None: + down_block_res_samples = (sample,) + + if is_controlnext: scale = controls['scale'] controls = controls['out'].to(sample) mean_latents, std_latents = torch.mean(sample, dim=(1, 2, 3), keepdim=True), torch.std(sample, dim=(1, 2, 3), keepdim=True) @@ -1278,7 +1281,6 @@ def forward( controls = nn.functional.adaptive_avg_pool2d(controls, sample.shape[-2:]) sample = sample + controls * scale - down_block_res_samples = (sample,) for i, downsample_block in enumerate(self.down_blocks): if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention: # For t2i-adapter CrossAttnDownBlock2D diff --git a/ControlNeXt-SDXL/pipeline/pipeline_controlnext.py b/ControlNeXt-SDXL/pipeline/pipeline_controlnext.py index 5d7567d..fa6d2cc 100644 --- a/ControlNeXt-SDXL/pipeline/pipeline_controlnext.py +++ b/ControlNeXt-SDXL/pipeline/pipeline_controlnext.py @@ -1261,7 +1261,6 @@ def __call__( controls = self.controlnet( controlnet_image, t, - return_dict=False ) # This makes the effect of the controlnext much more stronger diff --git a/ControlNeXt-SDXL/run_controlnext.py b/ControlNeXt-SDXL/run_controlnext.py index 4713e4d..04b5026 100644 --- a/ControlNeXt-SDXL/run_controlnext.py +++ b/ControlNeXt-SDXL/run_controlnext.py @@ -22,6 +22,7 @@ def log_validation( revision=args.revision, variant=args.variant, hf_cache_dir=args.hf_cache_dir, + use_safetensors=args.use_safetensors, device=device, ) @@ -161,6 +162,11 @@ def parse_args(input_args=None): default=None, help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' 
fp16", ) + parser.add_argument( + "--use_safetensors", + action="store_true", + help="Whether or not to use safetensors to load the pipeline.", + ) parser.add_argument( "--output_dir", type=str, diff --git a/ControlNeXt-SDXL/utils/tools.py b/ControlNeXt-SDXL/utils/tools.py index f42a337..8b44d83 100644 --- a/ControlNeXt-SDXL/utils/tools.py +++ b/ControlNeXt-SDXL/utils/tools.py @@ -1,4 +1,5 @@ import os +import gc import torch from torch import nn from diffusers import UniPCMultistepScheduler, AutoencoderKL @@ -20,6 +21,7 @@ def get_pipeline( revision=None, variant=None, hf_cache_dir=None, + use_safetensors=True, device=None, ): pipeline_init_kwargs = {} @@ -39,25 +41,33 @@ def get_pipeline( print(f"no controlnet") print(f"loading unet from {pretrained_model_name_or_path}") - if os.path.isfile(pretrained_model_name_or_path) and pretrained_model_name_or_path.endswith(".safetensors"): - # load unet from safetensors checkpoint - unet_sd = load_file(pretrained_model_name_or_path) + if os.path.isfile(pretrained_model_name_or_path): + # load unet from local checkpoint + unet_sd = load_file(pretrained_model_name_or_path) if pretrained_model_name_or_path.endswith(".safetensors") else torch.load(pretrained_model_name_or_path) unet_sd = utils.extract_unet_state_dict(unet_sd) unet_sd = utils.convert_sdxl_unet_state_dict_to_diffusers(unet_sd) unet = UNet2DConditionModel.from_config(UNET_CONFIG) unet.load_state_dict(unet_sd, strict=True) - if variant == "fp16": - unet = unet.to(dtype=torch.float16) else: - unet = UNet2DConditionModel.from_pretrained( - pretrained_model_name_or_path, - revision=revision, - variant=variant, - subfolder="unet", - use_safetensors=True, + from huggingface_hub import hf_hub_download + filename = "diffusion_pytorch_model" + if variant == "fp16": + filename += ".fp16" + if use_safetensors: + filename += ".safetensors" + else: + filename += ".pt" + unet_file = hf_hub_download( + repo_id=pretrained_model_name_or_path, + filename="unet" + '/' + filename, cache_dir=hf_cache_dir, - torch_dtype=torch.float16 if variant == "fp16" else None, ) + unet_sd = load_file(unet_file) if unet_file.endswith(".safetensors") else torch.load(pretrained_model_name_or_path) + unet_sd = utils.extract_unet_state_dict(unet_sd) + unet_sd = utils.convert_sdxl_unet_state_dict_to_diffusers(unet_sd) + unet = UNet2DConditionModel.from_config(UNET_CONFIG) + unet.load_state_dict(unet_sd, strict=True) + unet = unet.to(dtype=torch.float16) utils.log_model_info(unet, "unet") if unet_model_name_or_path is not None: @@ -69,6 +79,7 @@ def get_pipeline( k in unet_sd for k in controlnext_unet_sd), \ f"controlnext unet state dict is not compatible with unet state dict, missing keys: {set(controlnext_unet_sd.keys()) - set(unet_sd.keys())}, extra keys: {set(unet_sd.keys()) - set(controlnext_unet_sd.keys())}" if load_weight_increasement: + print("loading weight increasement") for k in controlnext_unet_sd.keys(): controlnext_unet_sd[k] = controlnext_unet_sd[k] + unet_sd[k] unet.load_state_dict(controlnext_unet_sd, strict=False) @@ -94,8 +105,8 @@ def get_pipeline( pipeline: StableDiffusionXLControlNeXtPipeline = StableDiffusionXLControlNeXtPipeline.from_pretrained( pretrained_model_name_or_path, revision=revision, - use_safetensors=True, variant=variant, + use_safetensors=use_safetensors, cache_dir=hf_cache_dir, **pipeline_init_kwargs, ) @@ -109,4 +120,32 @@ def get_pipeline( if enable_xformers_memory_efficient_attention: pipeline.enable_xformers_memory_efficient_attention() + gc.collect() + if device.type == 'cuda' and 
torch.cuda.is_available(): + torch.cuda.empty_cache() + return pipeline + + +def get_scheduler( + scheduler_name, + scheduler_config, +): + if scheduler_name == 'Euler A': + from diffusers.schedulers import EulerAncestralDiscreteScheduler + scheduler = EulerAncestralDiscreteScheduler.from_config(scheduler_config) + elif scheduler_name == 'UniPC': + from diffusers.schedulers import UniPCMultistepScheduler + scheduler = UniPCMultistepScheduler.from_config(scheduler_config) + elif scheduler_name == 'Euler': + from diffusers.schedulers import EulerDiscreteScheduler + scheduler = EulerDiscreteScheduler.from_config(scheduler_config) + elif scheduler_name == 'DDIM': + from diffusers.schedulers import DDIMScheduler + scheduler = DDIMScheduler.from_config(scheduler_config) + elif scheduler_name == 'DDPM': + from diffusers.schedulers import DDPMScheduler + scheduler = DDPMScheduler.from_config(scheduler_config) + else: + raise ValueError(f"Unknown scheduler: {scheduler_name}") + return scheduler From 2617d87d55c3c47b70a71373d55da6d1a0e9ca32 Mon Sep 17 00:00:00 2001 From: Pbihao <1435343052@qq.com> Date: Tue, 20 Aug 2024 11:20:51 +0800 Subject: [PATCH 4/4] Update README.md --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 04a8a94..d171ddd 100644 --- a/README.md +++ b/README.md @@ -20,6 +20,8 @@ We spent a lot of time to find these. Now share with all of you. May these will - **ControlNeXt-SDXL** [ [Link](ControlNeXt-SDXL) ] : Controllable image generation. Our model is built upon [Stable Diffusion XL ](stabilityai/stable-diffusion-xl-base-1.0). Fewer trainable parameters, faster convergence, improved efficiency, and can be integrated with LoRA. +- **ControlNeXt-SDXL-Training** [ [Link](ControlNeXt-SDXL-Training) ] : The training scripts for our `ControlNeXt-SDXL` [ [Link](ControlNeXt-SDXL) ]. + - **ControlNeXt-SVD-v2** [ [Link](ControlNeXt-SVD-v2) ] : Generate the video controlled by the sequence of human poses. In the v2 version, we implement several improvements: a higher-quality collected training dataset, larger training and inference batch frames, higher generation resolution, enhanced human-related video generation through continual training, and pose alignment for inference to improve overall performance. - **ControlNeXt-SVD-v2-Training** [ [Link](ControlNeXt-SVD-v2-Training) ] : The training scripts for our `ControlNeXt-SVD-v2` [ [Link](ControlNeXt-SVD-v2) ].