I noticed that AI-Toolkit has a flag in its config that switches training from Dev to Schnell.
If that is supported here, how do I set it?
Below are my training parameters:
```toml
model_train_type = "flux-lora"
pretrained_model_name_or_path = "/lora-scripts/sd-models/flux1-schnell.safetensors"
ae = "/lora-scripts/sd-models/ae.safetensors"
clip_l = "/lora-scripts/sd-models/clip_l.safetensors"
t5xxl = "/lora-scripts/sd-models/t5xxl_fp16.safetensors"
timestep_sampling = "sigmoid"
sigmoid_scale = 1
model_prediction_type = "raw"
discrete_flow_shift = 1
loss_type = "l2"
guidance_scale = 1
train_data_dir = "/lora-scripts/train/IMXL_1536X1024"
prior_loss_weight = 0
resolution = "1536,1024"
enable_bucket = false
min_bucket_reso = 256
max_bucket_reso = 2048
bucket_reso_steps = 64
bucket_no_upscale = false
output_name = "IM_SCHNELL_V1"
output_dir = "./output"
save_model_as = "safetensors"
save_precision = "bf16"
save_every_n_epochs = 1
max_train_epochs = 15
train_batch_size = 2
gradient_checkpointing = true
gradient_accumulation_steps = 1
network_train_unet_only = false
network_train_text_encoder_only = false
learning_rate = 0.0001
unet_lr = 0.0001
text_encoder_lr = 0.00001
lr_scheduler = "cosine_with_restarts"
lr_warmup_steps = 0
lr_scheduler_num_cycles = 1
optimizer_type = "AdamW8bit"
network_module = "networks.lora_flux"
network_dim = 64
network_alpha = 64
```
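
For comparison, AI-Toolkit (the project referenced above) selects Schnell through the `model` section of its YAML config rather than a dedicated on/off flag. The fragment below is only a sketch based on its published example configs; the key names (`name_or_path`, `is_flux`, `assistant_lora_path`) should be verified against the current examples in that repo before use:

```yaml
# Sketch of the model section from an AI-Toolkit FLUX.1-schnell example config.
# Key names are assumptions drawn from the project's example files, not from
# lora-scripts, and may differ in the version you are running.
model:
  name_or_path: "black-forest-labs/FLUX.1-schnell"
  is_flux: true
```

In lora-scripts/sd-scripts style configs such as the one above, there is no equivalent `is_schnell`-type switch documented in this issue; pointing `pretrained_model_name_or_path` at the Schnell checkpoint, as the config already does, is what the question is asking to confirm.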