diff --git a/.github/workflows/typos.yaml b/.github/workflows/typos.yaml
index 37f194d80..bd4ef334e 100644
--- a/.github/workflows/typos.yaml
+++ b/.github/workflows/typos.yaml
@@ -18,4 +18,4 @@ jobs:
       - uses: actions/checkout@v4
 
       - name: typos-action
-        uses: crate-ci/typos@v1.16.23
+        uses: crate-ci/typos@v1.16.26
diff --git a/.release b/.release
index cfd03f96c..adb070518 100644
--- a/.release
+++ b/.release
@@ -1 +1 @@
-v22.4.0
+v22.4.1
diff --git a/README.md b/README.md
index 91f0ba839..dc6925941 100644
--- a/README.md
+++ b/README.md
@@ -651,6 +651,9 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking b
 
 ## Change History
 
+* 2024/01/02 (v22.4.1)
+- Minor bug fixes and enhancements.
+
 * 2023/12/28 (v22.4.0)
 - Fixed to work `tools/convert_diffusers20_original_sd.py`. Thanks to Disty0! PR [#1016](https://github.com/kohya-ss/sd-scripts/pull/1016)
 - The issues in multi-GPU training are fixed. Thanks to Isotr0py! PR [#989](https://github.com/kohya-ss/sd-scripts/pull/989) and [#1000](https://github.com/kohya-ss/sd-scripts/pull/1000)
diff --git a/finetune/make_captions.py b/finetune/make_captions.py
index f1b83b151..071669092 100644
--- a/finetune/make_captions.py
+++ b/finetune/make_captions.py
@@ -13,7 +13,7 @@ from torchvision import transforms
 from torchvision.transforms.functional import InterpolationMode
 sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
 # sys.path.append(os.path.dirname(__file__))
-from blip.blip import blip_decoder
+from blip.blip import blip_decoder, is_url
 import library.train_util as train_util
 
 DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
diff --git a/localizations/zh-TW.json b/localizations/zh-TW.json
new file mode 100644
index 000000000..964655ecb
--- /dev/null
+++ b/localizations/zh-TW.json
@@ -0,0 +1,501 @@
+
+ {
+ "-Need to add resources here": "-需要在此添加資料",
+ "(Experimental, Optional) Since the latent is close to a normal distribution, it may be a good idea to specify a value around 1/10 the noise offset.": " (選填,實驗性功能) 由於潛空間接近常態分布,或許指定一個噪聲偏移約 1/10 的數值是個不錯的作法。",
+ "(Optional) Add training comment to be included in metadata": " (選填) 在訓練的後設資料加入註解。",
+ "(Optional) Enforce number of epoch": " (選填) 強制指定一個週期 (Epoch) 數量",
+ "(Optional) Enforce number of steps": " (選填) 強制指定一個總步數數量",
+ "(Optional) Save only the specified number of models (old models will be deleted)": " (選填) 僅儲存指定數量的模型 (舊有模型將被刪除) ",
+ "(Optional) Save only the specified number of states (old models will be deleted)": " (選填) 僅儲存指定數量的訓練資料 (舊有訓練資料將被刪除) ",
+ "(Optional) Stable Diffusion base model": " (選填) 穩定擴散基礎模型",
+ "(Optional) Stable Diffusion model": " (選填) 穩定擴散模型",
+ "(Optional) The model is saved every specified steps": " (選填) 模型會在指定的間隔步數後儲存",
+ "(Optional)": " (選填) ",
+ "Optional": "選填",
+ "Optional. 
Se": "選填", + "(Optional) Directory containing the regularisation images": " (選填) 含有正規化圖片的資料夾", + "Eg: asd": "例如:asd", + "Eg: person": "例如:person", + "Folder containing the concepts folders to balance...": "含有要平衡的概念資料夾的資料夾路徑...", + "Balance dataset": "平衡資料集", + "Clamp Quantile": "夾取分位數", + "Minimum difference": "最小化差異", + "network dim for linear layer in fixed mode": "固定模式下線性層的網路維度", + "network dim for conv layer in fixed mode": "固定模式下卷積層的網路維度", + "Sparsity for sparse bias": "稀疏偏差的稀疏度", + "path for the file to save...": "儲存檔案的路徑...", + "Verification output": "驗證輸出", + "Verification error": "驗證錯誤", + "New Rank": "新維度 (Network Rank)", + "New Conv Rank": "新卷積維度 (Conv Rank)", + "Directory containing the images to group": "含有要分組的圖片的資料夾路徑", + "Directory where the grouped images will be stored": "要儲存分組圖片的資料夾路徑", + "Group images": "分組圖片", + "Group Images": "分組圖片", + "Captioning": "標記文字", + "Caption images": "標記圖片", + "(Optional) model id for GIT in Hugging Face": " (選填) Hugging Face 中 GIT 的模型 ID", + "Undesired tags": "不需要的標籤", + "(Optional) Separate `undesired_tags` with comma `(,)` if you want to remove multiple tags, e.g. `1girl,solo,smile`.": " (選填) 如果要移除多個標籤,請使用逗號 `(,)` 分隔 `undesired_tags`,例如:`1girl,solo,smile`。", + "Prefix to add to WD14 caption": "要加入到 WD14 標記文字的前綴", + "Postfix to add to WD14 caption": "要加入到 WD14 標記文字的後綴", + "This option appends the tags to the existing tags, instead of replacing them.": "此選項將標籤附加到現有標籤,而不是取代它們。", + "Append TAGs": "附加標籤", + "Replace underscores in filenames with spaces": "將檔案名稱中的底線替換為空格", + "Tag subfolders images as well": "標記子資料夾中的圖片", + "Recursive": "遞迴標記", + "Debug while tagging, it will print your image file with general tags and character tags.": "標記時除錯,它會列印出你的圖片檔案與一般標籤和角色標籤。", + "Verbose logging": "詳細記錄", + "Show frequency of tags for images.": "顯示圖片的標籤頻率。", + "Show tags frequency": "顯示標籤頻率", + "Model": "模型", + "Usefull to force model re download when switching to onnx": "切換到 onnx 時,強制重新下載模型", + "Force model re-download": "強制重新下載模型", + "General threshold": "一般閾值", + "Adjust `general_threshold` for pruning tags (less tags, less flexible)": "調整 `general_threshold` 以修剪標籤 (標籤越少,彈性越小)", + "Character threshold": "角色閾值", + "useful if you want to train with character": "如果你想要使用角色訓練,這很有用", + "Max dataloader workers": "最大資料載入工作數", + "Comma separated list of tags": "逗號分隔的標籤列表", + "Load 💾": "讀取 💾", + "Import 📄": "匯入 📄", + "Options": "選項", + "Caption Separator": "標記文字分隔符號", + "VAE batch size": "VAE 批次大小", + "Max grad norm": "最大梯度規範 (Max grad norm)", + "Learning rate Unet": "Unet 學習率", + "Set to 0 to not train the Unet": "設為 0 以不訓練 Unet", + "Learning rate TE": "文字編碼器學習率", + "Set to 0 to not train the Text Encoder": "設為 0 以不訓練文字編碼器", + "Tools": "工具", + "Convert to LCM": "轉換模型到 LCM", + "This utility convert a model to an LCM model.": "此工具將模型轉換為 LCM 模型。", + "Stable Diffusion model to convert to LCM": "要轉換為 LCM 的穩定擴散模型", + "Name of the new LCM model": "新 LCM 模型的名稱", + "Path to the LCM file to create": "要建立的 LCM 檔案的路徑", + "type the configuration file path or use the 'Open' button above to select it...": "輸入設定檔案的路徑,或使用上方的「Open 📂」按鈕來選擇它...", + "Adjusts the scale of the rank dropout to maintain the average dropout rate, ensuring more consistent regularization across different layers.": "調整維度 (Rank) 捨棄的比例,以維持平均捨棄率,確保在不同層之間更一致的正規化。", + "Rank Dropout Scale": "維度 (Rank) 捨棄比例", + "Selects trainable layers in a network, but trains normalization layers identically across methods as they lack matrix decomposition.": "選擇網路中可訓練的層,但由於缺乏矩陣分解,因此以相同的方式訓練正規化層。", + "Train Norm": "訓練正規化", + 
"LyCORIS Preset": "LyCORIS 預設範本", + "Presets": "預設範本", + "Efficiently decompose tensor shapes, resulting in a sequence of convolution layers with varying dimensions and Hadamard product implementation through multiplication of two distinct tensors.": "有效地分解張量形狀,從而產生一系列具有不同維度的卷積層,並通過兩個不同張量的乘法實現 Hadamard 乘積。", + "Use Tucker decomposition": "使用 Tucker 分解", + "Train an additional scalar in front of the weight difference, use a different weight initialization strategy.": "在權重差異前訓練額外的標量,使用不同的權重初始化策略。", + "Use Scalar": "使用標量", + "applies an additional scaling factor to the oft_blocks, allowing for further adjustment of their impact on the model's transformations.": "對 oft_blocks 應用額外的縮放因子,從而進一步調整其對模型轉換的影響。", + "Rescaled OFT": "重新縮放 OFT", + "Constrain OFT": "限制 OFT", + "Limits the norm of the oft_blocks, ensuring that their magnitude does not exceed a specified threshold, thus controlling the extent of the transformation applied.": "限制 oft_blocks 的規範,確保其大小不超過指定的閾值,從而控制應用的轉換程度。", + "LoKr factor": "LoKr 因子", + "Set if we change the information going into the system (True) or the information coming out of it (False).": "選用後會改變進入系統的訓練資料,若不選則會改變輸出系統的訓練資料。", + "iA3 train on input": "iA3 訓練輸入", + "Controls whether both input and output dimensions of the layer's weights are decomposed into smaller matrices for reparameterization.": "控制層權重的輸入和輸出維度是否被分解為較小的矩陣以進行重新參數化。", + "LoKr decompose both": "LoKr 同時分解", + "Strength of the LCM": "LCM 的強度", + "folder where the training configuration files will be saved": "訓練設定檔案將會被儲存的資料夾路徑", + "folder where the training images are located": "訓練圖片的資料夾路徑", + "folder where the model will be saved": "模型將會被儲存的資料夾路徑", + "Model type": "模型類型", + "Extract LCM": "提取 LCM", + "Verfiy LoRA": "驗證 LoRA", + "Path to an existing LoRA network weights to resume training from": "要從中繼續訓練的現有 LoRA 網路權重的路徑", + "Seed": "種子", + "(Optional) eg:1234": " (選填) 例如:1234", + "(Optional) eg: \"milestones=[1,10,30,50]\" \"gamma=0.1\"": " (選填) 例如: \"milestones=[1,10,30,50]\" \"gamma=0.1\"", + "(Optional) eg: relative_step=True scale_parameter=True warmup_init=True": " (選填) 例如:relative_step=True scale_parameter=True warmup_init=True", + "(Optional) For Cosine with restart and polynomial only": " (選填) 只適用於餘弦函數並使用重啟 (cosine_with_restart) 和多項式 (polynomial)", + "Network Rank (Dimension)": "網路維度 (Rank)", + "Network Alpha": "網路 Alpha", + "alpha for LoRA weight scaling": "LoRA 權重縮放的 Alpha 值", + "Convolution Rank (Dimension)": "卷積維度 (Rank)", + "Convolution Alpha": "卷積 Alpha", + "Max Norm Regularization is a technique to stabilize network training by limiting the norm of network weights. It may be effective in suppressing overfitting of LoRA and improving stability when used with other LoRAs. See PR #545 on kohya_ss/sd_scripts repo for details. Recommended setting: 1. Higher is weaker, lower is stronger.": "最大規範正規化是一種穩定網路訓練的技術,通過限制網路權重的規範來實現。當與其他 LoRA 一起使用時,它可能會有效地抑制 LoRA 的過度擬合並提高穩定性。詳細資料請見 kohya_ss/sd_scripts Github 上的 PR#545。建議設置:1.0 越高越弱,越低越強。", + "Is a normal probability dropout at the neuron level. In the case of LoRA, it is applied to the output of down. Recommended range 0.1 to 0.5": "是神經元級的正常概率捨棄。在 LoRA 的情況下,它被應用於 Down Sampler 的輸出。建議範圍 0.1 到 0.5", + "can specify `rank_dropout` to dropout each rank with specified probability. Recommended range 0.1 to 0.3": "可以指定 `rank_dropout` 以指定的概率捨棄每個維度。建議範圍 0.1 到 0.3", + "can specify `module_dropout` to dropout each rank with specified probability. 
Recommended range 0.1 to 0.3": "可以指定 `module_dropout` 以指定的概率捨棄每個維度。建議範圍 0.1 到 0.3", + "Folder where the training folders containing the images are located": "訓練資料夾的資料夾路徑,包含圖片", + "(Optional) Folder where where the regularization folders containing the images are located": " (選填) 正規化資料夾的資料夾路徑,包含圖片", + "Folder to output trained model": "輸出訓練模型的資料夾路徑", + "Optional: enable logging and output TensorBoard log to this folder": "選填:啟用記錄並將 TensorBoard 記錄輸出到此資料夾", + "Pretrained model name or path": "預訓練模型名稱或路徑", + "enter the path to custom model or name of pretrained model": "輸入自訂模型的路徑或預訓練模型的名稱", + "(Name of the model to output)": " (輸出的模型名稱)", + "LoRA type": "LoRA 類型", + "(Optional) path to checkpoint of vae to replace for training": " (選填) 要替換訓練的 VAE checkpoint 的路徑", + "(Optional) Use to provide additional parameters not handled by the GUI. Eg: --some_parameters \"value\"": " (選填) 用於提供 GUI 未處理的額外參數。例如:--some_parameters \"value\"", + "Automates the processing of noise, allowing for faster model fitting, as well as balancing out color issues": "自動處理噪聲,可以更快地擬合模型,同時平衡顏色問題", + "Debiased Estimation loss": "偏差估算損失 (Debiased Estimation loss)", + "(Optional) Override number of epoch. Default: 8": " (選填) 覆蓋週期 (Epoch) 數量。預設:8", + "Weights": "權重", + "Down LR weights": "Down LR 權重", + "Mid LR weights": "Mid LR 權重", + "Up LR weights": "Up LR 權重", + "Blocks LR zero threshold": "區塊 LR 零閾值", + "(Optional) eg: 0,0,0,0,0,0,1,1,1,1,1,1": " (選填) 例如:0,0,0,0,0,0,1,1,1,1,1,1", + "(Optional) eg: 0.5": " (選填) 例如:0.5", + "(Optional) eg: 0.1": " (選填) 例如:0.1", + "Specify the learning rate weight of the down blocks of U-Net.": "指定 U-Net 下區塊的學習率權重。", + "Specify the learning rate weight of the mid blocks of U-Net.": "指定 U-Net 中區塊的學習率權重。", + "Specify the learning rate weight of the up blocks of U-Net. The same as down_lr_weight.": "指定 U-Net 上區塊的學習率權重。與 down_lr_weight 相同。", + "If the weight is not more than this value, the LoRA module is not created. The default is 0.": "如果權重不超過此值,則不會創建 LoRA 模組。預設為 0。", + "Blocks": "區塊", + "Block dims": "區塊維度", + "(Optional) eg: 2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2": " (選填) 例如:2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2", + "Specify the dim (rank) of each block. Specify 25 numbers.": "指定每個區塊的維度 (Rank)。指定 25 個數字。", + "Specify the alpha of each block. Specify 25 numbers as with block_dims. If omitted, the value of network_alpha is used.": "指定每個區塊的 Alpha。與區塊維度一樣,指定 25 個數字。如果省略,則使用網路 Alpha 的值。", + "Extend LoRA to Conv2d 3x3 and specify the dim (rank) of each block. Specify 25 numbers.": "將 LoRA 擴展到 Conv2d 3x3,並指定每個區塊的維度 (Rank)。指定 25 個數字。", + "Specify the alpha of each block when expanding LoRA to Conv2d 3x3. Specify 25 numbers. 
If omitted, the value of conv_alpha is used.": "將 LoRA 擴展到 Conv2d 3x3 時,指定每個區塊的 Alpha。指定 25 個數字。如果省略,則使用卷積 Alpha 的值。", + "Weighted captions": "加權標記文字", + "About SDXL training": "關於 SDXL 訓練", + "Adaptive noise scale": "自適應噪聲比例", + "Additional parameters": "額外參數", + "Advanced options": "進階選項", + "Advanced parameters": "進階參數", + "Advanced": "進階", + "ashleykleynhans runpod docker builds": "ashleykleynhans runpod docker 建構", + "Automatically determine the dim(rank) from the weight file.": "從指定的權重檔案自動決定 dim(rank)。", + "Autosave": "自動儲存", + "Basic Captioning": "基本標記", + "Basic": "基本", + "Batch size": "批次大小", + "BLIP Captioning": "BLIP 標記", + "Bucket resolution steps": "分桶解析度間隔", + "Built with Gradio": "使用 Gradio 建構", + "Cache latents to disk": "暫存潛空間資料到硬碟", + "Cache latents": "暫存潛空間資料", + "Caption file extension": "標記檔案副檔名", + "Caption Extension": "標記檔案副檔名", + "(Optional) Extension for caption files. default: .caption": " (選填) 標記檔案的副檔名。預設:.caption", + "Caption text": "標記文字", + "caption": "標記", + "Change History": "變更記錄", + "Class prompt": "類 (Class) 提詞", + "Color augmentation": "顏色增強", + "Configuration file": "設定檔", + "constant_with_warmup": "常數並使用預熱 (constant_with_warmup)", + "constant": "常數 (constant)", + "Conv Dimension (Rank)": "卷積維度 (Rank)", + "Conv Dimension": "卷積維度", + "Convert model": "轉換模型", + "Copy info to Folders Tab": "複製資訊到資料夾頁籤", + "cosine_with_restarts": "餘弦函數並使用重啟", + "cosine": "餘弦函數 (cosine)", + "CrossAttention": "交叉注意力", + "DANGER!!! -- Insecure folder renaming -- DANGER!!!": "危險!!! -- 不安全的資料夾重新命名 -- 危險!!!", + "Dataset folder": "資料集資料夾", + "Dataset preparation": "資料集準備", + "Dataset Preparation": "資料集準備", + "Dataset repeats": "資料集重複數", + "Desired LoRA rank": "希望 LoRA 的維度 (Rank)", + "Destination training directory": "訓練結果資料夾", + "Device": "裝置", + "DIM from weights": "從權重讀取 DIM", + "Directory containing the images to caption": "含有需標記的圖片資料夾", + "Directory containing the training images": "訓練的圖片資料夾", + "Directory where formatted training and regularisation folders will be placed": "訓練與正規化資料夾將會被取代", + "Disable CP decomposition": "停用 CP 分解法", + "Do not copy other files in the input folder to the output folder": "不要將輸入資料夾中的其他檔案,複製到輸出資料夾", + "Do not copy other files": "不複製其他檔案", + "Don't upscale bucket resolution": "不要放大分桶解析度", + "Dreambooth/LoRA Dataset balancing": "Dreambooth/LoRA 資料集平衡", + "Dreambooth/LoRA Folder preparation": "Dreambooth/LoRA 準備資料夾", + "Dropout caption every n epochs": "在每 N 個週期 (Epoch) 丟棄標記", + "DyLoRA model": "DyLoRA 模型", + "Dynamic method": "動態方法", + "Dynamic parameter": "動態參數", + "e.g., \"by some artist\". Leave empty if you only want to add a prefix or postfix.": "例如,\"由某個藝術家創作\"。如果你只想加入前綴或後綴,請留空白。", + "e.g., \"by some artist\". Leave empty if you want to replace with nothing.": "例如,\"由某個藝術家創作\"。如果你想用空值取代,請留空白。", + "Enable buckets": "啟用資料桶", + "Enable for Hugging Face's stabilityai models": "啟用 HuggingFace 的 stabilityai 模型", + "Enter one sample prompt per line to generate multiple samples per cycle. Optional specifiers include: --w (width), --h (height), --d (seed), --l (cfg scale), --s (sampler steps) and --n (negative prompt). 
To modify sample prompts during training, edit the prompt.txt file in the samples directory.": "每行輸入一個提示詞來生成每個訓練週期的輸出範本。可以選擇指定的參數,包括:--w (寬度) ,--h (高度) ,--d (種子) ,--l (CFG 比例) ,--s (採樣器步驟) 和 --n (負面提示詞) 。如果要在訓練週期中修改提示詞,請修改範本目錄中的 prompt.txt 檔案。", + "Epoch": "週期 (Epoch)", + "Error": "錯誤", + "Example of the optimizer settings for Adafactor with the fixed learning rate:": "固定學習率 Adafactor 優化器的設定範例:", + "Extract DyLoRA": "提取 DyLoRA", + "Extract LoRA model": "提取 LoRA模型", + "Extract LoRA": "提取 LoRA", + "Extract LyCORIS LoCon": "提取 LyCORIS LoCon", + "Extract LyCORIS LoCON": "提取 LyCORIS LoCON", + "FileNotFoundError": "錯誤!檔案找不到", + "Find text": "尋找文字", + "Finetune": "微調", + "Finetuned model": "微調模型", + "Finetuning Resource Guide": "微調資源指南", + "fixed": "固定", + "Flip augmentation": "翻轉增強", + "float16": "float16", + "Folders": "資料夾", + "Full bf16 training (experimental)": "完整使用 bf16 訓練 (實驗性功能)", + "Full fp16 training (experimental)": "完整使用 fp16 訓練 (實驗性功能)", + "Generate caption files for the grouped images based on their folder name": "根據圖片的資料夾名稱生成標記文字檔案", + "Generate caption metadata": "生成標記文字後設資料", + "Generate Captions": "生成標記文字", + "Generate image buckets metadata": "生成圖像分桶後設資料", + "GIT Captioning": "GIT 標記文字", + "Gradient accumulate steps": "梯度累加步數", + "Gradient checkpointing": "梯度檢查點", + "Group size": "群組大小", + "Guidelines for SDXL Finetuning": "SDXL 微調指南", + "Guides": "指南", + "How to Create a LoRA Part 1: Dataset Preparation:": "如何建立 LoRA 第 1 部份:資料集準備:", + "If unchecked, tensorboard will be used as the default for logging.": "如果不勾選,Tensorboard 將會使用預設的紀錄方式。", + "If you have valuable resources to add, kindly create a PR on Github.": "如果你有有價值的資源要增加,請在 Github 上建立一個 PR。", + "Ignore Imported Tags Above Word Count": "略過高於字數數量的標記標籤", + "Image folder to caption": "要加入標記的圖片資料夾", + "Image folder": "圖片資料夾", + "Include images in subfolders as well": "包含子資料夾中的圖片", + "Include Subfolders": "包含子資料夾", + "Init word": "初始化標記文字", + "Input folder": "輸入資料夾", + "Install Location": "安裝位置", + "Installation": "安裝", + "Instance prompt": "實例 (Instance) 提示詞", + "Keep n tokens": "保留 N 個提示詞", + "Launching the GUI on Linux and macOS": "在 Linux/macOS 上啟動 GUI", + "Launching the GUI on Windows": "在 Windows 上啟動 GUI", + "Learning rate": "學習率", + "adafactor": "自適應學習 (adafactor)", + "linear": "線性 (linear)", + "Linux and macOS Upgrade": "Linux/macOS 升級", + "Linux and macOS": "Linux/macOS", + "Linux Pre-requirements": "Linux Pre-requirements", + "Load": "載入", + "Loading...": "載入中...", + "Local docker build": "Docker 建構", + "Logging folder": "記錄資料夾", + "LoRA model \"A\"": "LoRA 模型 \"A\"", + "LoRA model \"B\"": "LoRA 模型 \"B\"", + "LoRA model \"C\"": "LoRA 模型 \"C\"", + "LoRA model \"D\"": "LoRA 模型 \"D\"", + "LoRA model": "LoRA 模型", + "LoRA network weights": "LoRA 網路權重", + "LoRA": "LoRA", + "LR number of cycles": "學習率週期數", + "LR power": "學習率乘冪", + "LR scheduler extra arguments": "學習率調度器額外參數", + "LR Scheduler": "學習率調度器", + "LR warmup (% of steps)": "學習率預熱 (% 的步數)", + "LyCORIS model": "LyCORIS 模型", + "Macos is not great at the moment.": "目前 MacOS 支援並不是很好。", + "Manual Captioning": "手動標記文字", + "Manual installation": "手動安裝", + "Max bucket resolution": "最大資料儲存桶解析度", + "Max length": "最大長度", + "Max num workers for DataLoader": "資料工作載入的最大工作數量", + "Max resolution": "最大解析度", + "Max Timestep": "最大時序步數", + "Max Token Length": "最大標記長度", + "Max train epoch": "最大訓練週期 (Epoch) 數", + "Max train steps": "最大訓練總步數", + "Maximum bucket resolution": "最大資料儲存桶解析度", + "Maximum size in pixel a bucket can be (>= 64)": "最大資料儲存桶解析度可達 (>= 64) ", + "Memory efficient 
attention": "高效記憶體注意力區塊處理", + "Merge LoRA (SVD)": "合併 LoRA (SVD) ", + "Merge LoRA": "合併 LoRA", + "Merge LyCORIS": "合併 LyCORIS", + "Merge model": "合併模型", + "Merge precision": "合併精度", + "Merge ratio model A": "模型 A 合併比例", + "Merge ratio model B": "模型 B 合併比例", + "Merge ratio model C": "模型 C 合併比例", + "Merge ratio model D": "模型 D 合併比例", + "Min bucket resolution": "最小資料儲存桶解析度", + "Min length": "最小長度", + "Min SNR gamma": "最小 SNR gamma", + "Min Timestep": "最小時序步數", + "Minimum bucket resolution": "最小資料儲存桶解析度", + "Minimum size in pixel a bucket can be (>= 64)": "最小資料儲存桶解析度可達 (>= 64) ", + "Mixed precision": "混合精度", + "Mnimum difference": "最小化差異", + "Mode": "模式", + "Model A merge ratio (eg: 0.5 mean 50%)": "模型 A 合併比率 (例如:0.5 指的是 50%) ", + "Model B merge ratio (eg: 0.5 mean 50%)": "模型 B 合併比率 (例如:0.5 指的是 50%) ", + "Model C merge ratio (eg: 0.5 mean 50%)": "模型 C 合併比率 (例如:0.5 指的是 50%) ", + "Model D merge ratio (eg: 0.5 mean 50%)": "模型 D 合併比率 (例如:0.5 指的是 50%) ", + "Model output folder": "模型輸出資料夾", + "Model output name": "模型輸出資料夾", + "Model Quick Pick": "快速選擇模型", + "Module dropout": "模型捨棄", + "Network Dimension (Rank)": "網路維度 (Rank)", + "Network Dimension": "網路維度", + "Network dropout": "網路捨棄", + "No module called tkinter": "沒有名稱為 tkinter 的模組", + "No token padding": "不做提示詞填充", + "Noise offset type": "噪聲偏移類型", + "Noise offset": "噪聲偏移", + "Number of beams": "beam 的數量", + "Number of CPU threads per core": "每個 CPU 核心的線程數", + "Number of images to group together": "要一起分組的圖片數量", + "Number of updates steps to accumulate before performing a backward/update pass": "執行反向/更新傳遞之前,需要累積的更新步驟數", + "object template": "物件樣版", + "Only for SD v2 models. By scaling the loss according to the time step, the weights of global noise prediction and local noise prediction become the same, and the improvement of details may be expected.": "僅適用於 SD v2 模型。通過根據時序步數的縮放損失,整體的噪聲預測與局部的噪聲預測的權重會變得相同,以此希望能改善細節。", + "Open": "開啟", + "Optimizer extra arguments": "優化器額外參數", + "Optimizer": "優化器", + "Optional: CUDNN 8.6": "可選:CUDNN 8.6", + "Original": "原始", + "Output folder": "輸出資料夾", + "Output": "輸出", + "Overwrite existing captions in folder": "覆蓋資料夾中現有的提示詞", + "Page File Limit": "分頁檔案限制", + "PagedAdamW8bit": "PagedAdamW8bit", + "PagedLion8bit": "PagedLion8bit", + "Parameters": "參數", + "path for the checkpoint file to save...": "儲存 checkpoint 檔案路徑...", + "path for the LoRA file to save...": "儲存 LoRA 檔案路徑...", + "path for the new LoRA file to save...": "儲存新 LoRA 檔案路徑...", + "path to \"last-state\" state folder to resume from": "用來繼續訓練的 \"最後狀態\" 資料夾路徑", + "Path to the DyLoRA model to extract from": "要提取 DyLoRA 模型的路徑", + "Path to the finetuned model to extract": "要提取的微調模型的路徑", + "Path to the LoRA A model": "LoRA A 模型的路徑", + "Path to the LoRA B model": "LoRA B 模型的路徑", + "Path to the LoRA C model": "LoRA C 模型的路徑", + "Path to the LoRA D model": "LoRA D 模型的路徑", + "Path to the LoRA model to verify": "要驗證的 LoRA 模型的路徑", + "Path to the LoRA to resize": "要調整大小的 LoRA 的路徑", + "Path to the LyCORIS model": "LyCORIS 模型的路徑", + "path where to save the extracted LoRA model...": "儲存提取出的 LoRA 模型的路徑...", + "Persistent data loader": "持續資料載入器", + "polynomial": "多項式 (polynomial)", + "Postfix to add to BLIP caption": "添加到 BLIP 提示詞的後綴", + "Postfix to add to caption": "添加到提示詞的後綴", + "Pre-built Runpod template": "預先建構的 Runpod 樣版", + "Prefix to add to BLIP caption": "添加到 BLIP 提示詞的前綴", + "Prefix to add to caption": "添加到提示詞的前綴", + "Prepare training data": "準備訓練資料集", + "Print training command": "印出訓練指令", + "Prior loss weight": "正規化驗證損失權重", + "Prodigy": "Prodigy", + "Provide a SD file 
path IF you want to merge it with LoRA files": "如果你要合併 LoRA 檔案,請提供 SD 檔案資料夾路徑", + "Provide a SD file path that you want to merge with the LyCORIS file": "請提供你想要與 LyCORIS 檔案合併的 SD 檔案資料夾路徑", + "PyTorch 2 seems to use slightly less GPU memory than PyTorch 1.": "PyTorch 2 似乎使用的 GPU 記憶體比 PyTorch 1 略少。", + "Quick Tags": "快速標記", + "Random crop instead of center crop": "使用隨機裁切 (而非中心裁切)", + "Rank dropout": "維度捨棄", + "Rate of caption dropout": "提示詞捨棄比例", + "Recommended value of 0.5 when used": "若使用時,建議使用 0.5", + "Recommended value of 5 when used": "若使用時,建議使用 5", + "recommended values are 0.05 - 0.15": "若使用時,建議使用 0.05 - 0.15", + "Regularisation folder": "正規化資料夾", + "Regularisation images": "正規化圖片", + "Repeats": "重複", + "Replacement text": "取代文字", + "Required bitsandbytes >= 0.36.0": "需要 bitsandbytes >= 0.36.0", + "Resize LoRA": "調整 LoRA 尺寸", + "Resize model": "調整模型大小", + "Resolution (width,height)": "解析度 (寬度, 高度) ", + "Resource Contributions": "資源貢獻者", + "Resume from saved training state": "從儲存的狀態繼續訓練", + "Resume TI training": "恢復 TI 訓練", + "Runpod": "Runpod", + "Sample every n epochs": "每 N 個時期 (Epoch) 進行範本取樣", + "Sample every n steps": "每 N 個步數進行範本取樣", + "Sample image generation during training": "在訓練期間生成取樣圖片", + "Sample prompts": "取樣範本提示詞提示", + "Sample sampler": "取樣範本採樣器", + "Samples": "範本", + "Save dtype": "儲存數據類型", + "Save every N epochs": "每 N 個週期 (Epoch) 儲存", + "Save every N steps": "每 N 個步驟儲存", + "Save last N steps state": "儲存最後 N 個步驟的訓練狀態", + "Save last N steps": "儲存最後 N 個步驟", + "Save precision": "儲存精度", + "Save to": "儲存到", + "Save trained model as": "儲存訓練模型為", + "Save training state": "儲存訓練狀態", + "Save": "儲存", + "Scale v prediction loss": "縮放 v 預測損失 (v prediction loss)", + "Scale weight norms": "縮放權重標準", + "SD Model": "SD 模型", + "SDXL model": "SDXL 模型", + "Set the Max resolution to at least 1024x1024, as this is the standard resolution for SDXL. 
": "最大解析度最少設定為 1024x1024,因為這是 SDXL 的標準解析度。", + "Set the Max resolution to at least 1024x1024, as this is the standard resolution for SDXL.": "最大解析度最少設定為 1024x1024,因為這是 SDXL 的標準解析度。", + "Setup": "設定", + "SGDNesterov": "SGDNesterov", + "SGDNesterov8bit": "SGDNesterov8bit", + "Shuffle caption": "打亂提示詞", + "Source LoRA": "來源 LoRA", + "Source model type": "來源模型類型", + "Source model": "來源模型", + "Sparsity": "稀疏性", + "Stable Diffusion base model": "穩定擴散基礎模型", + "Stable Diffusion original model: ckpt or safetensors file": "穩定擴散原始模型:ckpt 或 safetensors 檔案", + "Start tensorboard": "啟動 TensorBoard", + "Start training": "開始訓練", + "Starting GUI Service": "啟動 GUI 服務", + "Stop tensorboard": "停止 TensorBoard", + "Stop text encoder training": "停止文字編碼器訓練", + "Stop training": "停止訓練", + "style template": "風格樣版", + "sv_fro": "sv_fro", + "Target model folder": "目標模型資料夾", + "Target model name": "目標模型名稱", + "Target model precision": "目標模型精度", + "Target model type": "目標模型類型", + "Template": "樣版", + "Text Encoder learning rate": "文字編碼器學習率", + "The fine-tuning can be done with 24GB GPU memory with the batch size of 1.": "微調可以再使用 1 個批次大小的情況下,在 24GB GPU 記憶體的狀態下完成。", + "The GUI allows you to set the training parameters and generate and run the required CLI commands to train the model.": "此 GUI 允許你設定訓練參數,並產生執行模型訓練所需要的 CLI 指令。", + "This guide is a resource compilation to facilitate the development of robust LoRA models.": "該指南是一個資源彙整,以促進強大LoRA模型的開發。", + "This section provide Dreambooth tools to help setup your dataset…": "這些選擇幫助設置自己的資料集", + "This section provide LoRA tools to help setup your dataset…": "本節提供 LoRA 工具以幫助您設置資料集...", + "This section provide Various Finetuning guides and information…": "本節提供各種微調指南和訊息", + "This utility allows quick captioning and tagging of images.": "此工具允許快速地為圖像添加標題和標籤。", + "This utility allows you to create simple caption files for each image in a folder.": "此工具允許您為資料夾中的每個圖片建立簡單的標籤文件。", + "This utility can be used to convert from one stable diffusion model format to another.": "該工具可用於將一個穩定擴散模型格式轉換為另一種格式", + "This utility can extract a DyLoRA network from a finetuned model.": "該工具可以從微調模型中提取 DyLoRA 網絡。", + "This utility can extract a LoRA network from a finetuned model.": "該工具可以從微調模型中提取 LoRA 網絡。", + "This utility can extract a LyCORIS LoCon network from a finetuned model.": "工具可以從微調模型中提取 LyCORIS LoCon 網絡。", + "This utility can merge a LyCORIS model into a SD checkpoint.": "該工具可以將 LyCORIS 模型合並到 SD 模型中。", + "This utility can merge two LoRA networks together into a new LoRA.": "該工具可以將兩個 LoRA 網絡合並為一個新的 LoRA。", + "This utility can merge up to 4 LoRA together or alternatively merge up to 4 LoRA into a SD checkpoint.": "該工具可以合並多達 4 個LoRA,或者選擇性地將多達 4 個 LoRA 合並到 SD 模型中。", + "This utility can resize a LoRA.": "該工具可以調整 LoRA 的大小。", + "This utility can verify a LoRA network to make sure it is properly trained.": "該工具可以驗證 LoRA 網絡以確保其得到適當的訓練。", + "This utility uses BLIP to caption files for each image in a folder.": "此工具使用 BLIP 為資料夾中的每張圖像添加標籤。", + "This utility will create the necessary folder structure for the training images and optional regularization images needed for the kohys_ss Dreambooth/LoRA method to function correctly.": "此工具將為 kohys_ss Dreambooth/LoRA 方法正常運行所需的訓練圖片和正規化圖片(可選)建立必要的資料夾結構。", + "This utility will ensure that each concept folder in the dataset folder is used equally during the training process of the dreambooth machine learning model, regardless of the number of images in each folder. 
It will do this by renaming the concept folders to indicate the number of times they should be repeated during training.": "此工具將確保在訓練 dreambooth 機器學習模型的過程中,資料集資料夾中的每個概念資料夾都將被平等地使用,無論每個資料夾中有多少圖像。它將通過重命名概念資料夾來指示在訓練期間應重覆使用它們的次數。", + "This utility will group images in a folder based on their aspect ratio.": "此工具將根據它們的縱橫比將文件夾中的圖像分組。", + "This utility will use GIT to caption files for each images in a folder.": "此工具使用 GIT 為資料夾中的每張圖像添加標籤。", + "This utility will use WD14 to caption files for each images in a folder.": "此工具使用 WD14 為資料夾中的每張圖像添加標籤。", + "Tips for SDXL training": "SDXL 訓練提示", + "Token string": "標記符號", + "Train a custom model using kohya finetune python code": "使用 kohya finetune Python 程式訓練自定義模型", + "Train a custom model using kohya train network LoRA python code…": "使用 kohya LoRA Python 程式訓練自定義模型", + "Train batch size": "訓練批次大小", + "Train Network": "訓練網絡", + "Train text encoder": "訓練文字編碼器", + "Train U-Net only.": "僅訓練 U-Net", + "Training config folder": "訓練設定資料夾", + "Training Image folder": "訓練圖片資料夾", + "Training images": "訓練圖片", + "Training steps per concept per epoch": "每個週期每個概念的訓練步驟", + "Training": "訓練", + "Troubleshooting": "故障排除", + "Tutorials": "教學", + "Unet learning rate": "UNet 學習率", + "UNet linear projection": "UNet 線性投影", + "Upgrading": "升级", + "Use --cache_text_encoder_outputs option and caching latents.": "使用 --cache_text_encoder_outputs 選項來暫存潛空間。", + "Use Adafactor optimizer. RMSprop 8bit or Adagrad 8bit may work. AdamW 8bit doesn’t seem to work.": "使用 Adafactor 優化器。 RMSprop 8bit 或 Adagrad 8bit 可能有效。 AdamW 8bit 好像無法運作。", + "Use beam search": "使用 beam 搜尋", + "Use gradient checkpointing.": "使用梯度檢查點。", + "Use latent files": "使用潛空間檔案", + "Use sparse biais": "使用使用稀疏偏差", + "Users can obtain and/or generate an api key in the their user settings on the website: https://wandb.ai/login": "使用者可以在以下網站的用戶設定中取得,或產生 API 金鑰:https://wandb.ai/login", + "V Pred like loss": "V 預測損失", + "Values greater than 0 will make the model more img2img focussed. 0 = image only": "大於 0 的數值會使模型更加聚焦在 img2img 上。0 表示僅關注於圖像生成", + "Values lower than 1000 will make the model more img2img focussed. 1000 = noise only": "小於 1000 的數值會使模型更加聚焦在 img2img 上。1000 表示僅使用噪聲生成圖片", + "Vectors": "向量", + "Verbose": "詳細輸出", + "WANDB API Key": "WANDB API 金鑰", + "WANDB Logging": "WANDB 紀錄", + "WARNING! The use of this utility on the wrong folder can lead to unexpected folder renaming!!!": "警告!在錯誤的資料夾使用此工具,可能會意外導致資料夾被重新命名!!!", + "WD14 Captioning": "WD14 提詞", + "Windows Upgrade": "Windows 升级", + "Train a custom model using kohya dreambooth python code…": "使用 kohya dreambooth Python 程式訓練自定義模型", + "Training comment": "訓練註解", + "Train a TI using kohya textual inversion python code…": "使用 kohya textual inversion Python 程式訓練 TI 模型", + "Train a custom model using kohya finetune python code…": "使用 kohya finetune Python 程式訓練自定義模型" +} \ No newline at end of file
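The bulk of this change is the new `localizations/zh-TW.json` resource, whose keys must match the GUI source strings exactly (including their typos and trailing spaces). Below is a minimal sketch, not part of this PR, of how such a file could be sanity-checked before committing; the script name and invocation are assumptions, and only standard-library calls are used.

```python
# check_localization.py -- hypothetical helper, not shipped with kohya_ss.
# Verifies that a localization file parses as JSON and reports duplicate or
# whitespace-variant keys, which usually indicate a mismatch with the GUI strings.
import json
import sys
from collections import Counter


def check_localization(path: str) -> int:
    """Return the number of problems found in the localization file at `path`."""
    with open(path, encoding="utf-8") as f:
        # object_pairs_hook exposes every key/value pair, including duplicate
        # keys that a plain json.load() would silently collapse.
        pairs = json.load(f, object_pairs_hook=lambda p: p)

    keys = [k for k, _ in pairs]
    problems = 0

    # Exact duplicate keys: only the last value would take effect at runtime.
    for key, count in Counter(keys).items():
        if count > 1:
            print(f"duplicate key ({count}x): {key!r}")
            problems += 1

    # Keys that differ from another key only by surrounding whitespace.
    stripped_counts = Counter(k.strip() for k in keys)
    for key in keys:
        if key != key.strip() and stripped_counts[key.strip()] > 1:
            print(f"whitespace variant of another key: {key!r}")
            problems += 1

    return problems


if __name__ == "__main__":
    sys.exit(1 if check_localization(sys.argv[1]) else 0)
```

Usage would be `python check_localization.py localizations/zh-TW.json`. Reported whitespace variants are not always errors: this file deliberately keeps both the trailing-space and plain forms of the "Set the Max resolution..." hint because both forms appear as source strings in the GUI.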