# Finetune LLaVA on Custom Datasets

## Dataset Format

Convert your data to a JSON file containing a list of all samples. Each sample should contain `id` (a unique identifier), `image` (the path to the image), and `conversations` (the conversation data between the human and the AI).

A sample JSON for finetuning LLaVA to generate tag-style captions for Stable Diffusion:
```json
[
    {
        "id": "997bb945-628d-4724-b370-b84de974a19f",
        "image": "part-000001/997bb945-628d-4724-b370-b84de974a19f.jpg",
        "conversations": [
            {
                "from": "human",
                "value": "<image>\nWrite a prompt for Stable Diffusion to generate this image."
            },
            {
                "from": "gpt",
                "value": "a beautiful painting of chernobyl by nekro, pascal blanche, john harris, greg rutkowski, sin jong hun, moebius, simon stalenhag. in style of cg art. ray tracing. cel shading. hyper detailed. realistic. ue 5. maya. octane render. "
            }
        ]
    },
    ...
]
```
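Note that the `...` above is an elision standing in for further samples; a real dataset file must be strictly valid JSON (in particular, no trailing commas). A minimal sanity check, assuming `jq` is installed and using a placeholder path for your dataset file:

```bash
# Verify the file parses, is a JSON array, and every sample has the required keys.
jq -e 'type == "array" and all(.[]; has("id") and has("image") and has("conversations"))' \
    ./playground/data/my_dataset.json
```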
## Command

If you have limited task-specific data, we recommend finetuning from LLaVA checkpoints with LoRA, following this [script](https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune_task_lora.sh).

You may need to adjust the hyperparameters to fit your specific dataset and your hardware constraints. The reference scripts below cover the standard configurations; an example of adapting them to a custom dataset follows the scripts.
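## Reference Scripts

The first script performs LoRA finetuning of the visual instruction tuning stage, starting from the `lmsys/vicuna-13b-v1.5` language model together with the multimodal projector (`mm_projector.bin`) produced by the pretraining stage: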
```bash
#!/bin/bash

deepspeed llava/train/train_mem.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path lmsys/vicuna-13b-v1.5 \
    --version v1 \
    --data_path ./playground/data/llava_v1_5_mix665k.json \
    --image_folder ./playground/data \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --pretrain_mm_mlp_adapter ./checkpoints/llava-v1.5-13b-pretrain/mm_projector.bin \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-lora \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
```
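The second script performs full-weight task finetuning from the released `liuhaotian/llava-v1.5-13b` checkpoint. Because all model weights are updated, it uses a lower learning rate (2e-5) than the LoRA scripts (2e-4):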
```bash
#!/bin/bash

deepspeed llava/train/train_mem.py \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path liuhaotian/llava-v1.5-13b \
    --version v1 \
    --data_path ./playground/data/llava_v1_5_mix665k.json \
    --image_folder ./playground/data \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-task \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
```
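The third script is the LoRA task-finetuning script linked above (`finetune_task_lora.sh`), the recommended starting point when task-specific data is limited. It applies LoRA on top of the `liuhaotian/llava-v1.5-13b` checkpoint: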
```bash
#!/bin/bash

deepspeed llava/train/train_mem.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path liuhaotian/llava-v1.5-13b \
    --version v1 \
    --data_path ./playground/data/llava_v1_5_mix665k.json \
    --image_folder ./playground/data \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-task-lora \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
```
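
## Adapting the Scripts

To finetune on your own data, point `--data_path` at your JSON file, point `--image_folder` at the directory that the `image` paths are relative to, and choose a fresh `--output_dir`. If the default per-device batch size does not fit in GPU memory, you can trade it against gradient accumulation: the effective batch size is `per_device_train_batch_size * gradient_accumulation_steps * num_gpus`. A minimal sketch of the task LoRA script above, where the dataset paths, output directory, and batch-size values are placeholders rather than recommendations:

```bash
#!/bin/bash
# Illustrative adaptation of finetune_task_lora.sh for a custom dataset.
# Halves the per-device batch size and doubles gradient accumulation, so the
# effective batch size per GPU stays at 16 (8 x 2).
deepspeed llava/train/train_mem.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path liuhaotian/llava-v1.5-13b \
    --version v1 \
    --data_path ./playground/data/my_dataset.json \
    --image_folder ./playground/data \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-my-task-lora \
    --num_train_epochs 1 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
```

After LoRA training, the adapter weights in the output directory are usually merged back into the base model for inference. The upstream repository provides a helper for this; the invocation below assumes the `scripts/merge_lora_weights.py` script and flag names shipped with LLaVA at the time of writing, so check your checkout if the path differs:

```bash
# Merge the trained LoRA adapter into the base checkpoint (paths are placeholders).
python scripts/merge_lora_weights.py \
    --model-path ./checkpoints/llava-v1.5-13b-my-task-lora \
    --model-base liuhaotian/llava-v1.5-13b \
    --save-model-path ./checkpoints/llava-v1.5-13b-my-task-merged
```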