From 67c7c07e7953d28b676d7fd2d6643099592cbd0b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?M=2E=20Yusuf=20Sar=C4=B1g=C3=B6z?=
Date: Thu, 12 Oct 2023 07:57:36 +0300
Subject: [PATCH 1/2] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index bc55ad311..3a2d05675 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@
 
 ## Release
 
+- [10/12] LLaVA is now supported in [llama.cpp](https://github.com/ggerganov/llama.cpp/pull/3436) with 4-bit / 5-bit quantization!
 - [10/11] The training data and scripts of LLaVA-1.5 are released [here](https://github.com/haotian-liu/LLaVA#train), and evaluation scripts are released [here](https://github.com/haotian-liu/LLaVA/blob/main/docs/Evaluation.md)!
 - [10/5] 🔥 LLaVA-1.5 is out! Achieving SoTA on 11 benchmarks with just simple modifications to the original LLaVA, it utilizes all public data, completes training in ~1 day on a single 8-A100 node, and surpasses methods like Qwen-VL-Chat that use billion-scale data. Check out the [technical report](https://arxiv.org/abs/2310.03744), and explore the [demo](https://llava.hliu.cc/)! Models are available in the [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md).
 - [9/26] LLaVA is enhanced with reinforcement learning from human feedback (RLHF) to improve fact grounding and reduce hallucination. Check out the new SFT and RLHF checkpoints at the project [[LLaVA-RLHF]](https://llava-rlhf.github.io/)

From f4adcdda3814ff39726fca880cd172970b91e304 Mon Sep 17 00:00:00 2001
From: Subash-Lamichhane <077bct081.subash@pcampus.edu.np>
Date: Thu, 12 Oct 2023 21:54:53 +0545
Subject: [PATCH 2/2] fixed typo in docs/LLaVA_from_LLaMA2

---
 docs/LLaVA_from_LLaMA2.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/LLaVA_from_LLaMA2.md b/docs/LLaVA_from_LLaMA2.md
index b4163668a..214754bf2 100644
--- a/docs/LLaVA_from_LLaMA2.md
+++ b/docs/LLaVA_from_LLaMA2.md
@@ -4,7 +4,7 @@
 
 :llama: **-Introduction-** [Llama 2 is an open-source LLM released by Meta AI](https://about.fb.com/news/2023/07/llama-2/) on July 18, 2023. Compared with its earlier version, [Llama 1](https://ai.meta.com/blog/large-language-model-llama-meta-ai/), Llama 2 offers ***stronger language performance***, a ***longer context window***, and, importantly, is ***commercially usable***! While Llama 2 is changing the LLM market landscape in the language space, its multimodal ability remains unknown. We quickly developed the LLaVA variant based on the latest Llama 2 checkpoints and are releasing it to the community for public use.
 
-You need to apply for and download the lastest Llama 2 checkpoints to start your own training (apply [here](https://ai.meta.com/resources/models-and-libraries/llama-downloads/))
+You need to apply for and download the latest Llama 2 checkpoints to start your own training (apply [here](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)).
 
 ## Training
 