Commit 7a4d86d

update doc and readme

helloyongyang committed Sep 2, 2024
1 parent b7f8149 commit 7a4d86d
Showing 5 changed files with 8 additions and 2 deletions.
2 changes: 2 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -26,6 +26,8 @@

## News

- **Sep 3, 2024:** 🚀 We support OpenCompass to evaluate llmc models. Follow this [doc](https://llmc-en.readthedocs.io/en/latest/advanced/model_test_v2.html) and give it a try!

- **Aug 22, 2024:** 🔥We support many small language models, including the current SOTA [SmolLM](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966) (see [Supported Model List](#supported-model-list)). We also support downstream task evaluation through our modified [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) 🤗. Specifically, first use `save_trans` mode (see the `save` part in [Configuration](#configuration)) to save a weight-modified model. After obtaining the transformed model, you can directly evaluate the quantized model by referring to [run_lm_eval.sh](scripts/run_lm_eval.sh). More details can be found [here](https://llmc-en.readthedocs.io/en/latest/advanced/model_test.html).
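As a minimal sketch, the `save` section enabling `save_trans` might look like the fragment below; the exact key names and paths are assumptions here, so verify them against the [Configuration](#configuration) section of your llmc version:

```yaml
# Hypothetical llmc config fragment -- check keys against the Configuration section.
save:
    save_trans: True              # write out the weight-modified (transformed) model
    save_path: ./save_trans_model # directory where the transformed weights are stored
```

The transformed model saved this way can then be pointed at by `run_lm_eval.sh` for evaluation.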

- **Jul 23, 2024:** 🍺🍺🍺 We release a brand new version benchmark paper:
Expand Down
2 changes: 2 additions & 0 deletions README_ja.md
Original file line number Diff line number Diff line change
Expand Up @@ -26,6 +26,8 @@

## News

- **Sep 3, 2024:** 🚀 We now support accuracy evaluation with OpenCompass. See the [documentation](https://llmc-en.readthedocs.io/en/latest/advanced/model_test_v2.html) and give it a try!

- **Aug 22, 2024:** 🔥We support many small language models, including the current SOTA [SmolLM](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966) (see the [Supported Model List](#supported-model-list)). We also support downstream task evaluation through our modified [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) 🤗. Specifically, first use `save_trans` mode (see the `save` part in [Configuration](#設定)) to save a weight-modified model. After obtaining the transformed model, you can directly evaluate the quantized model by referring to [run_lm_eval.sh](scripts/run_lm_eval.sh). More details can be found [here](https://llmc-en.readthedocs.io/en/latest/advanced/model_test.html).

- **2024 年 7 月 23 日:** 🍺🍺🍺 新しいバージョンのベンチマーク ペーパーをリリースします:
Expand Down
2 changes: 2 additions & 0 deletions README_zh.md
Original file line number Diff line number Diff line change
Expand Up @@ -25,6 +25,8 @@

## News

- **Sep 3, 2024:** 🚀 We now support accuracy evaluation with OpenCompass. See the documentation [here](https://llmc-zhcn.readthedocs.io/en/latest/advanced/model_test_v2.html) and give it a try!

- **Aug 22, 2024:** 🔥We support many small language models, including the current SOTA [SmolLM](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966) (see the [Supported Model List](#supported-model-list)). We also support downstream task evaluation through our modified [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) 🤗. Specifically, first use `save_trans` mode (see the `save` part in [Configuration](#配置)) to save a weight-modified model. After obtaining the transformed model, you can directly evaluate the quantized model by referring to [run_lm_eval.sh](scripts/run_lm_eval.sh). More details can be found [here](https://llmc-zhcn.readthedocs.io/en/latest/advanced/model_test.html#id2).

- **2024 年 7 月 23 日:** 🍺🍺🍺 我们发布了全新版本的基准论文:
Expand Down
2 changes: 1 addition & 1 deletion docs/en/source/advanced/model_test_v2.md
Original file line number Diff line number Diff line change
Expand Up @@ -72,7 +72,7 @@ According to the opencompass [documentation](https://opencompass.readthedocs.io/

Finally, you can load the above configuration and perform model compression and accuracy testing just like running a regular llmc program.

- ## Note:
+ ## Multi-GPU parallel test

If the model is too large to fit on a single GPU for evaluation and multi-GPU evaluation is needed, we support pipeline parallelism when running opencompass.
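The idea behind pipeline parallelism can be sketched in a few lines: the model's layers are partitioned into consecutive stages, each placed on its own GPU, so no single device must hold the full model. The toy functions below are purely illustrative (they are not llmc or opencompass APIs, and device placement is omitted):

```python
# Illustrative sketch of pipeline parallelism, not the llmc/opencompass API.
# Layers are split into contiguous stages (one stage per GPU in practice);
# activations flow from one stage to the next.

def split_into_stages(layers, num_stages):
    """Partition a layer list into contiguous stages of near-equal size."""
    per_stage = -(-len(layers) // num_stages)  # ceiling division
    return [layers[i:i + per_stage] for i in range(0, len(layers), per_stage)]

def run_pipeline(x, stages):
    """Feed the input through each stage in order (device transfers omitted)."""
    for stage in stages:
        for layer in stage:
            x = layer(x)
    return x

layers = [lambda v, k=k: v + k for k in range(8)]  # eight toy "layers"
stages = split_into_stages(layers, num_stages=2)   # e.g. one stage per GPU
print(len(stages), run_pipeline(0, stages))        # 2 28
```

In a real run, each stage would live on a different GPU and only the activations would cross device boundaries, which is what makes evaluating a model larger than any single GPU feasible.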

Expand Down
2 changes: 1 addition & 1 deletion docs/zh_cn/source/advanced/model_test_v2.md
Original file line number Diff line number Diff line change
Expand Up @@ -72,7 +72,7 @@ pip install human-eval

Finally, you can load the above config and run model compression and accuracy testing just like a regular llmc program.

- ## Note:
+ ## Multi-GPU parallel test

If the model is too large to fit on a single GPU for evaluation and multi-GPU accuracy evaluation is needed, we support pipeline parallelism when running opencompass.

Expand Down
