diff --git a/README.md b/README.md
index e70c97e440..5f47045813 100644
--- a/README.md
+++ b/README.md
@@ -69,6 +69,8 @@ We also provide a some command line based examples using state of the art models
   performance larger than all publicly available 13b models as of 2023-09-28.
 - [StarCoder](./candle-examples/examples/bigcode/): LLM specialized to code generation.
 - [Replit-code-v1.5](./candle-examples/examples/replit-code/): a 3.3b LLM specialized for code completion.
+- [Yi-6B / Yi-34B](./candle-examples/examples/yi/): two bilingual
+  (English/Chinese) general LLMs with 6b and 34b parameters.
 - [Quantized LLaMA](./candle-examples/examples/quantized/): quantized version of the LLaMA model using the same
   quantization techniques as [llama.cpp](https://github.com/ggerganov/llama.cpp).
@@ -174,6 +176,7 @@ If you have an addition to this list, please submit a pull request.
     - StableLM-3B-4E1T.
     - Replit-code-v1.5-3B.
     - Bert.
+    - Yi-6B and Yi-34B.
 - Text to text.
     - T5 and its variants: FlanT5, UL2, MADLAD400 (translation), CoEdit (Grammar correction).
     - Marian MT (Machine Translation).