mamba.c


Chinese | Japanese | Russian

Inference of Mamba models in pure C

Inspired by and using code from llama2.c

This implements only the recurrent mode of the Mamba SSM.

You can compare it with the related PyTorch implementation.

There is no support for batches. The code is minimal for learning purposes.

Even so, it is faster than PyTorch on CPU!

Fast Start

python3 tokenizer.py
python3 export.py state-spaces/mamba-130m model.bin
make fast
./mamba model.bin -n 20 -i "Customer Support should" -t 0.0

Python is only used to export the tokenizer and the model to a simpler format (this requires transformers and pytorch).

You can select another model in the export step.

Models

You can use these models stored on HuggingFace:

  • state-spaces/mamba-130m
  • state-spaces/mamba-370m
  • state-spaces/mamba-790m
  • state-spaces/mamba-1.4b
  • state-spaces/mamba-2.8b
  • state-spaces/mamba-2.8b-slimpj

You can specify the model name as an argument to the export.py script.

Note that the export script will download the model (if it's not already downloaded) to the Hugging Face cache directory.

Optionally, you can also specify the path to a directory containing the model files, if you downloaded them manually. Example:

wget https://huggingface.co/state-spaces/mamba-130m/resolve/main/config.json?download=true -O config.json
wget https://huggingface.co/state-spaces/mamba-130m/resolve/main/pytorch_model.bin?download=true -O pytorch_model.bin
python3 export.py . model.bin

Internal State

Since it is a recurrent model, it is possible to save the internal state and return to that state later.

To get a copy of the internal state:

  int state_size;
  char* state = get_internal_state(mamba, &state_size);

To set the internal state:

  set_internal_state(mamba, state, state_size);
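These two functions follow a plain save/restore pattern over an opaque byte buffer. A minimal self-contained sketch of that pattern, with a hypothetical `ToyModel` standing in for the real Mamba struct:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical model holding a small recurrent state buffer;
 * the real struct is much larger, this only illustrates the
 * save/restore pattern. */
typedef struct {
    float state[4];
} ToyModel;

/* Return a heap-allocated copy of the internal state and its
 * size in bytes; the caller frees it. */
char* toy_get_state(ToyModel *m, int *size) {
    *size = (int)sizeof m->state;
    char *buf = malloc((size_t)*size);
    memcpy(buf, m->state, (size_t)*size);
    return buf;
}

/* Overwrite the internal state from a previously saved copy. */
void toy_set_state(ToyModel *m, const char *buf, int size) {
    if (size == (int)sizeof m->state)
        memcpy(m->state, buf, (size_t)size);
}
```

Saving the state before generating, then restoring it, lets you branch several continuations from the same prompt without re-running the prompt through the model.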

Branches

The code is available in three versions, each on a separate branch:

  • learning - very basic
  • fused - fuses the basic functions into bigger ones (you can compare it with the learning branch)
  • cuda - simple GPU implementation, easy to understand

Notes

The tokenizer may need some more work for special characters.

Feel free to contribute and send a PR.

License

MIT
