
LBANN: Livermore Big Artificial Neural Network Toolkit

The Livermore Big Artificial Neural Network toolkit (LBANN) is an open-source, HPC-centric, deep learning training framework that is optimized to compose multiple levels of parallelism.

LBANN provides model-parallel acceleration through domain decomposition to optimize for strong scaling of network training. It also allows model parallelism to be composed with both data parallelism and ensemble training methods for training large neural networks with massive amounts of data. LBANN is able to take advantage of tightly-coupled accelerators, low-latency high-bandwidth networking, and high-bandwidth parallel file systems.

LLaMa Repository

A distributed implementation of the LLaMa 3.x model, optimized to allow both pipeline-parallel and tensor-parallel inference execution using PyTorch.
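
The repository's own launch and sharding code is not reproduced here; the snippet below is only a minimal sketch, assuming a Megatron-style rank layout, of how tensor-parallel and pipeline-parallel process groups can be composed with `torch.distributed`. The function name `build_parallel_groups` and the group sizes are illustrative assumptions, not this repository's API.

```python
# Minimal sketch (assumption, not the repository's launcher) of composing
# tensor-parallel and pipeline-parallel process groups with torch.distributed.
# Intended to be launched with torchrun so MASTER_ADDR/RANK/WORLD_SIZE are set.
import torch
import torch.distributed as dist


def build_parallel_groups(tensor_parallel_size: int, pipeline_parallel_size: int):
    """Partition the world into tensor-parallel and pipeline-parallel groups."""
    dist.init_process_group(backend="nccl")
    world_size = dist.get_world_size()
    rank = dist.get_rank()
    assert world_size == tensor_parallel_size * pipeline_parallel_size

    # Ranks that shard the same layer's weights form a tensor-parallel group.
    tp_group = None
    for start in range(0, world_size, tensor_parallel_size):
        ranks = list(range(start, start + tensor_parallel_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            tp_group = group

    # Ranks that hold successive pipeline stages form a pipeline-parallel group.
    pp_group = None
    for offset in range(tensor_parallel_size):
        ranks = list(range(offset, world_size, tensor_parallel_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            pp_group = group

    return tp_group, pp_group


if __name__ == "__main__":
    # Example: 4 GPUs split into 2-way tensor parallelism x 2 pipeline stages.
    tp_group, pp_group = build_parallel_groups(tensor_parallel_size=2,
                                               pipeline_parallel_size=2)
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
```

With this layout, each model layer's weights are sharded across the ranks in a tensor-parallel group, while consecutive blocks of layers are assigned to successive ranks in a pipeline-parallel group.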

Publications

A list of publications, presentations, and posters is available here.

Reporting issues

Issues, questions, and bugs can be raised on the GitHub issue tracker.
