Scaffolding for gpt-fast example
irfansharif committed Dec 5, 2023
1 parent bb2bc44 commit 073e470
Showing 11 changed files with 2,390 additions and 5 deletions.
459 changes: 459 additions & 0 deletions 06_gpu_and_ml/gpt-fast/GPTQ.py

Large diffs are not rendered by default.

11 changes: 11 additions & 0 deletions 06_gpu_and_ml/gpt-fast/LICENSE
@@ -0,0 +1,11 @@
Copyright 2023 Meta

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
87 changes: 87 additions & 0 deletions 06_gpu_and_ml/gpt-fast/README.md
@@ -0,0 +1,87 @@
# gpt-fast on Modal

This is a demo of [gpt-fast](https://github.com/pytorch-labs/gpt-fast) running
on [Modal](https://modal.com). It demonstrates how to use speculative sampling,
quantized models, and PyTorch compilation to achieve upwards of 125 tokens/s on
7B models running on individual A100 80GB GPUs. It's a multi-file Modal app
that integrates into an existing codebase (files other than `modal.py` were
mostly taken as-is from `pytorch-labs/gpt-fast`), makes use of
container-lifecycle primitives, and is also able to invoke already-deployed
functions through code.
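Speculative sampling uses a small draft model to propose tokens cheaply, which the large target model then verifies in a single batched pass. The accept/reject rule at its core can be sketched in plain Python (toy distributions over a tiny vocabulary; the names and simplifications here are ours, not the repo's implementation):

```python
import random

def speculative_step(target_p, draft_p, draft_token, rng):
    """One accept/reject step of speculative sampling.

    Accept the draft token with probability min(1, p/q); on rejection,
    resample from the residual distribution max(0, p - q), renormalized.
    """
    p, q = target_p[draft_token], draft_p[draft_token]
    if rng.random() < min(1.0, p / q):
        return draft_token
    # Rejected: resample from the renormalized residual max(0, p - q).
    residual = {t: max(0.0, target_p[t] - draft_p[t]) for t in target_p}
    total = sum(residual.values())
    r = rng.random() * total
    for t, w in residual.items():
        r -= w
        if r <= 0:
            return t
    return t
```

Resampling from the residual on rejection is what makes the scheme exact: the overall output distribution matches the target model's, so the draft model only buys speed, not quality.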

TODO:
- [ ] Make use of GPU checkpointing to avoid long cold starts.
- [ ] Doc-ify modal.py, publish to website.
- [ ] Make use of draft models for speculative sampling.
- [ ] Run them on secondary GPUs?
- [ ] Make use of tensor parallelism.

To run one-off inference:
```
۩ modal run gpt-fast.modal::main --prompt "Implement fibonacci in python" \
    --no-compile-model
...
Loading model weights ...
Using int8 weight-only quantization!
Loading model weights took 11.08 seconds
Starting inference for prompt = 'Implement fibonacci in python'
with memoization.
The time complexity should be O(n)
The space complexity should be O(n)
"""
def fibonacci(n, mem=dict()):
if n == 0:
return 0
if n == 1:
return 1
if n in mem:
return mem[n]
Time for inference 1: 13.24 sec total, 7.55 tokens/sec
Bandwidth achieved: 51.91 GB/s
...
```
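The `Using int8 weight-only quantization!` line above means the weights are stored as int8 alongside a floating-point scale and dequantized on the fly, roughly halving memory traffic versus fp16. A toy symmetric per-tensor sketch of the idea (illustrative only; the repo's `GPTQ.py` implements a more sophisticated scheme):

```python
def quantize_int8(weights):
    # Symmetric quantization: w ~= scale * q with q an integer in [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    # Recover approximate floating-point weights at inference time.
    return [x * scale for x in q]
```

Since decoding on a single GPU is memory-bandwidth bound, shrinking each weight from 2 bytes to 1 translates fairly directly into tokens/sec.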

Compile the model for faster inference, at the cost of much longer cold-starts:
```
۩ modal run gpt-fast.modal::main --prompt "Implement fibonacci in python" \
--compile-model
...
Running warmup inference ...
Model compilation time: 298.49 seconds
Starting inference for prompt = 'Implement fibonacci in python'
...
Time for inference 1: 0.81 sec total, 123.54 tokens/sec
Bandwidth achieved: 856.83 GB/s
```
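The `Bandwidth achieved` figure is effectively (checkpoint bytes read per token) times (tokens/sec), since each decode step streams essentially the whole model through the GPU's memory system. Assuming a roughly 6.9 GB int8 checkpoint for a 7B model (our estimate, not a number from the logs), the reported figures are consistent:

```python
def achieved_bandwidth_gb_per_s(tokens_per_sec, checkpoint_gb):
    # Each generated token reads (roughly) every weight once, so the
    # effective memory bandwidth is checkpoint size times token rate.
    return tokens_per_sec * checkpoint_gb

# ~6.9 GB int8 7B checkpoint (assumed) at 123.54 tokens/sec lands near
# the logged 856.83 GB/s.
print(achieved_bandwidth_gb_per_s(123.54, 6.935))
```

This is also why quantization and compilation compound: one shrinks the bytes per token, the other removes per-step overhead so the GPU stays bandwidth-bound.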

```
۩ modal run gpt-fast.modal::main --help
Usage: modal run gpt-fast.modal::main [OPTIONS]
Options:
--interactive / --no-interactive
--top-k INTEGER
--temperature FLOAT
--speculate-k INTEGER
--max-new-tokens INTEGER
--num-samples INTEGER
--prompt TEXT
--use-speculative-sampling / --no-use-speculative-sampling
--compile-prefill / --no-compile-prefill
--compile-model / --no-compile-model
--use-base-model / --no-use-base-model
--lookup-existing / --no-lookup-existing
--help Show this message and exit.
```

Deploy the model and run inference against a container that's already compiled
the PyTorch model:
```
۩ modal deploy gpt-fast.modal
۩ modal run gpt-fast.modal::main --lookup-existing --prompt "Implement fibonacci in python"
...
Time for inference 1: 0.89 sec total, 111.77 tokens/sec
Bandwidth achieved: 775.16 GB/s
```