
Commit

Increase memory limit for rolling batch integration octocoder model (#…
xyang16 authored Nov 15, 2023
1 parent f0ea80b commit 245d770
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion tests/integration/llm/client.py
@@ -322,7 +322,7 @@ def get_model_name():
"stream_output": True
},
"octocoder": {
"max_memory_per_gpu": [20.0],
"max_memory_per_gpu": [23.0],
"batch_size": [1],
"seq_length": [64, 128, 256],
"stream_output": True
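
For context, here is a minimal sketch of how a spec entry like the one changed above might be consumed when the integration test verifies GPU memory usage. The check_memory_usage helper and the nvidia-smi parsing are illustrative assumptions, not the repository's actual code; only the shape of the octocoder spec is taken from the diff.

import subprocess

# Hypothetical copy of the spec entry from the diff; the values mirror the
# post-commit configuration, the surrounding helper is a sketch.
octocoder_spec = {
    "max_memory_per_gpu": [23.0],   # per-GPU memory ceiling in GiB after this commit
    "batch_size": [1],
    "seq_length": [64, 128, 256],
    "stream_output": True,
}


def check_memory_usage(spec):
    """Fail if any GPU exceeds the configured per-GPU memory limit (hypothetical helper)."""
    limit_gib = max(spec["max_memory_per_gpu"])
    # Query current memory usage per GPU (MiB) via nvidia-smi.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    )
    for line in out.strip().splitlines():
        used_gib = float(line) / 1024.0
        assert used_gib <= limit_gib, (
            f"GPU memory {used_gib:.1f} GiB exceeds limit of {limit_gib} GiB"
        )

Raising the limit from 20.0 to 23.0 GiB gives the octocoder model more headroom under a check of this kind, so the rolling batch integration test no longer fails on memory usage alone.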
