Unified the max tokens for all LLMs
Summary:
Currently, we set max tokens for each API case by case, so define a shared constant to unify all API usages.

Also, bump up the max tokens for OPENAI from 100 to 1000.
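
In effect, the change replaces each hard-coded token limit with a single module-level constant that every provider wrapper reads. A minimal sketch of the pattern, assuming a simplified OpenAI-style call site (the real call sites are in the diff below):

    MAX_TOKENS = 1000  # single source of truth for every LLM wrapper

    def query(client, model: str, prompt: str) -> str:
        # Simplified stand-in for the real per-provider query methods;
        # previously this call hard-coded max_tokens=100.
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=MAX_TOKENS,
        )
        return response.choices[0].message.content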

Reviewed By: joshsaxe

Differential Revision: D51922250

fbshipit-source-id: 742e444cae58ea10327e338c9c5bc7db65b038dd
SimonWan authored and facebook-github-bot committed Dec 7, 2023
1 parent 304809e commit b4d179c
Showing 1 changed file with 5 additions and 3 deletions.
8 changes: 5 additions & 3 deletions CybersecurityBenchmarks/benchmark/llm.py
@@ -26,6 +26,8 @@

 NUM_LLM_RETRIES = 10

+MAX_TOKENS = 1000
+
 LOG: logging.Logger = logging.getLogger(__name__)

@@ -125,7 +127,7 @@ def query(self, prompt: str) -> str:
             messages=[
                 {"role": "user", "content": prompt},
             ],
-            max_tokens=100,
+            max_tokens=MAX_TOKENS,
         )
         logging.getLogger().setLevel(level)
         return response.choices[0].message.content
@@ -149,7 +151,7 @@ def query(self, prompt: str) -> str:
             model_name=self.model,
             temperature=0.75,
             top_p=1,
-            model_kwargs={"max_new_tokens": 1000},
+            model_kwargs={"max_new_tokens": MAX_TOKENS},
         )
         response = llm(prompt)
         return "".join(response)
@@ -175,7 +177,7 @@ def query(self, prompt: str) -> str:
             model_name=self.model,
             temperature=0.75,
             top_p=1,
-            max_tokens=1000,
+            max_tokens=MAX_TOKENS,
             together_api_key=self.api_key,
         )
         response = llm(prompt)
