
Guardrails support of AzureOpenAI with openai>1.0.0 [bug] #576

Closed
shima-khoshraftar opened this issue Feb 9, 2024 · 12 comments
Labels: bug (Something isn't working), Stale

Comments

@shima-khoshraftar

shima-khoshraftar commented Feb 9, 2024

Describe the bug
Does Guardrails-ai support AzureOpenAI with the openai library >1.0, which has a different LLM call API from openai==0.28 (openai.chat.completions.create instead of openai.ChatCompletion.create)?

To Reproduce
import openai

openai.api_key = api_key
openai.azure_endpoint = azure_endpoint
openai.api_type = 'azure'
openai.api_version = api_version

raw_llm_response, validated_response, *rest = guard(
    # openai.ChatCompletion.create,  # worked with openai==0.28
    openai.chat.completions.create,
    prompt_params={"document": content[:6000]},
    # engine="text-davinci-003",
    model='gpt-35-turbo-1106',
    max_tokens=2048,
    temperature=0.3,
)

Expected behavior
I expect it to call the LLM just as it did with openai==0.28 (openai.ChatCompletion.create), but I get this error instead:

TypeError: create() takes 1 argument(s) but 2 were given

I realized that this was fixed for OpenAI by setting the api_key through os.environ. But how can I do it for AzureOpenAI?

Library versions:
guardrails-ai 0.3.2
openai 1.12.0

Thanks.

@shima-khoshraftar shima-khoshraftar added the bug Something isn't working label Feb 9, 2024
@zsimjee
Collaborator

zsimjee commented Feb 9, 2024

This looks like too many arguments being passed to the create call. Can you try removing temperature and max_tokens and see if it works?

@shima-khoshraftar
Author

shima-khoshraftar commented Feb 12, 2024

Thanks for your reply. I removed temperature and max_tokens but still get the same error. It is similar to these issues previously raised for openai>1.x:

#514 (comment)
#504

but I am wondering what the solution is for Azure OpenAI, where these parameters must be set in order to call openai.chat.completions:
openai.api_type = "azure"
openai.api_version = "2023-05-15"
openai.api_base = os.environ.get("AZURE_OPENAI_API_BASE")
openai.api_key = os.environ.get("AZURE_OPENAI_API_KEY")

according to https://www.guardrailsai.com/docs/integrations/azure_openai.
I even set these parameters as environment variables, but still get the same error:
os.environ["OPENAI_API_TYPE"] = 'azure'
os.environ["OPENAI_API_VERSION"] = api_version
os.environ["AZURE_OPENAI_API_KEY"] = api_key
os.environ["AZURE_OPENAI_ENDPOINT"] = api_base

Thanks.

@Aman0509

@shima-khoshraftar - Facing a similar issue. Could you please tell me which version of guardrails currently works with openai v0.28?

@shima-khoshraftar
Author

@Aman0509 The latest release of guardrails works with Azure OpenAI on openai==0.28.

@vijayoct27

Is there any update on this issue? I am also trying to use guardrails with Azure OpenAI and openai>1.0.0.

@thekaranacharya
Contributor

thekaranacharya commented Mar 15, 2024

You can now use AzureOpenAI with guardrails using litellm. Please follow this example (just substitute with AzureOpenAI) instead. Please let us know here if there are any issues.
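A minimal sketch of that suggestion, assuming the guard object from the earlier reproduction; the deployment name, credentials, and exact guard signature are placeholders and may differ by Guardrails version:

```python
import os

# Placeholder Azure settings; litellm reads these env vars at call time.
os.environ["AZURE_API_KEY"] = "your-azure-key"
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"

def run_guarded(guard, document: str):
    # Imported inside the function so the snippet parses without litellm installed.
    from litellm import completion

    # Pass litellm's completion as the LLM callable; litellm addresses
    # Azure deployments as "azure/<deployment-name>".
    return guard(
        completion,
        model="azure/gpt-35-turbo-1106",
        prompt_params={"document": document},
    )
```

Calling run_guarded requires a configured Guardrails guard and a reachable Azure deployment, so it is shown here unexecuted.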

@vijayoct27

> Please follow this example (just substitute with AzureOpenAI) instead.

Sorry for the confusion, what example are you referring to here?

@thekaranacharya
Contributor

thekaranacharya commented Mar 15, 2024

My bad. I thought I added the link! Just updated the comment.

@shima-khoshraftar
Author

Thanks for letting us know about the update and sending the link. However, the link does not seem to work: it cannot find examples/litellm_example.ipynb and throws a page-not-found error. Could you please update the link? Thanks.

@jonathanbouchet

@shima-khoshraftar I'm also looking at this example since I would like to use guardrails-ai with openai > 1.
It looks like @thekaranacharya is referring to the litellm package (link).
They have a quick tutorial here:

from litellm import completion
import os

# set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
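For the Azure case, the same quickstart would look roughly like this. The deployment name and credentials are placeholders; litellm's documented convention is to prefix the deployment with azure/ and read AZURE_API_KEY, AZURE_API_BASE, and AZURE_API_VERSION from the environment:

```python
import os

# Placeholder credentials for an Azure OpenAI resource.
os.environ["AZURE_API_KEY"] = "your-azure-key"
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"

messages = [{"content": "Hello, how are you?", "role": "user"}]

def azure_call():
    # Imported lazily so the setup above runs even without litellm installed.
    from litellm import completion

    # "azure/<deployment-name>" routes the request to the Azure deployment.
    return completion(model="azure/gpt-35-turbo-1106", messages=messages)
```

Executing azure_call needs network access to a real Azure deployment, so only the setup runs here.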

github-actions bot commented Aug 22, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.

@github-actions github-actions bot added the Stale label Aug 22, 2024

github-actions bot commented Sep 5, 2024

This issue was closed because it has been stalled for 14 days with no activity.

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 5, 2024
6 participants