
Code base for python script requires migration to openai 1.X #15

Open
Lukaspor opened this issue Mar 9, 2024 · 1 comment

Comments


Lukaspor commented Mar 9, 2024

After asking a question, the following error occurred:

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

To remediate this, I used a few tools to migrate the code base in glados.py, which resulted in the following changes:

./glados.py
     import signal
     import subprocess
     import whisper
    -import openai
    +from openai import OpenAI
    +
    +client = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))
     import random
     import torch
     from utils.tools import prepare_text
    -openai.api_key = os.environ.get('OPENAI_API_KEY')
     def speech_to_text(stt_model):
         # if recording stops too early or late mess with vad_mode sample_rate and silence_seconds
         vad_mode = 3
             print(conversation_history)

         if use_gpt:
    -        full_response = openai.ChatCompletion.create(
    -            model="gpt-3.5-turbo",
    -            messages=[ {"role": "system", "content": conversation_history} ],
    -            temperature=0.7,
    -            max_tokens=1024,
    -            top_p=1,
    -            )
    +        full_response = client.chat.completions.create(
    +            model="gpt-3.5-turbo",
    +            messages=[{"role": "system", "content": conversation_history}],
    +            temperature=0.7,
    +            max_tokens=1024,
    +            top_p=1,
    +        )
             # Extract the response text from the API response
             message = full_response.choices[0].message.content.strip()
         else:

Just placing this here for reference in case anyone else runs into this.
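For anyone who wants to try the new interface outside of glados.py first, here's a minimal standalone sketch of the migrated call. It assumes `openai>=1.0.0` is installed and `OPENAI_API_KEY` is exported; the `conversation_history` string is just a placeholder, not the prompt the script actually builds.

```python
import os

from openai import OpenAI

# The 1.x SDK reads OPENAI_API_KEY from the environment by default,
# but passing it explicitly mirrors the old openai.api_key pattern.
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

conversation_history = "You are GLaDOS. Answer the user's question."  # placeholder

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": conversation_history}],
    temperature=0.7,
    max_tokens=1024,
    top_p=1,
)

# 1.x returns typed objects instead of dicts, so use attribute access.
message = response.choices[0].message.content.strip()
print(message)
```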


Edzero commented Mar 31, 2024

Your posts have been very helpful, thanks!
