
Support multiple function calling #118

Closed
rgbkrk opened this issue Nov 30, 2023 · 7 comments · Fixed by #122

Comments

@rgbkrk (Owner) commented Nov 30, 2023

As requested in MeetKai/functionary#59 (comment)

We are training a new functionary model with the ability to call multiple functions in parallel; it is similar to OpenAI's parallel function calling. Hope that ChatLab will support this soon?
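
For context, "parallel function calling" means a single assistant message whose tool_calls list carries more than one call. A rough sketch of the shape, following the OpenAI chat completions schema (the ids and the repeated weather function are illustrative):

    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",  # illustrative id
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    # arguments arrive as a JSON-encoded string, not a dict
                    "arguments": '{"location": "San Francisco", "unit": "celsius"}',
                },
            },
            {
                "id": "call_def456",
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "arguments": '{"location": "Tokyo", "unit": "celsius"}',
                },
            },
        ],
    }

Because arguments is a JSON string, each call has to be decoded with json.loads before dispatch, which is exactly what the example below does.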

@rgbkrk (Owner, Author) commented Nov 30, 2023

Example from OpenAI's Parallel function calling guide:

    import json

    from openai import OpenAI

    client = OpenAI()

    # `messages`, `tools`, and `get_current_weather` are assumed to be defined
    # beforehand, as in the guide this snippet comes from.
    def run_conversation():
        # Step 1: send the conversation and available tools to the model
        response = client.chat.completions.create(
            model="gpt-3.5-turbo-1106",
            messages=messages,
            tools=tools,
            tool_choice="auto",  # auto is default, but we'll be explicit
        )
        response_message = response.choices[0].message
        tool_calls = response_message.tool_calls
        # Step 2: check if the model wanted to call a function
        if tool_calls:
            # Step 3: call the function
            # Note: the JSON response may not always be valid; be sure to handle errors
            available_functions = {
                "get_current_weather": get_current_weather,
            }  # only one function in this example, but you can have multiple
            messages.append(response_message)  # extend conversation with assistant's reply
            # Step 4: send the info for each function call and function response to the model
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_to_call = available_functions[function_name]
                function_args = json.loads(tool_call.function.arguments)
                function_response = function_to_call(
                    location=function_args.get("location"),
                    unit=function_args.get("unit"),
                )
                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response,
                    }
                )  # extend conversation with function response
            second_response = client.chat.completions.create(
                model="gpt-3.5-turbo-1106",
                messages=messages,
            )  # get a new response from the model where it can see the function responses
            return second_response
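
For completeness, here are the pieces that snippet assumes, adapted from the same guide (the weather data is a hard-coded dummy):

    # Dummy function returning canned weather data as a JSON string,
    # since tool message content must be a string
    def get_current_weather(location, unit="fahrenheit"):
        """Get the current weather in a given location."""
        return json.dumps({"location": location, "temperature": "22", "unit": unit})

    messages = [
        {"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}
    ]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ]

Asking about three cities in one prompt is what coaxes the model into emitting multiple tool calls in a single turn.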

@musab-mk commented

Hi, do you have any updates on this? We would love to add ChatLab's parallel calls into functionary. Thanks!

@rgbkrk (Owner, Author) commented Jan 16, 2024

On it today, thanks for the bump @musab-mk. #122

@rgbkrk (Owner, Author) commented Jan 16, 2024

This is in 1.3.0. I'll work on a PR for functionary next. Note that I'll use the llama.cpp setup for local testing, since I'm on a Mac.
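
For anyone wanting to try it, here's a minimal sketch of the ChatLab side; the register-then-chat pattern follows ChatLab's README, and the coin-flip function is just an illustrative stand-in:

    import random

    import chatlab

    def flip_a_coin():
        """Flip a coin and return heads or tails."""
        return random.choice(["heads", "tails"])

    chat = chatlab.Chat()
    chat.register(flip_a_coin)

    # With parallel tool calls, a single model turn can trigger several
    # flips; each one runs and its result is fed back to the model.
    # (Top-level await works in a notebook, ChatLab's usual home.)
    await chat("Please flip three coins for me")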

@musabgultekin commented

Thank you @rgbkrk!

@rgbkrk (Owner, Author) commented Jan 16, 2024

Looking great with functionary!

[screenshot: parallel function calls running against functionary]

@rgbkrk (Owner, Author) commented Jan 17, 2024

PR made in MeetKai/functionary#93

I'll look into how to support this well for the in-notebook Chat as well.
