[q] Cohere async? Input should be a valid string [type=string_type, input_value=[cohere.Generation { #570

Closed
llermaly opened this issue Feb 6, 2024 · 2 comments

llermaly commented Feb 6, 2024

Hi, what's the best way to run Guard with Cohere's async client?

I tried this with no luck:

async def generate_response(prompt_template: str, prompt_params: dict, guard_model: BaseModel, model_name: str = 'command', max_tokens: int = 1024, temperature: float = 0.0):
    async with cohere.AsyncClient(api_key=os.getenv("COHERE_API_KEY")) as co:
        guard = gd.Guard.from_pydantic(guard_model, prompt=prompt_template)
        response = await guard(
            co.generate,
            prompt_params=prompt_params,
            model=model_name,
            max_tokens=max_tokens,
            temperature=temperature
        )
        return response

This was the error I had:

ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/Users/gustavollermalylarrain/Documents/proyectos/labs/test-cohere/backend-python/.venv/lib/python3.11/site-packages/guardrails/llm_providers.py", line 542, in __call__
    result = await self.invoke_llm(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gustavollermalylarrain/Documents/proyectos/labs/test-cohere/backend-python/.venv/lib/python3.11/site-packages/guardrails/llm_providers.py", line 709, in invoke_llm
    return LLMResponse(
           ^^^^^^^^^^^^
  File "/Users/gustavollermalylarrain/Documents/proyectos/labs/test-cohere/backend-python/.venv/lib/python3.11/site-packages/pydantic/main.py", line 164, in __init__
    __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 1 validation error for LLMResponse
output
  Input should be a valid string [type=string_type, input_value=[cohere.Generation {
  id:...ken_likelihoods: None
  }], input_type=Generations]
    For further information visit https://errors.pydantic.dev/2.4/v/string_type

So I tried wrapping the call so that Guard gets back a plain string (which is what the validation error seems to be asking for):

async def generate_response(prompt_template: str, prompt_params: dict, guard_model: BaseModel, model_name: str = 'command', max_tokens: int = 1024, temperature: float = 0.0):
    async with cohere.AsyncClient(api_key=os.getenv("COHERE_API_KEY")) as co:
        guard = gd.Guard.from_pydantic(guard_model, prompt=prompt_template)

        async def cohere_generate_wrapper(prompt: str, **kwargs) -> str:
            response = await co.generate(prompt=prompt, **kwargs)
            return response.generations[0].text

        response = await guard(
            cohere_generate_wrapper,
            prompt_params=prompt_params,
            model=model_name,
            max_tokens=max_tokens,
            temperature=temperature
        )
        return response

That works, but I wonder if I'm doing it the right way.

Thanks
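
For reference, here is a self-contained version of that wrapper pattern. The imports, the example Answer model, and the commented-out asyncio.run() call are assumptions added only to make the sketch runnable end to end; the guard(...) call shape is taken from the snippet above.

import os

import cohere
import guardrails as gd
from pydantic import BaseModel


# Hypothetical output schema, used only to make the sketch complete.
class Answer(BaseModel):
    summary: str


async def generate_response(prompt_template: str, prompt_params: dict):
    async with cohere.AsyncClient(api_key=os.getenv("COHERE_API_KEY")) as co:
        guard = gd.Guard.from_pydantic(Answer, prompt=prompt_template)

        # Guard validates LLMResponse.output as a plain string, so the wrapped
        # callable returns the generated text instead of the Generations object.
        async def cohere_generate_wrapper(prompt: str, **kwargs) -> str:
            response = await co.generate(prompt=prompt, **kwargs)
            return response.generations[0].text

        return await guard(
            cohere_generate_wrapper,
            prompt_params=prompt_params,
            model="command",
            max_tokens=1024,
            temperature=0.0,
        )


# Example invocation (the prompt template placeholder syntax depends on the
# guardrails version in use):
# asyncio.run(generate_response(my_template, {"topic": "async clients"}))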

github-actions bot commented Aug 22, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.

github-actions bot added the Stale label Aug 22, 2024

github-actions bot commented Sep 5, 2024

This issue was closed because it has been stalled for 14 days with no activity.

github-actions bot closed this as not planned Sep 5, 2024