
Support Ollama and other LLMs #84

Closed · iSevenDays opened this issue Sep 26, 2024 · 4 comments

@iSevenDays (Contributor)

I like the idea of your project, and I hope you'll add support for Ollama and other LLMs.
I just checked the project to see if I could use it, and unfortunately I can't.

I followed your example https://motleycrew.readthedocs.io/en/latest/examples/research_agent.html and wanted to try it out with llama3.1 and other LLMs.

Here are examples of why it isn't possible right now:

class QuestionTask(Task):
    """Task to generate subquestions based on a given question."""

    def __init__(
        self,
        question: str,
        query_tool: MotleyTool,
        crew: MotleyCrew,
        max_iter: int = 10,
        allow_async_units: bool = False,
        name: str = "QuestionTask",
    ):
        super().__init__(
            name=name,
            task_unit_class=QuestionGenerationTaskUnit,
            crew=crew,
            allow_async_units=allow_async_units,  # <-- 'llm' parameter is missing
        )

        self.max_iter = max_iter
        self.n_iter = 0
        self.question = Question(question=question)
        self.graph_store.insert_node(self.question)
        self.question_prioritization_tool = QuestionPrioritizerTool()
        self.question_generation_tool = QuestionGeneratorTool(
            query_tool=query_tool, graph=self.graph_store  # <-- 'llm' parameter is missing
        )
class QuestionGeneratorTool(MotleyTool):
    """
    Gets a question as input
    Retrieves relevant docs (llama index basic RAG)
    (Retrieves existing questions from graph (to avoid overlap))
    Generates extra questions (research agent prompt)

    Adds questions as children of current q by calling Q insertion tool once
    exits
    """

    def __init__(
        self,
        query_tool: MotleyTool,
        graph: MotleyGraphStore,
        max_questions: int = 3,
        llm: Optional[BaseLanguageModel] = None,
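
For illustration, here is roughly what I would like to be able to write (just a sketch; I'm guessing at the init_llm import paths from your docs, and a non-OpenAI init_llm target is exactly what this issue asks for):

from motleycrew.common import LLMFramework
from motleycrew.common.llms import init_llm

# Hypothetical: build a locally served llama3.1 LLM and hand it to the tool directly
llm = init_llm(LLMFramework.LANGCHAIN, llm_name="llama3.1")

question_generation_tool = QuestionGeneratorTool(
    query_tool=query_tool,  # the same query tool that QuestionTask receives
    graph=graph_store,      # the crew's graph store
    llm=llm,                # QuestionTask currently gives me no way to pass this through
)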

Related example

class SimpleRetrieverTool(MotleyTool):
...
tool = make_retriever_langchain_tool(
    data_dir, persist_dir, return_strings_only=return_strings_only
)


def make_retriever_langchain_tool(data_dir, persist_dir, return_strings_only: bool = False):
    text_embedding_model = "text-embedding-ada-002"
    embeddings = OpenAIEmbedding(model=text_embedding_model)

OpenAI is hard-coded all over the code.
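
What I would expect instead is being able to inject the embedding model from the outside, roughly like this (a sketch only; the embeddings parameter is hypothetical and doesn't exist today, which is the point):

from llama_index.embeddings.ollama import OllamaEmbedding

local_embeddings = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://localhost:11434",
)

tool = make_retriever_langchain_tool(
    data_dir,
    persist_dir,
    return_strings_only=return_strings_only,
    embeddings=local_embeddings,  # hypothetical parameter; today OpenAIEmbedding is hard-coded inside
)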

@whimo (Contributor) commented Sep 27, 2024

Hi @iSevenDays, and thank you for raising the issue; that's a fair point. While motleycrew supports Ollama and various cloud providers at its core, the research agent project was built a while ago, when we didn't yet concern ourselves with custom LLMs.

I'll get back to you today with a fix.

@whimo (Contributor) commented Sep 27, 2024

I've merged a version that lets you specify a custom LLM and embeddings model (#86). If you have any further questions, feel free to comment :)

@iSevenDays (Contributor, Author)

Hi @whimo, thanks for the update!
I added a local embeddings example in #87.

Usage:

from llama_index.embeddings.ollama import OllamaEmbedding

ollama_embedding = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://localhost:11434",
    ollama_additional_kwargs={"mirostat": 0},
)
llm = init_llm(LLMFramework.LANGCHAIN, llm_name="llama3.1")

However, I still get an error saying an OpenAI API key is required when executing the code below. Maybe you can help me figure out what I should do differently?

# And now run the recipes
done_items = crew.run()
AuthenticationError                       Traceback (most recent call last)
Cell In[10], line 2
      1 # And now run the recipes
----> 2 done_items = crew.run()

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/crew/crew.py:51, in MotleyCrew.run(self)
     49 """Run the crew."""
     50 if self.async_backend == AsyncBackend.NONE:
---> 51     result = self._run_sync()
     52 elif self.async_backend == AsyncBackend.ASYNCIO:
     53     try:

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/crew/crew.py:185, in MotleyCrew._run_sync(self)
    182 available_tasks = self.get_available_tasks()
    183 logger.info("Available tasks: %s", available_tasks)
--> 185 for agent, task, unit in self._prepare_next_unit_for_dispatch(set()):
    186     result = agent.invoke(unit.as_dict())
    188     self._handle_task_unit_completion(
    189         task=task,
    190         unit=unit,
   (...)
    193         done_units=done_units,
    194     )

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/crew/crew.py:123, in MotleyCrew._prepare_next_unit_for_dispatch(self, running_sync_tasks)
    119     continue
    121 logger.info("Processing task: %s", task)
--> 123 next_unit = task.get_next_unit()
    125 if next_unit is None:
    126     logger.info("Got no matching units for task %s", task)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/applications/research_agent/question_task.py:67, in QuestionTask.get_next_unit(self)
     64 if not len(question_candidates):
     65     return None
---> 67 most_pertinent_question = self.question_prioritization_tool.invoke(
     68     {
     69         "original_question": self.question,
     70         "unanswered_questions": question_candidates,
     71     }
     72 )
     73 logger.info("Most pertinent question according to the tool: %s", most_pertinent_question)
     74 return QuestionGenerationTaskUnit(question=most_pertinent_question)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/tools/tool.py:134, in MotleyTool.invoke(self, input, config, **kwargs)
    128 def invoke(
    129     self,
    130     input: Union[str, Dict],
    131     config: Optional[RunnableConfig] = None,
    132     **kwargs: Any,
    133 ) -> Any:
--> 134     return self.tool.invoke(input=input, config=config, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/tools/base.py:397, in BaseTool.invoke(self, input, config, **kwargs)
    390 def invoke(
    391     self,
    392     input: Union[str, Dict, ToolCall],
    393     config: Optional[RunnableConfig] = None,
    394     **kwargs: Any,
    395 ) -> Any:
    396     tool_input, kwargs = _prep_run_args(input, config, **kwargs)
--> 397     return self.run(tool_input, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/tools/base.py:586, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)
    584 if error_to_raise:
    585     run_manager.on_tool_error(error_to_raise)
--> 586     raise error_to_raise
    587 output = _format_output(content, artifact, tool_call_id, self.name, status)
    588 run_manager.on_tool_end(output, color=color, name=self.name, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/tools/base.py:555, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)
    553 if config_param := _get_runnable_config_param(self._run):
    554     tool_kwargs[config_param] = config
--> 555 response = context.run(self._run, *tool_args, **tool_kwargs)
    556 if self.response_format == "content_and_artifact":
    557     if not isinstance(response, tuple) or len(response) != 2:

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/tools/tool.py:116, in MotleyTool._patch_tool_run.<locals>.patched_run(*args, **kwargs)
    113 @functools.wraps(original_run)
    114 def patched_run(*args, **kwargs):
    115     try:
--> 116         result = original_run(*args, **kwargs)
    117         if self.return_direct:
    118             raise DirectOutput(result)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/tools/structured.py:69, in StructuredTool._run(self, config, run_manager, *args, **kwargs)
     67     if config_param := _get_runnable_config_param(self.func):
     68         kwargs[config_param] = config
---> 69     return self.func(*args, **kwargs)
     70 raise NotImplementedError("StructuredTool does not support sync invocation.")

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/applications/research_agent/question_prioritizer.py:100, in create_question_prioritizer_langchain_tool.<locals>.<lambda>(original_question, unanswered_questions)
     84     return unanswered_questions[most_pertinent_question_id]
     86 this_chain = (
     87     RunnablePassthrough.assign(
     88         original_question_text=lambda x: x["original_question"].question,
   (...)
     96     | get_most_pertinent_question
     97 )
     99 langchain_tool = StructuredTool.from_function(
--> 100     func=lambda original_question, unanswered_questions: this_chain.invoke(
    101         {"original_question": original_question, "unanswered_questions": unanswered_questions}
    102     ),
    103     name=question_prioritizer.name,
    104     description=question_prioritizer.tool.description,
    105     args_schema=QuestionPrioritizerInput,
    106 )
    108 return langchain_tool

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/base.py:2879, in RunnableSequence.invoke(self, input, config, **kwargs)
   2877             input = context.run(step.invoke, input, config, **kwargs)
   2878         else:
-> 2879             input = context.run(step.invoke, input, config)
   2880 # finish the root run
   2881 except BaseException as e:

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py:495, in RunnableAssign.invoke(self, input, config, **kwargs)
    489 def invoke(
    490     self,
    491     input: Dict[str, Any],
    492     config: Optional[RunnableConfig] = None,
    493     **kwargs: Any,
    494 ) -> Dict[str, Any]:
--> 495     return self._call_with_config(self._invoke, input, config, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/base.py:1786, in Runnable._call_with_config(self, func, input, config, run_type, serialized, **kwargs)
   1782     context = copy_context()
   1783     context.run(_set_config_context, child_config)
   1784     output = cast(
   1785         Output,
-> 1786         context.run(
   1787             call_func_with_variable_args,  # type: ignore[arg-type]
   1788             func,  # type: ignore[arg-type]
   1789             input,  # type: ignore[arg-type]
   1790             config,
   1791             run_manager,
   1792             **kwargs,
   1793         ),
   1794     )
   1795 except BaseException as e:
   1796     run_manager.on_chain_error(e)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/config.py:398, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
    396 if run_manager is not None and accepts_run_manager(func):
    397     kwargs["run_manager"] = run_manager
--> 398 return func(input, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py:482, in RunnableAssign._invoke(self, input, run_manager, config, **kwargs)
    469 def _invoke(
    470     self,
    471     input: Dict[str, Any],
   (...)
    474     **kwargs: Any,
    475 ) -> Dict[str, Any]:
    476     assert isinstance(
    477         input, dict
    478     ), "The input to RunnablePassthrough.assign() must be a dict."
    480     return {
    481         **input,
--> 482         **self.mapper.invoke(
    483             input,
    484             patch_config(config, callbacks=run_manager.get_child()),
    485             **kwargs,
    486         ),
    487     }

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/base.py:3580, in RunnableParallel.invoke(self, input, config)
   3575     with get_executor_for_config(config) as executor:
   3576         futures = [
   3577             executor.submit(_invoke_step, step, input, config, key)
   3578             for key, step in steps.items()
   3579         ]
-> 3580         output = {key: future.result() for key, future in zip(steps, futures)}
   3581 # finish the root run
   3582 except BaseException as e:

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/base.py:3580, in <dictcomp>(.0)
   3575     with get_executor_for_config(config) as executor:
   3576         futures = [
   3577             executor.submit(_invoke_step, step, input, config, key)
   3578             for key, step in steps.items()
   3579         ]
-> 3580         output = {key: future.result() for key, future in zip(steps, futures)}
   3581 # finish the root run
   3582 except BaseException as e:

File ~/mambaforge/envs/crew/lib/python3.10/concurrent/futures/_base.py:458, in Future.result(self, timeout)
    456     raise CancelledError()
    457 elif self._state == FINISHED:
--> 458     return self.__get_result()
    459 else:
    460     raise TimeoutError()

File ~/mambaforge/envs/crew/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
    401 if self._exception:
    402     try:
--> 403         raise self._exception
    404     finally:
    405         # Break a reference cycle with the exception in self._exception
    406         self = None

File ~/mambaforge/envs/crew/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
     55     return
     57 try:
---> 58     result = self.fn(*self.args, **self.kwargs)
     59 except BaseException as exc:
     60     self.future.set_exception(exc)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/base.py:3564, in RunnableParallel.invoke.<locals>._invoke_step(step, input, config, key)
   3562 context = copy_context()
   3563 context.run(_set_config_context, child_config)
-> 3564 return context.run(
   3565     step.invoke,
   3566     input,
   3567     child_config,
   3568 )

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/tools/base.py:397, in BaseTool.invoke(self, input, config, **kwargs)
    390 def invoke(
    391     self,
    392     input: Union[str, Dict, ToolCall],
    393     config: Optional[RunnableConfig] = None,
    394     **kwargs: Any,
    395 ) -> Any:
    396     tool_input, kwargs = _prep_run_args(input, config, **kwargs)
--> 397     return self.run(tool_input, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/tools/base.py:586, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)
    584 if error_to_raise:
    585     run_manager.on_tool_error(error_to_raise)
--> 586     raise error_to_raise
    587 output = _format_output(content, artifact, tool_call_id, self.name, status)
    588 run_manager.on_tool_end(output, color=color, name=self.name, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/tools/base.py:555, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)
    553 if config_param := _get_runnable_config_param(self._run):
    554     tool_kwargs[config_param] = config
--> 555 response = context.run(self._run, *tool_args, **tool_kwargs)
    556 if self.response_format == "content_and_artifact":
    557     if not isinstance(response, tuple) or len(response) != 2:

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/tools/tool.py:116, in MotleyTool._patch_tool_run.<locals>.patched_run(*args, **kwargs)
    113 @functools.wraps(original_run)
    114 def patched_run(*args, **kwargs):
    115     try:
--> 116         result = original_run(*args, **kwargs)
    117         if self.return_direct:
    118             raise DirectOutput(result)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/tools/structured.py:69, in StructuredTool._run(self, config, run_manager, *args, **kwargs)
     67     if config_param := _get_runnable_config_param(self.func):
     68         kwargs[config_param] = config
---> 69     return self.func(*args, **kwargs)
     70 raise NotImplementedError("StructuredTool does not support sync invocation.")

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/motleycrew/tools/llm_tool.py:76, in create_llm_langchain_tool.<locals>.call_llm(**kwargs)
     74 def call_llm(**kwargs) -> str:
     75     chain = prompt | llm
---> 76     return chain.invoke(kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/runnables/base.py:2879, in RunnableSequence.invoke(self, input, config, **kwargs)
   2877             input = context.run(step.invoke, input, config, **kwargs)
   2878         else:
-> 2879             input = context.run(step.invoke, input, config)
   2880 # finish the root run
   2881 except BaseException as e:

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:277, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    266 def invoke(
    267     self,
    268     input: LanguageModelInput,
   (...)
    272     **kwargs: Any,
    273 ) -> BaseMessage:
    274     config = ensure_config(config)
    275     return cast(
    276         ChatGeneration,
--> 277         self.generate_prompt(
    278             [self._convert_input(input)],
    279             stop=stop,
    280             callbacks=config.get("callbacks"),
    281             tags=config.get("tags"),
    282             metadata=config.get("metadata"),
    283             run_name=config.get("run_name"),
    284             run_id=config.pop("run_id", None),
    285             **kwargs,
    286         ).generations[0][0],
    287     ).message

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:777, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    769 def generate_prompt(
    770     self,
    771     prompts: List[PromptValue],
   (...)
    774     **kwargs: Any,
    775 ) -> LLMResult:
    776     prompt_messages = [p.to_messages() for p in prompts]
--> 777     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:634, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    632         if run_managers:
    633             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 634         raise e
    635 flattened_outputs = [
    636     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    637     for res in results
    638 ]
    639 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:624, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    621 for i, m in enumerate(messages):
    622     try:
    623         results.append(
--> 624             self._generate_with_cache(
    625                 m,
    626                 stop=stop,
    627                 run_manager=run_managers[i] if run_managers else None,
    628                 **kwargs,
    629             )
    630         )
    631     except BaseException as e:
    632         if run_managers:

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:846, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    844 else:
    845     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 846         result = self._generate(
    847             messages, stop=stop, run_manager=run_manager, **kwargs
    848         )
    849     else:
    850         result = self._generate(messages, stop=stop, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/langchain_openai/chat_models/base.py:686, in BaseChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
    684     generation_info = {"headers": dict(raw_response.headers)}
    685 else:
--> 686     response = self.client.create(**payload)
    687 return self._create_chat_result(response, generation_info)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/openai/_utils/_utils.py:274, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    272             msg = f"Missing required argument: {quote(missing[0])}"
    273     raise TypeError(msg)
--> 274 return func(*args, **kwargs)

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/openai/resources/chat/completions.py:704, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, n, parallel_tool_calls, presence_penalty, response_format, seed, service_tier, stop, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    668 @required_args(["messages", "model"], ["messages", "model", "stream"])
    669 def create(
    670     self,
   (...)
    701     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    702 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
    703     validate_response_format(response_format)
--> 704     return self._post(
    705         "/chat/completions",
    706         body=maybe_transform(
    707             {
    708                 "messages": messages,
    709                 "model": model,
    710                 "frequency_penalty": frequency_penalty,
    711                 "function_call": function_call,
    712                 "functions": functions,
    713                 "logit_bias": logit_bias,
    714                 "logprobs": logprobs,
    715                 "max_completion_tokens": max_completion_tokens,
    716                 "max_tokens": max_tokens,
    717                 "n": n,
    718                 "parallel_tool_calls": parallel_tool_calls,
    719                 "presence_penalty": presence_penalty,
    720                 "response_format": response_format,
    721                 "seed": seed,
    722                 "service_tier": service_tier,
    723                 "stop": stop,
    724                 "stream": stream,
    725                 "stream_options": stream_options,
    726                 "temperature": temperature,
    727                 "tool_choice": tool_choice,
    728                 "tools": tools,
    729                 "top_logprobs": top_logprobs,
    730                 "top_p": top_p,
    731                 "user": user,
    732             },
    733             completion_create_params.CompletionCreateParams,
    734         ),
    735         options=make_request_options(
    736             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    737         ),
    738         cast_to=ChatCompletion,
    739         stream=stream or False,
    740         stream_cls=Stream[ChatCompletionChunk],
    741     )

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/openai/_base_client.py:1270, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1256 def post(
   1257     self,
   1258     path: str,
   (...)
   1265     stream_cls: type[_StreamT] | None = None,
   1266 ) -> ResponseT | _StreamT:
   1267     opts = FinalRequestOptions.construct(
   1268         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1269     )
-> 1270     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/openai/_base_client.py:947, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    944 else:
    945     retries_taken = 0
--> 947 return self._request(
    948     cast_to=cast_to,
    949     options=options,
    950     stream=stream,
    951     stream_cls=stream_cls,
    952     retries_taken=retries_taken,
    953 )

File ~/mambaforge/envs/crew/lib/python3.10/site-packages/openai/_base_client.py:1051, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1048         err.response.read()
   1050     log.debug("Re-raising status error")
-> 1051     raise self._make_status_error_from_response(err.response) from None
   1053 return self._process_response(
   1054     cast_to=cast_to,
   1055     options=options,
   (...)
   1059     retries_taken=retries_taken,
   1060 )

AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxxxx*******************************xxxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

@whimo (Contributor) commented Sep 28, 2024

Hi @iSevenDays, thanks for the PR!

We don't auto-detect the LLM provider in init_llm (yet), so you have to specify it explicitly (see https://motleycrew.readthedocs.io/en/latest/choosing_llms.html#providing-an-llm-to-an-agent); in your case that's "ollama" or LLMProvider.OLLAMA. The default provider is OpenAI.
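
Roughly, that looks like this (a minimal sketch; see the docs page above for the exact argument names):

from motleycrew.common import LLMFramework, LLMProvider
from motleycrew.common.llms import init_llm

# Select Ollama explicitly; the default provider is OpenAI, which is why
# the OpenAI API key was still being requested.
llm = init_llm(
    llm_framework=LLMFramework.LANGCHAIN,
    llm_provider=LLMProvider.OLLAMA,  # or simply "ollama"
    llm_name="llama3.1",
)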
