
Quick Start

Welcome to the exciting world of aiFlows! 🚀

This tutorial will guide you through your first inference runs with different Flows from the FlowVerse, using question answering (QA) as an example task. In the process, you'll get familiar with the key aspects of the library and experience how, thanks to the modular abstraction and the FlowVerse, we can trivially switch between very different pre-implemented question-answering Flows!

The guide is organized in two sections:

  1. Section 1: Running your first QA Flow using a Flow from the FlowVerse 🥳
  2. Section 2: FlowVerse Playground Notebook

Section 1: Running your First QA Flow using a Flow from the FlowVerse

By the Tutorial's End, I Will Have...

  • Learned how to pull Flows from the FlowVerse
  • Run my first Flow
  • Understood how to pass my API information to a Flow

While we support many more API providers (including custom ones), for the sake of simplicity this tutorial uses OpenAI and Azure.

Step 1: Pull a Flow From the FlowVerse

Explore a diverse array of Flows on the FlowVerse here. In this demonstration, we'll illustrate how to use a Flow from the FlowVerse, focusing on the ChatAtomicFlow within the ChatFlowModule. This versatile Flow calls a large language model (LLM) via an API to generate textual responses to textual inputs. It's worth noting that the same process described here applies to any Flow available in the FlowVerse (implemented by any member of the community).

Without further ado, let's dive in!

Concretely, you would use the sync_dependencies function to pull the flow definition and its code from the FlowVerse:

from aiflows import flow_verse
dependencies = [
    {"url": "aiflows/ChatFlowModule", "revision": "main"}
]

flow_verse.sync_dependencies(dependencies)
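Since revision pins the version of the module you pull, you can also point it at something other than main. A minimal sketch, assuming revision accepts a specific commit hash (the hash below is a hypothetical placeholder):

# Pin the module to a specific revision for reproducibility
# ("<commit-hash>" is a hypothetical placeholder)
dependencies = [
    {"url": "aiflows/ChatFlowModule", "revision": "<commit-hash>"}
]
flow_verse.sync_dependencies(dependencies)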

Step 2: Install External Library Dependencies

Each Flow on the FlowVerse should include a pip_requirements.txt file listing its external library dependencies (if it has none, the file should be empty). You can check a module's dependencies on the FlowVerse; if there are any, make sure to install them before running the Flow.
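For modules that do declare dependencies, you would typically just run pip install -r on that file from your shell. A programmatic sketch of the same step, assuming the module path created in Step 1:

import subprocess
import sys

# Install a synced module's external dependencies (if any)
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-r",
    "flow_modules/aiflows/ChatFlowModule/pip_requirements.txt",
])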

As you can see on its FlowVerse page, the ChatFlowModule doesn't have any external dependencies, so we're all set.

Step 3: Run the Flow!

After executing sync_dependencies, the code of the ChatFlowModule has been pulled into your local repository under flow_modules/. We can now simply import it:

from flow_modules.aiflows.ChatFlowModule import ChatAtomicFlow

Set your API information (copy-paste it):

from aiflows.backends.api_info import ApiInfo

# OpenAI backend
api_key = ""  # copy-paste your API key here
api_information = [ApiInfo(backend_used="openai", api_key=api_key)]

# Azure backend
# api_key = ""  # copy-paste your API key here
# api_base = ""  # copy-paste your API base here
# api_version = ""  # copy-paste your API version here
# api_information = [ApiInfo(backend_used="azure",
#                            api_base=api_base,
#                            api_key=api_key,
#                            api_version=api_version)]
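If you'd rather not paste secrets into source files, you can read the key from an environment variable instead (the config-loading step below does the same). A minimal sketch, reusing the ApiInfo import from above:

import os

# Read the key from the environment instead of hard-coding it
api_key = os.getenv("OPENAI_API_KEY", "")
api_information = [ApiInfo(backend_used="openai", api_key=api_key)]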

Each Flow from the FlowVerse should have a demo.yaml file, a demo configuration showing how to instantiate the Flow.

Load the demo.yaml configuration:

from aiflows.utils.general_helpers import read_yaml_file
# get demo configuration
cfg = read_yaml_file("flow_modules/aiflows/ChatFlowModule/demo.yaml")

An attentive reader might have noticed that the field flow.backend.api_infos in demo.yaml is set to "???" (see the snippet below).

flow:  # Overrides the ChatAtomicFlow config
  _target_: flow_modules.aiflows.ChatFlowModule.ChatAtomicFlow.instantiate_from_default_config

  name: "SimpleQA_Flow"
  description: "A flow that answers questions."

  # ~~~ Input interface specification ~~~
  input_interface_non_initialized:
    - "question"

  # ~~~ backend model parameters ~~~
  backend:
    _target_: aiflows.backends.llm_lite.LiteLLMBackend
    api_infos: ???
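You can verify this directly. A quick check, assuming read_yaml_file returns a plain nested dict (in which case the unresolved placeholder shows up as the literal string "???"):

# Inspect the placeholder before injecting real credentials
print(cfg["flow"]["backend"]["api_infos"])  # expected output: ???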

The following overwrites the field with your personal API information:

import os
from aiflows.utils.general_helpers import quick_load_api_keys

# recursively find the 'api_infos' entries and put the API information in the config
api_information = [ApiInfo(backend_used="openai",
                           api_key=os.getenv("OPENAI_API_KEY"))]

quick_load_api_keys(cfg, api_information, key="api_infos")

Start a CoLink server, serve the Flow, and get an instance of it:

from aiflows.utils import serving
from aiflows.utils.colink_utils import start_colink_server
from aiflows.workers import run_dispatch_worker_thread

cl = start_colink_server()

# ~~~ Serve the Flow ~~~
serving.serve_flow(
    cl=cl,
    flow_class_name="flow_modules.aiflows.ChatFlowModule.ChatAtomicFlow",
    flow_endpoint="ChatAtomicFlow",
)

# ~~~ Start a worker thread ~~~
run_dispatch_worker_thread(cl)

# ~~~ Mount the Flow and get an instance of it via a proxy ~~~
proxy_flow = serving.get_flow_instance(
    cl=cl,
    flow_endpoint="ChatAtomicFlow",
    user_id="local",
    config_overrides=cfg,
)
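At this point the ChatAtomicFlow runs behind your local CoLink server: serve_flow registers it under the ChatAtomicFlow endpoint, the dispatch worker thread executes incoming requests, and proxy_flow is a lightweight handle that forwards messages to the served instance.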

Run your Flow:

# ~~~ Get the data ~~~
data = {"id": 0, "question": "What is the capital of France?"}

input_message = proxy_flow.package_input_message(data=data)

# ~~~ Run inference ~~~
future = proxy_flow.get_reply_future(input_message)

# Uncomment this line if you would like to get the full message back
# reply_message = future.get_message()
reply_data = future.get_data()

# ~~~ Print the output ~~~
print("~~~~~~Reply~~~~~~")
print(reply_data)
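Note that get_reply_future returns a future rather than the answer itself; the call to future.get_data() (or future.get_message()) is what blocks until the served Flow has produced its reply.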

Congratulations! You've successfully run your first question-answering Flow!


You can find this complete example in runChatAtomicFlow.py.

To run it, use the following commands in your terminal (make sure to copy-paste your keys first):

cd examples/quick_start/
python runChatAtomicFlow.py

Upon execution, the result should appear as follows:

[{'api_output': 'The capital of France is Paris.'}]

Section 2: FlowVerse Playground Notebook

Want to quickly run some Flows from the FlowVerse? Check out our Jupyter notebook flow_verse_playground.ipynb, where you can quickly switch between the following Flows from the FlowVerse: