This example demonstrates how to use the framework for outfit recommendation tasks with loop functionality. The example code can be found in the `examples/step3_outfit_with_loop` directory.

```bash
cd examples/step3_outfit_with_loop
```
This example implements an interactive outfit recommendation workflow that uses a loop-based approach to refine recommendations based on user feedback. The workflow consists of the following key components:

- Image Input: serves as the starting point for the recommendation process
- Interactive QA Loop with Weather Integration: refines the recommendation through questions and answers; the loop terminates when OutfitDecider returns `decision=true`
- Final Recommendation: OutfitRecommendation generates the final outfit suggestions based on the gathered context

Workflow flow:

```
Start -> Image Input -> OutfitQA Loop (QA + Weather Search + Decision) -> Final Recommendation -> End
```
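To make the loop structure concrete, here is a minimal sketch of a QA loop that stops once the decider returns `decision=true`. This is not the framework's actual API; `ask_user` and `decide` are hypothetical stand-ins for the OutfitQA and OutfitDecider nodes:

```python
def run_qa_loop(ask_user, decide, max_turns=5):
    """Collect answers until the decider signals it has enough context.

    ask_user and decide are hypothetical callables standing in for the
    OutfitQA and OutfitDecider nodes; max_turns guards against an
    endless loop if the decider never returns decision=True.
    """
    answers = []
    for _ in range(max_turns):
        answers.append(ask_user())
        if decide(answers).get("decision"):
            break
    return answers
```

The `max_turns` cap is a defensive choice: a decider backed by an LLM may never emit a terminating decision, so bounding the loop keeps the workflow from hanging.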
The workflow leverages Redis for state management and the Conductor server for workflow orchestration. This architecture enables:

- Image-based outfit recommendations
- Weather-aware outfit suggestions using real-time data
- Interactive refinement through structured Q&A
- Context-aware suggestions incorporating multiple factors
- Persistent state management across the workflow
The container.yaml file is a configuration file that manages dependencies and settings for different components of the system, including Conductor connections, Redis connections, and other service configurations. To set up your configuration:

Generate the container.yaml file:

```bash
python compile_container.py
```

This will create a container.yaml file with default settings under `examples/step3_outfit_with_loop`.
Configure your LLM settings in `configs/llms/gpt.yml` and `configs/llms/text_res.yml`:

```bash
export custom_openai_key="your_openai_api_key"
export custom_openai_endpoint="your_openai_endpoint"
```

Configure other model settings, such as temperature, as needed through environment variables or by directly modifying the yml files.
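Because the workflow reads these credentials from the environment at startup, a quick preflight check can save a confusing failure mid-run. This is an optional convenience sketch, not part of the example code:

```python
import os

# The variable names the example's yml configs read from the environment.
REQUIRED_VARS = ["custom_openai_key", "custom_openai_endpoint"]

def missing_vars(names, env=None):
    """Return the subset of names that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in names if not env.get(name)]

def preflight(names=REQUIRED_VARS):
    """Raise a clear error before the workflow starts if credentials are absent."""
    missing = missing_vars(names)
    if missing:
        raise EnvironmentError("Missing environment variables: " + ", ".join(missing))
```

Calling `preflight()` at the top of a run script turns a late, cryptic API failure into an immediate, named error.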
Configure your Bing Search API key in `configs/tools/websearch.yml`:

Set your Bing API key through an environment variable or by directly modifying the yml file:

```bash
export bing_api_key="your_bing_api_key"
```
Update settings in the generated `container.yaml`:

- Configure the Redis connection settings in the `redis_stream_client` and `redis_stm_client` sections

For terminal/CLI usage:

```bash
python run_cli.py
```

For app/GUI usage:

```bash
python run_app.py
```
If you encounter issues:

- Verify Redis is running and accessible
- Check your OpenAI API key and Bing API key are valid
- Ensure all dependencies are installed correctly
- Review logs for any error messages
- Confirm Conductor server is running and accessible
- Check Redis Stream client and Redis STM client configuration
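For the first troubleshooting item, you can confirm Redis is reachable without installing any client library by speaking the RESP protocol over a raw socket. A minimal sketch, assuming Redis on its default localhost:6379:

```python
import socket

def encode_resp(*parts):
    """Encode a command as a RESP array, the wire format Redis expects."""
    msg = f"*{len(parts)}\r\n"
    for part in parts:
        msg += f"${len(part)}\r\n{part}\r\n"
    return msg.encode()

def redis_ping(host="localhost", port=6379, timeout=2.0):
    """Return True if a Redis server answers PING with +PONG."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as conn:
            conn.sendall(encode_resp("PING"))
            return conn.recv(16).startswith(b"+PONG")
    except OSError:
        return False
```

If `redis_ping()` returns False, fix the Redis connection before debugging anything else in the workflow.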
Coming soon! This section will provide detailed instructions for building the step3_outfit_with_loop example step by step.

This example demonstrates how to use the framework for outfit recommendation tasks with long-term memory functionality. The example code can be found in the `examples/step4_outfit_with_ltm` directory.

```bash
cd examples/step4_outfit_with_ltm
```
This example implements an outfit recommendation system with long-term memory capabilities through two main workflows:

- Image Storage Workflow, sequence: Image Listening -> Preprocessing -> LTM Storage
- Outfit Recommendation Workflow

The system leverages both short-term memory (Redis STM) and long-term memory (Milvus LTM) for:

- Efficient image storage and retrieval
- Persistent clothing item database
- Context-aware outfit recommendations
- Interactive preference refinement
- Stateful conversation management

Workflow flows:

```
Image Storage:  Listen -> Preprocess -> Store in LTM
Recommendation: QA Loop (QA + Decision) -> Generation -> Conclusion
```

The system uses Redis for state management, Milvus for long-term image storage, and Conductor for workflow orchestration. This architecture enables:

- Scalable image database management
- Intelligent outfit recommendations based on stored items
- Interactive preference gathering
- Persistent clothing knowledge base
- Efficient retrieval of relevant items
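The image storage flow above can be sketched as a simple pipeline. Everything here is illustrative: `preprocess`, `embed`, and the `ltm` store are hypothetical stand-ins for the framework's actual components.

```python
def store_outfit_image(image, preprocess, embed, ltm):
    """Listen -> Preprocess -> Store in LTM, as one function.

    preprocess turns a raw image into a record (e.g. a caption plus
    metadata), embed turns that record into a vector, and ltm is any
    store with an insert(vector, record) method, such as Milvus.
    """
    record = preprocess(image)
    vector = embed(record["caption"])
    ltm.insert(vector, record)
    return record
```

An in-memory stand-in for `ltm` is enough to exercise this shape before wiring in a real Milvus collection.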
Run `git lfs install`, then pull the sample images with `git lfs pull`.
The container.yaml file is a configuration file that manages dependencies and settings for different components of the system, including Conductor connections, Redis connections, Milvus connections and other service configurations. To set up your configuration:

Generate the container.yaml files:

```bash
# For image storage workflow
python image_storage/compile_container.py

# For outfit recommendation workflow
python outfit_from_storage/compile_container.py
```

This will create two container.yaml files with default settings under the `image_storage` and `outfit_from_storage` directories:

- `image_storage/container.yaml`: Configuration for the image storage workflow
- `outfit_from_storage/container.yaml`: Configuration for the outfit recommendation workflow
Configure your LLM settings in `configs/llms/gpt.yml` and `configs/llms/text_res.yml` in the two workflow directories:

```bash
export custom_openai_key="your_openai_api_key"
export custom_openai_endpoint="your_openai_endpoint"
```

Configure other model settings, such as temperature, as needed through environment variables or by directly modifying the yml files.
Configure your Bing Search API key in `configs/tools/websearch.yml` in the two workflow directories:

```bash
export bing_api_key="your_bing_api_key"
```
Configure your text encoder settings in `configs/llms/text_encoder.yml` in the two workflow directories:

```bash
export custom_openai_text_encoder_key="openai_text_encoder_key"
export custom_openai_text_encoder_endpoint="your_openai_endpoint"
```

Configure the settings for `MilvusLTM` in `container.yaml`: adjust the embedding dimension and other settings as needed through environment variables or by directly modifying the yml file.
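The configured `dim` must match the length of the vectors your text encoder actually produces, or inserts into the Milvus collection will fail. A small illustrative guard (not part of the example code) makes a mismatch obvious at startup:

```python
def check_embedding_dim(configured_dim, sample_embedding):
    """Raise early if the encoder's output length differs from the configured dim."""
    actual = len(sample_embedding)
    if actual != configured_dim:
        raise ValueError(
            f"Embedding dim mismatch: container.yaml says {configured_dim}, "
            f"but the encoder returned vectors of length {actual}"
        )
    return actual
```

Running one sample text through the encoder and passing the result here catches the most common Milvus setup error before any images are stored.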
Update settings in the generated `container.yaml`:

- Configure the Redis connection settings in the `redis_stream_client` and `redis_stm_client` sections
- In the `components` section, set `storage_name` and `dim` for MilvusLTM

For terminal/CLI usage:

```bash
python image_storage/run_image_storage_cli.py
```

For app usage:

```bash
python image_storage/run_image_storage_app.py
```

This workflow will store outfit images in the Milvus database.
For terminal/CLI usage:

```bash
python outfit_from_storage/run_outfit_recommendation_cli.py
```

For app/GUI usage:

```bash
python outfit_from_storage/run_outfit_recommendation_app.py
```

This workflow will retrieve outfit recommendations from the stored images.
If you encounter issues:

- Verify Redis is running and accessible
- Check your OpenAI API key and Bing API key are valid
- Ensure all dependencies are installed correctly
- Review logs for any error messages
- Confirm Conductor server is running and accessible
- Check Redis Stream client and Redis STM client configuration
Coming soon! This section will provide detailed instructions for building the step4_outfit_with_ltm example step by step.

This example demonstrates how to use the framework for outfit recommendation tasks with switch_case functionality. The example code can be found in the `examples/step2_outfit_with_switch` directory.

```bash
cd examples/step2_outfit_with_switch
```
This example implements an outfit recommendation workflow that uses switch-case functionality to conditionally include weather information in the recommendation process. The workflow consists of the following key components:

- Extracts the user's outfit request instructions
- Weather Decision Logic: controls whether weather data should be fetched
- Conditional Weather Search: integrates weather data into the recommendation context
- Outfit Recommendation

The workflow runs these components in sequence.
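The switch-case behavior boils down to a single branch: fetch weather only when the decision logic says the request depends on it. A minimal sketch, with hypothetical stand-ins (`needs_weather`, `fetch_weather`) for the framework's nodes:

```python
def build_recommendation_context(instruction, needs_weather, fetch_weather):
    """Assemble the context passed to the recommendation step.

    needs_weather mirrors the Weather Decision Logic node; when it says
    yes, the Conditional Weather Search branch runs and its result is
    merged into the context, otherwise that branch is skipped entirely.
    """
    context = {"instruction": instruction}
    if needs_weather(instruction):
        context["weather"] = fetch_weather()
    return context
```

The point of the switch-case is exactly this skip: requests that do not depend on weather never pay for a web search.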
The container.yaml file is a configuration file that manages dependencies and settings for different components of the system, including Conductor connections, Redis connections, and other service configurations. To set up your configuration:

Generate the container.yaml file:

```bash
python compile_container.py
```

This will create a container.yaml file with default settings under `examples/step2_outfit_with_switch`.
Configure your LLM settings in `configs/llms/gpt.yml` and `configs/llms/text_res.yml`:

Set your OpenAI API key or compatible endpoint through an environment variable or by directly modifying the yml file:

```bash
export custom_openai_key="your_openai_api_key"
export custom_openai_endpoint="your_openai_endpoint"
```

Configure other model settings, such as temperature, as needed through environment variables or by directly modifying the yml file.
Configure your Bing Search API key in `configs/tools/websearch.yml`:

Set your Bing API key through an environment variable or by directly modifying the yml file:

```bash
export bing_api_key="your_bing_api_key"
```
Update settings in the generated `container.yaml`:

- Configure the Redis connection settings in the `redis_stream_client` and `redis_stm_client` sections

For terminal/CLI usage:

```bash
python run_cli.py
```

For app/GUI usage:

```bash
python run_app.py
```
If you encounter issues:

- Check Redis Stream client and Redis STM client configuration
- Ensure all dependencies are installed correctly
Coming soon! This section will provide detailed instructions for building the step2_outfit_with_switch example step by step.

This example demonstrates how to use the framework for visual question answering (VQA) tasks. The example code can be found in the `examples/step1_simpleVQA` directory.

```bash
cd examples/step1_simpleVQA
```
This example implements a simple Visual Question Answering (VQA) workflow that consists of two main components:

- Extracts the user's questions/instructions
- Simple VQA Processing

The workflow runs these two components in sequence.
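Under the hood, a VQA request to an OpenAI-compatible endpoint typically pairs the question text with an image reference in one multimodal message. The exact payload this framework sends may differ; the sketch below shows the common OpenAI-style shape as an illustration:

```python
def build_vqa_message(question, image_url):
    """Build an OpenAI-style multimodal chat message for a VQA request.

    The content list mixes a text part (the question) with an image_url
    part (the picture being asked about), which is the shape vision-capable
    chat endpoints generally accept.
    """
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }
```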
The container.yaml file is a configuration file that manages dependencies and settings for different components of the system, including Conductor connections, Redis connections, and other service configurations. To set up your configuration:

Generate the container.yaml file:

```bash
python compile_container.py
```

This will create a container.yaml file with default settings under `examples/step1_simpleVQA`.
Configure your LLM settings in `configs/llms/gpt.yml`:

```bash
export custom_openai_key="your_openai_api_key"
export custom_openai_endpoint="your_openai_endpoint"
```

Configure other model settings, such as temperature, as needed through environment variables or by directly modifying the yml file.
Update settings in the generated `container.yaml`:

- Configure the Redis connection settings in the `redis_stream_client` and `redis_stm_client` sections

For terminal/CLI usage:

```bash
python run_cli.py
```

For app/GUI usage:

```bash
python run_app.py
```
If you encounter issues:

- Verify Redis is running and accessible
- Check your OpenAI API key is valid
- Ensure all dependencies are installed correctly
- Review logs for any error messages
Coming soon! This section will provide detailed instructions for building and packaging the step1_simpleVQA example step by step.