## Outfit Recommendation with Loop Example

This example demonstrates how to use the framework for outfit recommendation tasks with loop functionality. The example code can be found in the examples/step3_outfit_with_loop directory.

This example implements an interactive outfit recommendation workflow that uses a loop-based approach to refine recommendations based on user feedback. The workflow consists of the following key components:

1. **Initial Image Input**
   - OutfitImageInput: handles the upload and processing of the initial clothing item image
   - Serves as the starting point for the recommendation process

2. **Interactive QA Loop with Weather Integration**
   - OutfitQA: conducts an interactive Q&A session to gather context and preferences
     - Uses the web search tool to fetch real-time weather data for the specified location
   - OutfitDecider: evaluates whether sufficient information has been collected, based on:
     - User preferences
     - Current weather conditions
   - Uses DoWhileTask to continue the loop until adequate information is gathered
   - The loop terminates when OutfitDecider returns decision=true

3. **Final Recommendation**
   - OutfitRecommendation: generates the final outfit suggestions based on the gathered preferences, the current weather conditions, and the initial image

The workflow leverages Redis for state management and the Conductor server for workflow orchestration. This architecture enables:

- Image-based outfit recommendations
- Weather-aware outfit suggestions using real-time data
- Interactive refinement through structured Q&A
- Context-aware suggestions incorporating multiple factors
- Persistent state management across the workflow
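The do-while control flow above can be sketched in plain Python. The functions below are illustrative stubs, not the framework's actual task classes; in the real workflow, DoWhileTask wires OutfitQA and OutfitDecider together and the loop condition comes from the decider's output:

```python
# Plain-Python sketch of the DoWhileTask control flow: the Q&A body runs
# at least once, and the loop exits when the decider returns True.

def outfit_qa(context, answers):
    """Stub: ask one clarifying question and record the answer."""
    question = f"Question {len(answers) + 1}: any preference?"
    answers.append(question)
    context["info_collected"] = len(answers)
    return context

def outfit_decider(context):
    """Stub: decide whether enough information has been gathered."""
    return context["info_collected"] >= 3  # decision=true ends the loop

def run_loop():
    context, answers = {"info_collected": 0}, []
    while True:                      # DoWhileTask: body executes first
        context = outfit_qa(context, answers)
        if outfit_decider(context):  # then the exit condition is checked
            break
    return context

print(run_loop()["info_collected"])  # → 3
```

The key property of the do-while shape is that the Q&A body always executes at least once before the exit condition is evaluated, which matches an interactive session that must ask at least one question.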
Prerequisites:

- Required packages installed (see requirements.txt)
- Access to OpenAI API or compatible endpoint
- Access to a Bing API key for web search functionality, used to fetch real-time weather information for outfit recommendations (see configs/tools/websearch.yml)
The container.yaml file manages dependencies and settings for the different components of the system, including Conductor connections, Redis connections, and other service configurations. To set up your configuration:

1. Generate the container.yaml file:

   ```bash
   python compile_container.py
   ```

   This will create a container.yaml file with default settings under examples/step3_outfit_with_loop.

2. Configure your LLM settings in configs/llms/gpt.yml and configs/llms/text_res.yml:

   - Set your OpenAI API key or compatible endpoint through environment variables or by directly modifying the yml files:

     ```bash
     export custom_openai_key="your_openai_api_key"
     export custom_openai_endpoint="your_openai_endpoint"
     ```

   - Configure other model settings such as temperature as needed, through environment variables or by directly modifying the yml files

3. Configure your Bing Search API key in configs/tools/websearch.yml:

   - Set your Bing API key through an environment variable or by directly modifying the yml file:

     ```bash
     export bing_api_key="your_bing_api_key"
     ```

4. Update settings in the generated container.yaml:

   - Modify the Redis connection settings:
     - Set the host, port, and credentials for your Redis instance
     - Configure both the redis_stream_client and redis_stm_client sections
   - Update the Conductor server URL under the conductor_config section
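As a rough illustration of the settings to look for, the Redis and Conductor sections of a generated container.yaml might resemble the fragment below. The exact key names and nesting depend on the version of the framework, so treat this as a sketch and always edit the file produced by compile_container.py rather than copying this verbatim:

```yaml
# Illustrative fragment only -- key names and layout are assumptions;
# locate the corresponding sections in your generated container.yaml.
conductor_config:
  base_url: http://localhost:8080   # your Conductor server URL
connectors:
  redis_stream_client:
    host: localhost
    port: 6379
    password: null                  # set credentials if your Redis requires them
  redis_stm_client:
    host: localhost
    port: 6379
    password: null
```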
If you encounter issues:

- Verify Redis is running and accessible
- Check that your OpenAI API key and Bing API key are valid
- Ensure all dependencies are installed correctly
- Review logs for any error messages
- Confirm the Conductor server is running and accessible
- Check the Redis Stream client and Redis STM client configuration
## Outfit Recommendation with Long-Term Memory Example

This example demonstrates how to use the framework for outfit recommendation tasks with long-term memory functionality. The example code can be found in the examples/step4_outfit_with_ltm directory.
The system uses Redis for state management, Milvus for long-term image storage, and Conductor for workflow orchestration. This architecture enables:

- Scalable image database management
- Intelligent outfit recommendations based on stored items
- Interactive preference gathering
- Persistent clothing knowledge base
- Efficient retrieval of relevant items
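The "efficient retrieval of relevant items" step boils down to nearest-neighbor search over embeddings. The sketch below shows the idea with a plain list standing in for the vector database; in the real workflow, Milvus performs this search at scale over the stored image embeddings:

```python
# Rank stored clothing items by cosine similarity between a query
# embedding and each item embedding, then return the top-k item names.
# The store and the 3-dimensional vectors are illustrative only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, store, k=2):
    """store: list of (item_name, embedding) pairs."""
    ranked = sorted(store, key=lambda item: cosine(query, item[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

store = [
    ("wool coat",   [0.9, 0.1, 0.0]),
    ("linen shirt", [0.1, 0.9, 0.2]),
    ("rain jacket", [0.8, 0.2, 0.1]),
]
print(top_k([1.0, 0.0, 0.0], store))  # → ['wool coat', 'rain jacket']
```

A vector database replaces the linear scan with an approximate index, which is what makes retrieval efficient once the clothing knowledge base grows large.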
Prerequisites:

- Required packages installed (see requirements.txt)
- Access to OpenAI API or compatible endpoint (see configs/llms/gpt.yml)
- Access to a Bing API key for web search functionality, used to fetch real-time weather information for outfit recommendations (see configs/tools/websearch.yml)
- Redis server running locally or remotely
- Conductor server running locally or remotely
- Milvus vector database (started automatically when the workflow runs)
- Sufficient storage space for the image database
- Git LFS installed via `git lfs install`; then pull the sample images with `git lfs pull`
The container.yaml file manages dependencies and settings for the different components of the system, including Conductor connections, Redis connections, Milvus connections, and other service configurations. To set up your configuration:

1. Generate the container.yaml files:

   ```bash
   # For the image storage workflow
   python image_storage/compile_container.py

   # For the outfit recommendation workflow
   python outfit_from_storage/compile_container.py
   ```

   This will create two container.yaml files with default settings under the image_storage and outfit_from_storage directories:

   - image_storage/container.yaml: configuration for the image storage workflow
   - outfit_from_storage/container.yaml: configuration for the outfit recommendation workflow

2. Configure your LLM settings in configs/llms/gpt.yml and configs/llms/text_res.yml in the two workflow directories:

   - Set your OpenAI API key or compatible endpoint through environment variables or by directly modifying the yml files:

     ```bash
     export custom_openai_key="your_openai_api_key"
     export custom_openai_endpoint="your_openai_endpoint"
     ```

   - Configure other model settings such as temperature as needed, through environment variables or by directly modifying the yml files

3. Configure your Bing Search API key in configs/tools/websearch.yml in the two workflow directories:

   - Set your Bing API key through an environment variable or by directly modifying the yml file:

     ```bash
     export bing_api_key="your_bing_api_key"
     ```

4. Configure your text encoder settings in configs/llms/text_encoder.yml in the two workflow directories:

   - Set your OpenAI text encoder endpoint and API key through environment variables or by directly modifying the yml file:

     ```bash
     export custom_openai_text_encoder_key="openai_text_encoder_key"
     export custom_openai_text_encoder_endpoint="your_openai_endpoint"
     ```

   - The default text encoder configuration uses OpenAI text embedding v3 with 3072 dimensions; if you switch encoders, make sure the dim value of MilvusLTM in container.yaml matches the new embedding size
   - Adjust the embedding dimension and other settings as needed through environment variables or by directly modifying the yml file

5. Update settings in the generated container.yaml files:

   - Modify the Redis connection settings:
     - Set the host, port, and credentials for your Redis instance
     - Configure both the redis_stream_client and redis_stm_client sections
   - Update the Conductor server URL under the conductor_config section
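The one setting that must stay in sync across files is the embedding dimension: the dim value of MilvusLTM in container.yaml has to equal the output size of the configured text encoder. As a hedged illustration (surrounding key names assumed; locate the MilvusLTM entry in your own generated file):

```yaml
# Illustrative fragment -- MilvusLTM and dim come from the docs above,
# the surrounding structure is an assumption about the generated file.
components:
  MilvusLTM:
    dim: 3072   # must match the embedding size of the configured text encoder
```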
If you encounter issues:

- Verify Redis is running and accessible
- Check that your OpenAI API key and Bing API key are valid
- Ensure all dependencies are installed correctly
- Review logs for any error messages
- Confirm the Conductor server is running and accessible
- Check the Redis Stream client and Redis STM client configuration
## Outfit Recommendation with Switch Example

This example demonstrates how to use the framework for outfit recommendation tasks with switch_case functionality. The example code can be found in the examples/step2_outfit_with_switch directory.

This example implements an outfit recommendation workflow that uses switch-case functionality to conditionally include weather information in the recommendation process. The workflow consists of the following key components:

1. **Input Interface**
   - Handles user input containing clothing requests and image data
   - Processes and caches any uploaded images
   - Extracts the user's outfit request instructions

2. **Weather Decision Logic**
   - WeatherDecider: analyzes the user's request to determine if weather information is needed
   - Makes a binary decision (0 or 1) based on context in the user's request
   - Controls whether weather data should be fetched

3. **Conditional Weather Search**
   - WeatherSearcher: executes only if WeatherDecider returns 0 (weather info needed)
   - Uses web search functionality to fetch current weather conditions
   - Integrates weather data into the recommendation context
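The switch_case branching above can be sketched in plain Python. The functions below are illustrative stubs, not the framework's task classes; the point is that the decider's 0/1 output selects whether the weather-search branch runs at all:

```python
# Sketch of conditional branching on a binary decider output.
# Stub logic and keyword list are assumptions for illustration only.

def weather_decider(request: str) -> int:
    """Stub: return 0 when the request implies weather matters, else 1."""
    keywords = ("outside", "today", "weather", "rain", "cold")
    return 0 if any(word in request.lower() for word in keywords) else 1

def weather_searcher(location: str) -> str:
    """Stub standing in for the web-search tool."""
    return f"12°C and light rain in {location}"

def build_context(request: str, location: str) -> dict:
    context = {"request": request}
    # switch_case: branch on the decider's output
    if weather_decider(request) == 0:  # 0 => weather info needed
        context["weather"] = weather_searcher(location)
    return context

print(build_context("What should I wear outside today?", "Berlin"))
print(build_context("Match a tie to this shirt", "Berlin"))
```

The first request mentions going outside, so the weather branch fires and its result is merged into the recommendation context; the second request is weather-independent, so the search is skipped entirely.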
Prerequisites:

- Required packages installed (see requirements.txt)
- Access to OpenAI API or compatible endpoint (see configs/llms/gpt.yml)
- Access to a Bing API key for web search functionality, used to fetch real-time weather information for outfit recommendations (see configs/tools/websearch.yml)
The container.yaml file manages dependencies and settings for the different components of the system, including Conductor connections, Redis connections, and other service configurations. To set up your configuration:

1. Generate the container.yaml file:

   ```bash
   python compile_container.py
   ```

   This will create a container.yaml file with default settings under examples/step2_outfit_with_switch.

2. Configure your LLM settings in configs/llms/gpt.yml and configs/llms/text_res.yml:

   - Set your OpenAI API key or compatible endpoint through environment variables or by directly modifying the yml files:

     ```bash
     export custom_openai_key="your_openai_api_key"
     export custom_openai_endpoint="your_openai_endpoint"
     ```

   - Configure other model settings such as temperature as needed, through environment variables or by directly modifying the yml files

3. Configure your Bing Search API key in configs/tools/websearch.yml:

   - Set your Bing API key through an environment variable or by directly modifying the yml file:

     ```bash
     export bing_api_key="your_bing_api_key"
     ```

4. Update settings in the generated container.yaml:

   - Modify the Redis connection settings:
     - Set the host, port, and credentials for your Redis instance
     - Configure both the redis_stream_client and redis_stm_client sections
   - Update the Conductor server URL under the conductor_config section
## Simple Visual Question Answering Example

This example demonstrates how to use the framework for visual question answering (VQA) tasks. The example code can be found in the examples/step1_simpleVQA directory.
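At its core, a simple VQA step sends one user message combining the question text and the image to an OpenAI-compatible endpoint. The sketch below only builds such a request payload (no network call); it follows the OpenAI chat-completions message format for image input, and the model name and image URL are placeholders, with the actual model configured via configs/llms/gpt.yml:

```python
# Build an OpenAI-style chat-completions payload for a VQA request.
# Model name and image URL are placeholders, not values from this example.
import json

def build_vqa_message(question: str, image_url: str) -> dict:
    """One user message carrying both the question and the image."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

payload = {
    "model": "gpt-4o",  # placeholder; set through the LLM config in practice
    "messages": [build_vqa_message("What is in this picture?",
                                   "https://example.com/cat.png")],
}
print(json.dumps(payload, indent=2))
```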
The container.yaml file manages dependencies and settings for the different components of the system, including Conductor connections, Redis connections, and other service configurations. To set up your configuration:

1. Generate the container.yaml file:

   ```bash
   python compile_container.py
   ```

   This will create a container.yaml file with default settings under examples/step1_simpleVQA.

2. Configure your LLM settings in configs/llms/gpt.yml:

   - Set your OpenAI API key or compatible endpoint through environment variables or by directly modifying the yml file:

     ```bash
     export custom_openai_key="your_openai_api_key"
     export custom_openai_endpoint="your_openai_endpoint"
     ```

   - Configure other model settings such as temperature as needed, through environment variables or by directly modifying the yml file

3. Update settings in the generated container.yaml:

   - Modify the Redis connection settings:
     - Set the host, port, and credentials for your Redis instance
     - Configure both the redis_stream_client and redis_stm_client sections
   - Update the Conductor server URL under the conductor_config section
If you encounter issues:

- Verify Redis is running and accessible
- Check that your OpenAI API key is valid
- Ensure all dependencies are installed correctly
- Review logs for any error messages