diff --git a/genai-smart-apps/1-overview-and-highlights/overview-and-highlights.md b/genai-smart-apps/1-overview-and-highlights/overview-and-highlights.md
new file mode 100644
index 000000000..e11df71c6
--- /dev/null
+++ b/genai-smart-apps/1-overview-and-highlights/overview-and-highlights.md
@@ -0,0 +1,33 @@
+# Introduction
+
+In today's digital age, companies are leveraging advanced technologies like Generative AI to enhance their business operations and customer interactions. Whether it's automating customer service, generating insights from data, or creating engaging content, Generative AI is transforming the way businesses operate.
+
+**Workshop Description:** In this session, we will explore how Oracle Cloud Infrastructure Generative AI can be embedded into your applications to solve real business challenges. From document summarization to natural language conversations grounded in documents, website content creation, and transcription of videos, OCI Generative AI offers a versatile set of tools to improve productivity and efficiency. Discover how to leverage OCI Generative AI and open frameworks to build smarter, more efficient applications.
+
+## About this Workshop
+
+In this workshop, you will learn how to use OCI Generative AI to enhance your applications and solve practical business problems.
+
+**Estimated Workshop Time:** 45 minutes
+
+### Objectives
+
+In this workshop, you will learn how to:
+
+* Integrate OCI Generative AI into your applications.
+* Use document summarization capabilities to condense large volumes of information.
+* Implement natural language processing to create conversational interfaces.
+* Generate and manage website content using AI.
+* Convert YouTube video content into text through transcription.
+* Leverage open frameworks and OCI services to boost application efficiency.
+ +## Learn More + +* [OCI Generative AI overview](https://www.oracle.com/artificial-intelligence/generative-ai/) +* [OCI Generative AI service documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm) + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea , AI and App Integration Specialist Leader + +**Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 diff --git a/genai-smart-apps/2-setup/images/addApiKey.png b/genai-smart-apps/2-setup/images/addApiKey.png new file mode 100644 index 000000000..1173ab2b5 Binary files /dev/null and b/genai-smart-apps/2-setup/images/addApiKey.png differ diff --git a/genai-smart-apps/2-setup/images/configFile.png b/genai-smart-apps/2-setup/images/configFile.png new file mode 100644 index 000000000..1d60b429f Binary files /dev/null and b/genai-smart-apps/2-setup/images/configFile.png differ diff --git a/genai-smart-apps/2-setup/images/configPreview.png b/genai-smart-apps/2-setup/images/configPreview.png new file mode 100644 index 000000000..c76381bb5 Binary files /dev/null and b/genai-smart-apps/2-setup/images/configPreview.png differ diff --git a/genai-smart-apps/2-setup/images/downloadKey.png b/genai-smart-apps/2-setup/images/downloadKey.png new file mode 100644 index 000000000..f856aa9e4 Binary files /dev/null and b/genai-smart-apps/2-setup/images/downloadKey.png differ diff --git a/genai-smart-apps/2-setup/images/userProfile.png b/genai-smart-apps/2-setup/images/userProfile.png new file mode 100644 index 000000000..75792081d Binary files /dev/null and b/genai-smart-apps/2-setup/images/userProfile.png differ diff --git a/genai-smart-apps/2-setup/setup.md b/genai-smart-apps/2-setup/setup.md new file mode 100644 index 000000000..19833aafc --- /dev/null +++ b/genai-smart-apps/2-setup/setup.md @@ -0,0 +1,154 @@ +# Setup OCI-cli + +## Introduction + +In this lab, we will go over the steps to install and configure OCI-cli, a command-line interface tool for managing Oracle Cloud resources. By the end of this guide, you should be able to access and manage your Oracle Cloud tenant from your command line. + +Estimated Time: 15 minutes + +## Objectives + +By the end of this lab, you will have: + +- Installed OCI-cli on your local machine +- Created an API key for your user +- Configured the OCI-cli with your API credentials +- Tested the CLI connection to your Oracle Cloud tenant + +## Prerequisites + +- An Oracle Cloud account +- Administrative access to the tenant + +## Task 1: Install OCI-cli + +The installation process for OCI-cli varies depending on your operating system. Below are the instructions for macOS and Linux: + +### macOS + +Open your terminal and run the following command: + +``` + +brew install oci-cli + +``` + +### Linux + +Open your terminal and run the bash command with the script from the OCI-cli GitHub repository: + +``` + +bash -c "$(curl -L )" + +``` + +For other operating systems or more detailed installation instructions, refer to the official documentation: [OCI CLI Quickstart](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm). + +## Task 2: Create an API Key + +To authenticate with OCI-cli, you need to create an API key for your user. Follow these steps: + +1. Open the OCI console and click on your profile icon in the top-right corner. + +2. Click on your user ID to open the user profile page. + +![Open User Profile Page](./images/userProfile.png) + +3. 
In the user profile, navigate to "User Settings" and select "API Keys". + +4. Click on the "Add API Key" button. + +![Add API key](./images/addApiKey.png) + +5. A popup will appear. Download both the private and public key, and then click "Add". + +![Download Key](./images/downloadKey.png) + +6. A window will display the API key details. Copy the contents of the text box and click "Close". + +![Config File](./images/configFile.png) + +## Task 3: Configure OCI-cli + +Now, we will set up the configuration file for OCI-cli: + +1. Create a hidden directory (if it doesn't exist): + + ``` + + mkdir ~/.oci + <\copy> + ``` + +2. Create a config file inside the directory: + + ``` + + touch ~/.oci/config + + ``` + +3. Move the downloaded SSH keys to the `.oci` directory and rename them: + + ``` + + mv ssh-key-2022-08-16.key ~/.oci/ssh-key.key + mv ssh-key-2020-08-16.key.pub ~/.oci/ssh-key.key.pub + + ``` + +4. Open the config file and paste the content you copied from the API key details: + + ``` + + vi ~/.oci/config + + ``` + +5. Modify the last field of the config file using the absolute path of the private key. + +Your config file should look similar to the image below: + +![Config File Preview](./images/configPreview.png) + +## Task 4: Test the Connection + +To verify that OCI-cli is set up correctly, run the following command to list the regions of your Oracle Cloud tenant: + +``` + +oci iam region list + +``` + +If the command executes successfully and displays a list of regions, your OCI-cli is configured correctly. + +## Download files + +download the codes from [here](https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frpj5kvxryk1/b/ocw/o/OraclecloudWorld.zip) and unzip it. + +Run the below command to install all the dependencies + ``` + +pip install -r requirements.txt +<\copy> + ``` +Once the dependencies are loaded run the below command to launch the app + ``` + +streamlit run ociChat.py +<\copy> + ``` +## Learn More + +- [OCI CLI Installation](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm) +- [Managing API Keys]() + + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea , AI and App Integration Specialist Leader + +**Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file diff --git a/genai-smart-apps/3-oci-genai-chat-application/oci-genai-chat-application.md b/genai-smart-apps/3-oci-genai-chat-application/oci-genai-chat-application.md new file mode 100644 index 000000000..db5ed36cb --- /dev/null +++ b/genai-smart-apps/3-oci-genai-chat-application/oci-genai-chat-application.md @@ -0,0 +1,304 @@ +# **Oracle Gen AI Chat Application** + +## Introduction + +This lab will guide you through the process of setting up and running the Oracle Gen AI Chat Application. We will cover the requirements, installation, and basic usage of the application. By the end of this lab, you should be able to interact with the chat interface and explore its features. + +Estimated Time: 20 minutes + +## Objectives + +In this lab, you will: + +- Understand the requirements for running the Oracle Gen AI Chat Application. +- Install the necessary Python packages and create the required configuration file. +- Run the Streamlit app and interact with the chat interface. +- Explore the available commands and functionality of the application. + +## Prerequisites + +Before beginning this lab, ensure you have the following: + +- Python 3.7 or higher installed on your system. 
+- Access to an Oracle Cloud Infrastructure (OCI) account with permissions to use the Generative AI service. +- Basic knowledge of Python programming and Streamlit framework. + +## Task 1: Understanding the Application + +The Oracle Gen AI Chat Application is a Streamlit-based chat interface that leverages Oracle's Generative AI technology. It provides a user-friendly way to interact with a pre-trained language model and perform various tasks. The application supports basic commands for managing conversation history and continuing prompts. + +## Task 2: Installation and Setup + +### Requirements + +To run the Oracle Gen AI Chat Application, you need the following: + +- Python 3.7 or higher +- Streamlit: A popular framework for building data apps +- LangChain: A library for building applications with Large Language Models (LLMs) +- Access to OCI and Generative AI models + +### Installation + +1. **Install Python packages:** + + ``` + + pip install streamlit langchain langchain_community + <\copy> + ``` + +2. **Create a configuration file:** Create a `config.py` file in the `pages/utils` directory with the following content: + + ``` + + ENDPOINT = "https://inference.generativeai.eu-frankfurt-1.oci.oraclecloud.com" # Replace with your service endpoint + EMBEDDING_MODEL = "cohere.command" # Replace with your embedding model ID + GENERATE_MODEL = "cohere.command-r-plus" # Replace with your generative model ID + COMPARTMENT_ID = "ocid1.compartment.oc1..example" # Replace with your compartment OCID + <\copy> + ``` + +## Task 3: Running the Application + +### Step 1: Running the Streamlit App + +To run the application, use the following command: + +``` + +streamlit run app.py +<\copy> +``` + +### Step 2: Accessing the Application + +Open your web browser and navigate to `http://localhost:8501` to access the Oracle Gen AI Chat Application. + +## Task 4: Exploring the Application + +### Main Page Setup + +The application sets up a simple interface with a header, sidebar, and a chat input box. The sidebar contains a success message and a style configuration. + +### Language Model Initialization + +The application initializes a `ChatOCIGenAI` language model from the `langchain_community` package using the provided configuration settings. This model is responsible for generating responses based on user input. + +### Conversation Chain + +A `ConversationChain` is created to manage the interaction with the LLM. This chain utilizes `ConversationSummaryMemory` to store and summarize the conversation history. + +### Utility Functions + +#### `timeit(func)` + +This is a decorator function used to measure the execution time of other functions. + +#### `prompting_llm(prompt, _chain)` + +This function sends a prompt to the LLM and retrieves the response. It also prints the prompt and response for debugging purposes and displays a spinner while waiting for the LLM to respond. + +#### `commands(prompt, last_prompt, last_response)` + +This function handles specific commands such as `/continue`, `/history`, `/repeat`, and `/help`. It processes the prompt and generates an appropriate response based on the command. + +### Chat History Management + +The application manages chat history using `st.session_state` to store messages, conversation history, last response, and last prompt. This ensures that the chat history persists across different sessions. + +### User Interaction + +The application reacts to user input from the chat interface. 
If the input starts with a slash `/`, it is treated as a command and processed using the `commands` function. Otherwise, the prompt is sent to the LLM using the `prompting_llm` function. The user's input and the assistant's response are displayed in the chat interface and added to the chat history. + +### Available Commands + +- **/continue**: Continues the last response based on the previous prompt and response. +- **/history**: Displays the current conversation history summary. +- **/repeat**: Repeats the last response from the assistant. +- **/help**: Provides a list of available commands. + +## Task 5: Code Explanation + +### Imports and Configuration + +The application begins by importing the necessary libraries and setting up configuration parameters. + +``` python +import streamlit as st +import langchain +from langchain.chains import ConversationChain +from langchain.chains.conversation.memory import ConversationSummaryMemory +from typing import Optional, List, Mapping, Any +from io import StringIO +import datetime +import functools +from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI +import pages.utils.config as config +from pages.utils.style import set_page_config + +set_page_config() + +endpoint = config.ENDPOINT +embeddingModel = config.EMBEDDING_MODEL +generateModel = config.GENERATE_MODEL +compartment_id = config.COMPARTMENT_ID +``` + +### Timer Decorator + +A decorator function, `timeit`, is defined to measure the execution time of other functions. + +```python +def timeit(func): + @functools.wraps(func) + def new_func(*args, **kwargs): + start_time = datetime.datetime.now() + result = func(*args, **kwargs) + elapsed_time = datetime.datetime->now() - start_time + print('function [{}] finished in {} ms'.format(func.__name__, str(elapsed_time))) + return result + return new_func +``` + +### Streamlit Page Setup + +The main page is configured with a header, sidebar, and chat input interface. + +```python +st.header("How can I help you today? ") +st.info('Select a page on the side menu or use the chat below.', icon="📄") +with st.sidebar.success("Choose a page above"): + st.sidebar.markdown( + """ + + """, + unsafe_allow_html=True, + ) +``` + +### Instantiate Chat LLM and Conversation Chain + +The language model (LLM) is instantiated using Oracle Gen AI, and a conversation chain is set up to handle user interactions. + +```python +llm = ChatOCIGenAI( + model_id=generateModel, + service_endpoint=endpoint, + compartment_id=compartment_id, + model_kwargs={"temperature": 0.0, "max_tokens": 500}, +) +chain = ConversationChain(llm=llm, memory=ConversationSummaryMemory(llm=llm, max_token_limit=500), verbose=False) +``` + +### Prompting Function + +The `prompting_llm` function handles LLM prompting and utilizes the `timeit` decorator to measure execution time. + +```python +@timeit +def prompting_llm(prompt, _chain): + with st.spinner(text="Prompting LLM..."): + print('\n# ' + datetime.datetime.now().astimezone().isoformat() + ' =====================================================') + print("Prompt: " + prompt + "\n") + response = _chain.invoke(prompt).get("response") + print("-------------------\nResponse: " + response + "\n") + return response +``` + +### Command Handling + +The `commands` function processes special commands to extend the chat functionality. 
+ +```python +@timeit +def commands(prompt, last_prompt, last_response): + command = prompt.split(" ")[0] + if command == "/continue": + prompt = "Given this question: " + last_prompt + ", continue the following text you already started: " + last_response.rsplit("\n\n", 3)[0] + response = prompting_llm(prompt, chain) + return response + elif command == "/history": + try: + history = chain.memory.load_memory_variables({"history"}).get("history") + if history == "": + return "No history to display" + else: + return "Current History Summary: \n" + history + except: + return "The history was cleared" + elif command == "/repeat": + return last_response + elif command == "/help": + return "Command list available: /continue, /history, /repeat, /help" +``` + +### Chat History Initialization + +The application initializes chat history to manage context across sessions. + +```python +if "messages" not in st.session_state: + st.session_state.messages = [] +if "history" not in st.session_state: + st.session_state.history = [] +else: + chain.memory = st.session_state.history +if "last_response" not in st.session_state: + st.session_state.last_response = "" +last_response = "" +else: + last_response = st.session_state.last_response +if "last_prompt" not in st.session_state: + st.session_state.last_prompt = "" +last_prompt = "" +else: + last_prompt = st.session_state.last_prompt +for message in st.session_state.messages: + with st.chat_message(message["role"]): + st.markdown(message["content"]) + st.divider() +``` + +### User Input and Response Handling + +The application reacts to user input, either processing commands or interacting with the LLM. + +```python +if prompt := st.chat_input("What is up?"): + st.chat_message("user").markdown(prompt) + st.session_state.messages.append({"role": "user", "content": prompt}) + if prompt.startswith("/"): + response = commands(prompt, last_prompt, last_response) + with st.chat_message("assistant", avatar="🔮"): + st.markdown(response) + else: + response = prompting_llm(prompt, chain) + with st.chat_message("assistant"): + st.markdown(response) + st.session_state.messages.append({"role": "assistant", "content": response}) + try: + st.session_state.history = chain.memory + st.session_state.last_prompt = prompt + st.session_state.last_response = response + except: + pass +``` + +## Learn More + +To learn more about the Oracle Gen AI Chat Application and its capabilities, you can explore the following resources: + +- [Oracle Cloud Infrastructure Documentation](https://docs.oracle.com/en-us/iaas/Content/Home.htm) +- [Streamlit Documentation](https://docs.streamlit.io/) +- [LangChain Documentation](https://langchain.readthedocs.io/) + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea , AI and App Integration Specialist Leader + +**Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file diff --git a/genai-smart-apps/4-chat-with-pdf/chat-with-pdf.md b/genai-smart-apps/4-chat-with-pdf/chat-with-pdf.md new file mode 100644 index 000000000..8e5567f8d --- /dev/null +++ b/genai-smart-apps/4-chat-with-pdf/chat-with-pdf.md @@ -0,0 +1,176 @@ +# Documentation for OCI Gen Ai PDF Chat Application + +## Overview + +This documentation offers a comprehensive explanation of the Streamlit application, which empowers users to engage with their PDF files via a chat interface. This functionality is facilitated by LangChain and Oracle Cloud AI services. 
The application extracts text from PDFs, processes it into manageable chunks, generates a vector store for efficient retrieval, and establishes a conversational chain to address user queries. + +## Features + +- Upload and process multiple PDF files +- Extract text from uploaded PDFs +- Chunk extracted text for streamlined processing +- Create a vector store using either OracleDB or Qdrant +- Set up a conversational retrieval chain leveraging Oracle Cloud Generative AI +- Interactive chat interface for querying PDF content + +## Installation + +### Prerequisites + +- Python 3.7 or higher +- Streamlit +- PyPDF2 +- LangChain +- Oracle Cloud AI services credentials +- OracleDB or Qdrant for vector storage + +### Install Dependencies + +``` +pip install streamlit PyPDF2 langchain langchain_community oracledb +``` + +### Running the Application + +1. Copy the provided code from a file named `chatWithPDF.py`. +2. Ensure your Oracle Cloud AI services credentials and configurations are set in the `config.py` file. +3. Execute the Streamlit app using the command: + +``` +streamlit run chatWithPDF.py +``` + +## User Interaction + +- **Upload PDFs**: Users can upload one or multiple PDF files via the sidebar. +- **Ask Questions**: Once PDFs are processed, users can input their questions into the text box at the bottom of the main page. + +## Code Summary + +### Configuration + +The application relies on a `config.py` file for various settings, including database connection details and model identifiers. + +### Extract Text from PDFs + +The `get_pdf_text` function utilizes PyPDF2 to extract text from uploaded PDF files: + +```python +def get_pdf_text(pdf_files): + ... + for pdf_file in pdf_files: + reader = PdfReader(pdf_file) + for page in reader.pages: + text += page.extract_text() + ... +``` + +### Text Chunking + +The `get_chunk_text` function employs LangChain's `CharacterTextSplitter` to split extracted text into smaller chunks: + +```python +def get_chunk_text(text): + text_splitter = CharacterTextSplitter( + ... + chunk_size=1000, + chunk_overlap=200, + ... + ) + chunks = text_splitter.split_text(text) + return chunks +``` + +### Create Vector Store + +The `get_vector_store` function establishes a vector store from text chunks using either OracleDB or Qdrant, based on the configuration: + +```python +def get_vector_store(text_chunks): + embeddings = OCIGenAIEmbeddings(...) + documents = [Document(page_content=chunk) for chunk in text_chunks] + + if config.DB_TYPE == "oracle": + connection = oracledb.connect(...) + vectorstore = OracleVS.from_documents( + documents=documents, + embedding=embeddings, + ... + ) + else: + vectorstore = Qdrant.from_documents( + documents=documents, + embedding=embeddings, + ... + ) + return vectorstore +``` + +### Conversational Chain + +The `get_conversation_chain` function sets up a conversational retrieval chain using the Oracle Cloud Generative AI model: + +```python +def get_conversation_chain(vector_store): + llm = ChatOCIGenAI(...) 
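    # The "..." above is elided in this excerpt; in this lab the model is typically built from
    # the config.py values described earlier (GENERATE_MODEL, ENDPOINT, COMPARTMENT_ID), plus
    # optional model_kwargs such as temperature and max_tokens. Treat these names as
    # assumptions rather than the app's exact arguments.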
+ memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True) + conversation_chain = ConversationalRetrievalChain.from_llm( + llm=llm, + retriever=vector_store.as_retriever(), + memory=memory + ) + return conversation_chain +``` + +### Handle User Input + +The `handle_user_input` function processes user questions and displays the chat history in the Streamlit app: + +```python +def handle_user_input(question): + response = st.session_state.conversation({'question': question}) + st.session_state.chat_history = response['chat_history'] + ... +``` + +### Main Function + +The main function configures the Streamlit interface, handles file uploads, and initializes the conversation chain: + +```python +def main(): + ... + st.set_page_config(page_title='Chat with Your own PDFs', page_icon=':books:') + ... + st.header('Chat with Your own PDFs :books:') + + question = st.text_input("Ask anything to your PDF: ") + if question: + handle_user_input(question) + + with st.sidebar: + st.subheader("Upload your Documents Here: ") + pdf_files = st.file_uploader("Choose your PDF Files and Press OK", type=['pdf'], accept_multiple_files=True) + ... + +if __name__ == '__main__': + main() +``` + +## Explanation: + +1. **Embeddings Setup**: Initialize embeddings using the Oracle Cloud Generative AI model. +2. **Document Creation**: Convert each text chunk into a `Document` object. +3. **Database Connection**: Establish a connection to either OracleDB or Qdrant, depending on the configured database type. +4. **Vector Store Creation**: Create the vector store from the documents, utilizing the selected database, embedding model, and distance strategy. + +## Conclusion + +This documentation offers a thorough guide to understanding, setting up, and utilizing the Streamlit PDF chat application. By adhering to these instructions, users can recreate and customize the application to align with their specific requirements. + + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea , AI and App Integration Specialist Leader + +**Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file diff --git a/genai-smart-apps/5-pdf-comparison/pdf-comparison.md b/genai-smart-apps/5-pdf-comparison/pdf-comparison.md new file mode 100644 index 000000000..d002942bb --- /dev/null +++ b/genai-smart-apps/5-pdf-comparison/pdf-comparison.md @@ -0,0 +1,278 @@ +# Documentation for PDF Comparison Using Oracle Gen AI + +## Overview + +This Python application enables users to compare the content of multiple PDF files using Large Language Models (LLMs). Users can upload PDF files, ask specific questions, and receive comparative answers from each PDF. The app utilizes Streamlit for the user interface, LangChain for LLM orchestration, and Oracle Generative AI for the underlying AI services. + +## Features and Capabilities + +- **PDF Upload and Question Input**: Users can upload multiple PDF files and input up to four questions to query the content of these PDFs. +- **Embeddings Creation and Storage**: The application extracts text from the uploaded PDFs, creates embeddings using Oracle Generative AI, and stores these embeddings in a vector database (either OracleDB or Qdrant). +- **Query Analysis**: The application uses the stored embeddings to perform query analysis, retrieving relevant information from the PDFs based on the input questions. 
+- **Results Display**: The results of the queries are displayed in a structured table format, showing the responses from each PDF. + +## How It Works + +1. **User Interface**: The Streamlit-based user interface allows users to interact with the application by uploading PDFs and entering questions. +2. **Embeddings Creation**: Uploaded PDFs are processed to extract text, which is then converted into embeddings using Oracle Generative AI. +3. **Vector Database**: The embeddings are stored in a vector database for efficient retrieval. The application supports both OracleDB and Qdrant as storage options. +4. **LLM Integration**: The application integrates with Oracle's LLM to analyze the queries and retrieve relevant information from the PDFs. +5. **Data Display**: The application processes the query responses and displays them in a table format, highlighting the source document for each response. + +## Installation + +To install and run this application, follow these steps: + +1. **Install dependencies**: + ``` + pip install -r requirements.txt + ``` +2. **Set up configuration**: Ensure that the configuration settings for Oracle Generative AI and the embedding model are correctly set in the `config.py` file located in the `pages/utils` directory. +3. **Run the application**: + ``` + streamlit run docCompare.py + ``` + +## User Interaction + +1. **Upload PDF Files**: Use the file uploader to select and upload multiple PDF files. +2. **Input Questions**: Enter up to four questions in the provided text input fields. +3. **Submit**: Click the "Start Processing" button to begin the analysis. +4. **View Results**: The app will display the responses from each PDF in a table format, showing how each document answers the questions. + +## Code Summary + +Let's break down the main file `docCompare.py` and explain its logic and functionality in detail. + +### Main File: docCompare.py + +This file contains the main logic for the Streamlit application that compares PDF files using Oracle Generative AI. + +#### 1. Importing Libraries and Setting Up the Page + +```python +import pandas as pd +import streamlit as st +import streamlit.components.v1 as components +from PIL import Image +from pages.utils.lang_utils import ask_to_all_pdfs_sources, create_qa_retrievals + +st.set_page_config(...) +``` + +- **Libraries**: Imports necessary libraries such as pandas for data manipulation, streamlit for creating the web interface, components for embedding HTML/CSS, and PIL for handling images. +- **Configuration**: Sets up the Streamlit page configuration, including the page title, icon, layout, and initial sidebar state. + +#### 2. Sidebar Contents + +```python +with st.sidebar: + st.markdown( + """ + About + + This app is an pdf comparison (LLM-powered), built using: + + ... + """ + ) + st.write("Made with Oracle Generative AI") +``` + +- **Title and Description**: Displays the title and description of the app in the sidebar. +- **Libraries and AI**: Lists the technologies used to build the app, including Streamlit, LangChain, and Oracle Generative AI. +- **Acknowledgment**: Credits Oracle Generative AI for its contribution. + +#### 3. Main Title and Form for User Input + +``` + +Title_html = """ + + """ +components.html(Title_html) +<\copy> +``` + +- **HTML/CSS for Title**: Creates a styled title using HTML and CSS. The title has a rainbow gradient animation. 
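The markup assigned to `Title_html` is not reproduced in this excerpt. A minimal, illustrative stand-in for an animated-gradient title (an assumption for demonstration, not the workshop's exact HTML/CSS) could look like the following:

```python
import streamlit.components.v1 as components

# Hypothetical stand-in for the stripped Title_html markup: a heading with a looping
# rainbow-gradient animation, rendered inside the Streamlit page via components.html.
Title_html = """
<style>
  .title h1 {
    font-family: sans-serif;
    text-align: center;
    background: linear-gradient(90deg, red, orange, yellow, green, blue, violet);
    background-size: 600% 600%;
    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
    animation: rainbow 6s ease infinite;
  }
  @keyframes rainbow {
    0%   { background-position: 0% 50%; }
    50%  { background-position: 100% 50%; }
    100% { background-position: 0% 50%; }
  }
</style>
<div class="title"><h1>PDF Comparison - LLM</h1></div>
"""
components.html(Title_html, height=120)
```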
+ +```python +with st.form("basic_form"): + uploaded_files = st.file_uploader( + "Upload files", + type=["pdf"], + key="file_upload_widget", + accept_multiple_files=True, + ) + question_1 = st.text_input("Question 1", key="1_question") + question_2 = st.text_input("Question 2", key="2_question") + question_3 = st.text_input("Question 3", key="3_question") + question_4 = st.text_input("Question 4", key="4_question") + submit_btn = st.form_submit_button("Start Processing") +``` + +- **File Uploader**: Allows users to upload multiple PDF files. +- **Text Inputs for Questions**: Provides input fields for up to four questions. +- **Submit Button**: A button to start processing the uploaded PDFs and input questions. + +#### 4. Handling User Input and Processing + +```python +if submit_btn: + if question_1 == "": + st.warning("Give at least one question") + st.stop() + if uploaded_files is None: + st.warning("Upload at least 1 PDF file") + st.stop() + + all_questions = [question_1, question_2, question_3, question_4] + + with st.spinner("Creating embeddings...."): + try: + ... + st.exception(e) + st.stop() + st.success("Done!", icon="✅") + + with st.spinner("Doing Analysis...."): + try: + data = [] + for question in st.session_state.questions: + if question == "": + ... + with st.spinner("Doing Analysis.."): + try: + df = pd.DataFrame(st.session_state.data) + ... + except Exception as e: + st.exception(e) + st.stop() + st.balloons() +``` + +- **Form Submission Handling**: When the submit button is clicked: + - **Validation**: Checks if at least one question is provided and at least one PDF file is uploaded. + - **Creating Embeddings**: Calls `create_qa_retrievals` to create embeddings for the uploaded PDFs. + - **Analyzing Questions**: Iterates through the questions, uses `ask_to_all_pdfs_sources` to get answers from each PDF, and stores the results. + - **Displaying Results**: Converts the results into a DataFrame, removes duplicates if any, and displays the responses in a table. + +## Utility File: lang_utils.py + +This file contains utility functions to handle PDF parsing, embeddings creation, and querying the LLM. + +### 1. Importing Libraries and Configuration + +```python +import streamlit as st +from pages.utils.pdf_parser import PDFParser +from langchain.chains import RetrievalQA +import os +from langchain_community.embeddings import OCIGenAIEmbeddings +import oracledb +import pages.utils.config as config +from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI +from langchain_community.vectorstores.oraclevs import OracleVS +from langchain_community.vectorstores.utils import DistanceStrategy +from langchain.vectorstores import Qdrant +``` + +- **Libraries**: Imports necessary libraries for handling PDFs, embeddings, vector stores, and LLM interactions. +- **Configuration**: Imports configuration settings for Oracle Generative AI and database connections. + +### 2. Parsing PDF Files + +```python +def get_text_from_pdf(pdf_path): + parser = PDFParser(config.COMPARTMENT_ID) + docs = parser.parse_pdf(pdf_path) + return docs +``` + +- **PDF Parsing**: Uses `PDFParser` to extract text content from a given PDF file. + +```python +def get_text_splitter(pdf_file): + with open("temp_pdf.pdf", "wb") as f: + f.write(pdf_file.getbuffer()) + text = get_text_from_pdf("temp_pdf.pdf") + os.remove("temp_pdf.pdf") + return text +``` + +- **Temporary File Handling**: Saves the uploaded PDF file temporarily to extract text and then removes the temporary file. + +### 3. 
Creating Embeddings and QA Retrievals + +```python +def create_qa_retrievals(pdf_file_list: list): + qa_retrievals = [] + for pdf in pdf_file_list: + texts = get_text_splitter(pdf) + text_strings = [doc.page_content for doc in texts] + metadatas = [{"source": f"{i}-{pdf.name}", "topics": doc.metadata['topics'], "page": doc.metadata['page']} for i, doc in enumerate(texts)] + embeddings = OCIGenAIEmbeddings(...) + if config.DB_TYPE == "oracle": + try: + connection = oracledb.connect(user=...) + except Exception as e: + print("Connection to OracleDB failed!") + return + else: + db = Qdrant.from_texts(... ) + st.info(f"Saving {pdf.name} to vector DB") + llm = ChatOCIGenAI(... ) + qa_retrieval = RetrievalQA.from_chain_type( + llm=llm, ... ) + qa_retrievals.append(qa_retrieval) + return qa_retrievals +``` + +- **QA Retrievals Creation**: + - **Text Splitting**: Extracts text from each PDF file. + - **Embeddings**: Uses `OCIGenAIEmbeddings` to create embeddings for the extracted text. + - **Vector Store**: Saves the embeddings to either OracleDB or Qdrant, based on the configuration. + - **LLM**: Initializes the LLM with `ChatOCIGenAI`. + - **QA Chain**: Creates a QA retrieval chain and appends it to the list. + +### 4. Querying the PDFs + +```python +def ask_to_all_pdfs_sources(query: str, qa_retrievals: list): + data = [] + for i, qa in enumerate(qa_retrievals): + response = qa.run(query) + for doc in response['source_documents']: + doc_name = doc.metadata['source'] + data.append( + { + "query": query, + "response": response["result"], + "source_document": doc_name, + } + ) + return data +``` + +- **Querying PDFs**: + - Iterates through the QA retrievals, runs the query on each, and collects the responses. + - Stores the query, response, and source document information in a list. + +## Conclusion + +The PDF Comparison - LLM application showcases the power of combining Streamlit, LangChain, and Oracle Generative AI to create a robust tool for document analysis. By leveraging advanced embeddings and vector databases, the application provides accurate and efficient comparisons of PDF content, making it valuable for various use cases in research, legal, and business domains. The user-friendly interface ensures that even non-technical users can benefit from the capabilities of large language models and AI-driven document analysis. + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea, AI and App Integration Specialist Leader + +**Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file diff --git a/genai-smart-apps/6-document-summarization/document-summarization.md b/genai-smart-apps/6-document-summarization/document-summarization.md new file mode 100644 index 000000000..31fbf0aa3 --- /dev/null +++ b/genai-smart-apps/6-document-summarization/document-summarization.md @@ -0,0 +1,178 @@ +# Documentation for Document Summarization App + +## Overview + +This Streamlit application enables users to upload PDF files and generate summaries using a variety of language models (LLMs) from Oracle Cloud AI services. It offers a customizable summarization process by allowing users to select different summarization strategies and adjust parameters such as chunk size, chunk overlap, temperature, and maximum token output. + +## Features + +- Upload PDF documents for summarization. +- Choose from different LLMs, including cohere.command-r-16k, cohere.command-r-plus, and meta.llama-3-70b-instruct. 
+- Customize summarization prompts and select from various summarization strategies (map_reduce, stuff, and refine). +- Interactive interface for viewing and summarizing document content. + +## Installation + +### Prerequisites + +- Python 3.7 or later +- Streamlit +- pypdf +- langchain +- python-dotenv + +### Install Dependencies + +``` +pip install streamlit pypdf langchain python-dotenv +``` + +### Running the Application + +1. Copy the provided code from a file named `documentSummarization.py`. +2. Ensure your Oracle Cloud AI services credentials and other configurations are set in the `config.py` file. +3. Run the Streamlit app using the following command: + +``` +streamlit run documentSummarization.py +``` + +## User Interaction + +- **Select LLM**: Choose an LLM from the sidebar options. +- **Select Chain Type**: Choose a summarization strategy. +- **Adjust Parameters**: Tune parameters like chunk size, chunk overlap, temperature, and maximum token output. +- **Enter Summary Prompt**: Provide a custom prompt for document summarization. +- **Upload PDF**: Upload a PDF document. +- **Generate Summary**: Click the "Summarize" button to generate the summary. + +## Code Summary + +### Environment Setup + +The code begins by importing necessary libraries and setting up the Streamlit environment. It loads configuration details and configures the Oracle Cloud Generative AI model using the `ChatOCIGenAI` class. + +```python +import streamlit as st +import os +from langchain.document_loaders import PyPDFLoader +from langchain.prompts import PromptTemplate +from langchain.text_splitter import RecursiveCharacterTextSplitter +from langchain.chains.summarize import load_summarize_chain +from pypdf import PdfReader +from io import BytesIO +from typing import Any, Dict, List +import re +from langchain.docstore.document import Document +from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI +import pages.utils.config as config +from pages.utils.style import set_page_config + +set_page_config() +``` + +### PDF Parsing + +The `parse_pdf` function reads and extracts text from uploaded PDF files using the `pypdf` library. It cleans and formats the extracted text for further processing. + +```python +@st.cache_data +def parse_pdf(file: BytesIO) -> List[str]: + pdf = PdfReader(file) + output = [] + for page in pdf.pages: + text = page.extract_text() + text = re.sub(r"(\w+)-\n(\w+)", r"\1\2", text) + text = re.sub(r"(?\ List[Document]: + if isinstance(text, str): + text = [text] + page_docs = [Document(page_content=page) for page in text] + for i, doc in enumerate(page_docs): + doc.metadata["page"] = i + 1 + doc_chunks = [] + # Code snippet for doc_chunks generation + return doc_chunks +``` + +### Custom Summary Function + +The `custom_summary` function generates document summaries using the specified LLM and summarization strategy. It configures prompt templates and loads the appropriate summarization chain. + +```python +def custom_summary(docs, llm, custom_prompt, chain_type, num_summaries): + custom_prompt = custom_prompt + ":\n {text}" + COMBINE_PROMPT = PromptTemplate(template=custom_prompt, input_variables=["text"]) + MAP_PROMPT = PromptTemplate(template="Summarize:\n{text}", input_variables=["text"]) + if chain_type == "map_reduce": + chain = load_summarize_chain(...) 
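        # The elided arguments above would typically pass the prompts defined earlier, e.g.
        #   load_summarize_chain(llm, chain_type="map_reduce",
        #                        map_prompt=MAP_PROMPT, combine_prompt=COMBINE_PROMPT)
        # This call shape is an assumption based on the MAP_PROMPT/COMBINE_PROMPT templates
        # above, not necessarily the workshop's exact invocation.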
+ else: + chain = load_summarize_chain(llm, chain_type=chain_type) + summaries = [] + for i in range(num_summaries): + summary_output = chain({"input_documents": docs}, return_only_outputs=True)["output_text"] + summaries.append(summary_output) + return summaries +``` + +### Main Streamlit Application + +The `main` function sets up the Streamlit interface, handles user inputs, and displays the results. Users can interact with the app to upload PDFs, select LLMs, adjust parameters, and generate summaries. + +```python +def main(): + st.markdown(hide_streamlit_style, unsafe_allow_html=True) + st.title("Document Summarization App") + llm_name = st.sidebar.selectbox("LLM", ["cohere.command-r-16k", "cohere.command-r-plus", "meta.llama-3-70b-instruct"]) + chain_type = st.sidebar.selectbox("Chain Type", ["map_reduce", "stuff", "refine"]) + chunk_size = ... + chunk_overlap = ... + user_prompt = ... + temperature = ... + max_token = ... + opt = "Upload-own-file" + pages = None + if opt == "Upload-own-file": + uploaded_file = st.file_uploader("**Upload a Pdf file :**", type=["pdf"])... + llm = ChatOCIGenAI(...) + if st.button("Summarize"): + with st.spinner('Summarizing....'): + result = custom_summary(pages, llm, user_prompt, chain_type, 1) + ... + +if __name__ == "__main__": + main() +``` + +## Summarization Models and Strategies + +### Summarization Models + +- `cohere.command-r-16k`: Suitable for large-scale summarization tasks. +- `cohere.command-r-plus`: Enhanced model with advanced capabilities. +- `meta.llama-3-70b-instruct`: State-of-the-art model for detailed and accurate summarization. + +### Summarization Strategies + +- `map_reduce`: Splits the document into chunks, summarizes each chunk, and then combines the summaries. +- `stuff`: Processes the entire document in a single step. +- `refine`: Iteratively refines the summary for improved accuracy. + +By following the documentation and utilizing the provided code snippets, users can recreate and customize the Document Summarization App to suit their specific requirements. + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea , AI and App Integration Specialist Leader + +**Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file diff --git a/genai-smart-apps/7-web-content-chat-summarize/web-content-chat-summarize.md b/genai-smart-apps/7-web-content-chat-summarize/web-content-chat-summarize.md new file mode 100644 index 000000000..99f579108 --- /dev/null +++ b/genai-smart-apps/7-web-content-chat-summarize/web-content-chat-summarize.md @@ -0,0 +1,186 @@ +# Documentation for Web Content Analysis Application + +## Overview + +This documentation provides a comprehensive guide for setting up and running a Streamlit application that enables users to analyze content from Wikipedia articles or web pages using Oracle Cloud AI services. The application retrieves content, processes it into chunks, generates a vector store for efficient access, and employs a conversational AI model to respond to user queries. + +## Features + +- Fetch and process content from Wikipedia articles or web pages. +- Divide the extracted content into chunks for efficient processing. +- Create a vector store using either OracleDB or Qdrant. +- Leverage Oracle Cloud Generative AI for conversational analysis. +- Provide an interactive user interface for users to ask questions about the content. 
+ +## Installation + +### Prerequisites + +- Python 3.7 or higher +- Streamlit +- requests +- BeautifulSoup +- langchain +- oracledb +- Oracle Cloud AI services credentials +- OracleDB or Qdrant for vector storage + +### Install Dependencies + +To install the required dependencies, run the following command: + +```bash +pip install streamlit requests beautifulsoup4 langchain oracledb +``` + +## Running the Application + +1. Copy the provided code from a file named `webPageChat.py`. +2. Ensure that your Oracle Cloud AI services credentials and other configurations are set in the `config.py` file. +3. Start the Streamlit app by executing the following command: + +```bash +streamlit run webPageChat.py +``` + +## User Interaction + +- **Enter Wikipedia Topic or URL**: Users can input a Wikipedia topic or a URL in the provided text box. +- **Ask Questions**: Once the content is fetched and processed, users can pose questions about the content. +- **Advanced Options**: Users can fine-tune settings such as chunk size, overlap, and the number of top results to retrieve. + +## Code Summary + +### Timer Decorator + +The `timeit` decorator measures the execution time of functions. It wraps a function, records its start time, executes it, and then displays the elapsed time. + +```python +def timeit(func): + ... + return new_func +``` + +### Fetching Article from Wikipedia + +The `fetching_article` function retrieves content from Wikipedia and processes it into chunks. It utilizes `OCIGenAIEmbeddings` to create vector embeddings and stores them in either OracleDB or Qdrant based on the configuration. + +```python +def fetching_article(wikipediatopic, chunk_size, chunk_overlap): + embeddings = OCIGenAIEmbeddings(...) + wikipage = WikipediaQueryRun(...) + text = wikipage.run(wikipediatopic) + text_splitter = CharacterTextSplitter(...) + chunks = text_splitter.split_text(text) + ... + if config.DB_TYPE == "oracle": + ... + knowledge_base = OracleVS.from_texts(...) + else: + knowledge_base = Qdrant.from_texts(...) + return knowledge_base +``` + +This function manages the connection to the database and divides the Wikipedia article text into manageable chunks for analysis. + +### Fetching Content from URL + +The `fetching_url` function fetches and processes content from a URL. It uses BeautifulSoup to extract text from the page, splits the text into chunks, and creates a vector store. + +```python +def fetching_url(userinputquery, chunk_size, chunk_overlap): + embeddings = OCIGenAIEmbeddings(...) + page = requests.get(userinputquery) + soup = BeautifulSoup(page.text, 'html.parser') + text = soup.get_text() + text_splitter = CharacterTextSplitter(...) + chunks = text_splitter.split_text(text) + ... + if config.DB_TYPE == "oracle": + ... + knowledge_base = OracleVS.from_texts(...) + else: + knowledge_base = Qdrant.from_texts(...) + return knowledge_base +``` + +This function ensures that the text from the URL is correctly parsed and stored for subsequent retrieval and analysis. + +### Prompting the LLM + +The `prompting_llm` function prompts the LLM with the user's question and obtains a response. It performs a similarity search to identify relevant chunks, displays the prompt and results, and calculates the prompt length. 
+ +```python +def prompting_llm(user_question, _knowledge_base, _chain, k_value): + with st.spinner(text="Prompting LLM..."): + doc_to_prompt = _knowledge_base.similarity_search(user_question, k=k_value) + docs_stats = _knowledge_base.similarity_search_with_score(user_question, k=k_value) + ... + prompt_len = _chain.prompt_length(docs=doc_to_prompt, question=user_question) + response = _chain.invoke({"input_documents": doc_to_prompt, "question": user_question}, return_only_outputs=True).get("output_text") + return response +``` + +This function ensures that user queries are effectively handled, and relevant answers are provided using the pre-processed text chunks. + +### Chunk Search + +The `chunk_search` function retrieves similar chunks based on the user's question and presents them. It performs a similarity search and formats the results for display. + +```python +def chunk_search(user_question, _knowledge_base, k_value): + with st.spinner(text="Prompting LLM..."): + doc_to_prompt = _knowledge_base.similarity_search(user_question, k=k_value) + docs_stats = _knowledge_base.similarity_search_with_score(user_question, k=k_value) + ... + return result +``` + +This function enables users to view the individual text chunks that are most pertinent to their query. + +### Main Function + +The `main` function sets up the Streamlit app, manages user inputs, and displays responses. It initializes the OCIGenAI model, loads the question-answering chain, and configures the user interface. + +```python +def main(): + llm = ChatOCIGenAI(...) + chain = load_qa_chain(llm, chain_type="stuff") + ... + st.header("Ask any website using Oracle Gen AI") + with st.expander("Advanced options"): + k_value = st.slider(...) + chunk_size = st.slider(...) + chunk_overlap = st.slider(...) + chunk_display = st.checkbox("Display chunk results")... + if userinputquery: + if userinputquery.startswith("http"): + knowledge_base = fetching_url(userinputquery, chunk_size, chunk_overlap) + else: + knowledge_base = fetching_article(userinputquery, chunk_size, chunk_overlap) + ... + user_question = st.text_input("Ask a question about the loaded content:") + promptoption = st.selectbox(...) + ... + if user_question: + response = prompting_llm("This is a web page, based on this text " + user_question.strip(), knowledge_base, chain, k_value) + st.write("_"+user_question.strip()+"_") + st.write(response) + if chunk_display: + chunk_display_result = chunk_search(user_question.strip(), knowledge_base, k_value) + with st.expander("Chunk results"): + st.code(chunk_display_result) +``` + +This function orchestrates the entire process, from content retrieval to processing user queries and presenting results interactively. + +## Conclusion + +This application empowers users to analyze web content using advanced AI capabilities offered by Oracle Cloud. By following the setup instructions and understanding the code structure, you can customize and extend the functionality to suit your specific requirements. 
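As a concrete starting point for that kind of customization, the sketch below isolates the fetch → chunk → embed → index flow described above. The embedding model ID is a placeholder assumption, and the endpoint and compartment OCID are sample values to replace with your own (swap `Qdrant` for `OracleVS` if you use OracleDB):

```python
import requests
from bs4 import BeautifulSoup
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain_community.embeddings import OCIGenAIEmbeddings

def build_knowledge_base(url: str, chunk_size: int = 1000, chunk_overlap: int = 200):
    """Fetch a web page, split it into chunks, and index it in an in-memory Qdrant store."""
    text = BeautifulSoup(requests.get(url).text, "html.parser").get_text()
    chunks = CharacterTextSplitter(
        separator="\n", chunk_size=chunk_size, chunk_overlap=chunk_overlap
    ).split_text(text)
    embeddings = OCIGenAIEmbeddings(
        model_id="cohere.embed-english-v3.0",  # assumed embedding model ID
        service_endpoint="https://inference.generativeai.eu-frankfurt-1.oci.oraclecloud.com",
        compartment_id="ocid1.compartment.oc1..example",  # replace with your compartment OCID
    )
    return Qdrant.from_texts(chunks, embeddings, location=":memory:", collection_name="web_page")

# Example usage (hypothetical query):
# kb = build_knowledge_base("https://www.oracle.com/artificial-intelligence/generative-ai/")
# docs = kb.similarity_search("What is OCI Generative AI?", k=4)
```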
+ + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea , AI and App Integration Specialist Leader + +**Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file diff --git a/genai-smart-apps/8-chat-with-youtube/chat-with-youtube.md b/genai-smart-apps/8-chat-with-youtube/chat-with-youtube.md new file mode 100644 index 000000000..5b50acac2 --- /dev/null +++ b/genai-smart-apps/8-chat-with-youtube/chat-with-youtube.md @@ -0,0 +1,177 @@ +# Documentation for YouTube Video Transcript Analysis Application + +## Overview + +This documentation provides a comprehensive guide for setting up and running a Streamlit application that enables users to analyze YouTube video transcripts using Oracle Cloud AI services. The application fetches video transcripts, processes them into chunks, generates embeddings, and employs a conversational AI model to respond to user queries. + +## Features + +- Fetch and process YouTube video transcripts. +- Split transcripts into chunks for efficient processing. +- Generate embeddings and create a vector store using OracleDB or Qdrant for rapid retrieval. +- Utilize Oracle Cloud Generative AI for conversational analysis. +- Offer an interactive user interface for users to ask questions about the video content. + +## Installation + +### Prerequisites + +- Python 3.7 or later +- Streamlit +- requests +- langchain +- youtube_transcript_api +- Oracle Cloud AI services credentials +- OracleDB or Qdrant for vector storage + +### Install Dependencies + +```bash +pip install streamlit requests langchain youtube-transcript-api oracledb +``` + +### Running the Application + +1. Copy the provided code from a file named `youtubeChat.py`. +2. Ensure your Oracle Cloud AI services credentials and other configurations are set in the `config.py` file. +3. Run the Streamlit app using the following command: + +```bash +streamlit run youtubeChat.py +``` + +## User Interaction + +- **Enter YouTube Video ID/URL**: Users can input a YouTube video ID or URL in the provided text box. +- **Ask Questions**: Once the transcript is fetched and processed, users can type their questions about the video content. +- **Advanced Options**: Users can adjust settings such as chunk size, overlap, and the number of top results to display. + +## Code Summary + +### Fetching YouTube Video ID + +The `fetching_youtubeid` function extracts the YouTube video ID from a URL. + +```python +def fetching_youtubeid(youtubeid): + if "youtu" in youtubeid: + data = re.findall(r"(?:v=\|\\)(\[0-9A-Za-z\_-\]{11}).\*", youtubeid) + youtubeid = data\[0\] + return youtubeid +``` + +### Fetching and Splitting Transcript + +The `fetching_transcript` function fetches and splits the YouTube video transcript into chunks using `YouTubeTranscriptApi` to retrieve the transcript and `CharacterTextSplitter` to create chunks. + +```python +def fetching_transcript(youtubeid, chunk_size, chunk_overlap): + youtubeid = fetching_youtubeid(youtubeid) + transcript = YouTubeTranscriptApi.get_transcript(youtubeid, languages=\['pt', 'en'\]) + formatter = TextFormatter() + text = formatter.format_transcript(transcript) + text_splitter = CharacterTextSplitter... + chunks = text_splitter.split_text(text) + return chunks +``` + +### Creating Vector Store + +The `get_vector_store` function generates embeddings and creates a vector store using OracleDB or Qdrant for efficient retrieval. 
+
+```python
+def get_vector_store(chunks):
+    embeddings = OCIGenAIEmbeddings(
+        model_id=embeddingModel, ...)
+    if config.DB_TYPE == "oracle":
+        connection = oracledb.connect(user=config.ORACLE_USERNAME,
+                                      password=config.ORACLE_PASSWORD,
+                                      dsn=config.ORACLE_DSN)
+        knowledge_base = OracleVS.from_texts(...)
+    else:
+        knowledge_base = Qdrant.from_texts(...)
+    return knowledge_base
+```
+
+### Prompting the LLM
+
+The `prompting_llm` function prompts the LLM with the user's question and retrieves the response.
+
+```python
+def prompting_llm(user_question, knowledge_base, chain, k_value):
+    with st.spinner(text="Prompting LLM..."):
+        doc_to_prompt = knowledge_base.similarity_search(user_question, k=k_value)
+        response = chain.invoke({"input_documents": doc_to_prompt, "question": user_question}, return_only_outputs=True).get("output_text")
+        return response
+```
+
+### Chunk Search
+
+The `chunk_search` function retrieves the chunks most similar to the user's question and displays them.
+
+```python
+def chunk_search(user_question, knowledge_base, k_value):
+    with st.spinner(text="Searching chunks..."):
+        doc_to_prompt = knowledge_base.similarity_search(user_question, k=k_value)
+        docs_stats = knowledge_base.similarity_search_with_score(user_question, k=k_value)
+        result = ' \n '+datetime.datetime.now().astimezone().isoformat()
+        result = result + " \nPrompt: "+user_question+ " \n"
+        for x in range(len(docs_stats)):
+            try:
+                result = result + ' \n'+str(x)+' -------------------'
+                content, score = docs_stats[x]
+                result = result + " \nContent: "+content.page_content
+                result = result + " \n \nScore: "+str(score)+" \n"
+            except:
+                pass
+        return result
+```
+
+### Main Function
+
+The `main` function sets up the Streamlit app, handles user inputs, and displays responses.
+
+```python
+def main():
+    llm = ChatOCIGenAI(...)
+    chain = load_qa_chain(llm, chain_type="stuff")
+    if hasattr(chain.llm_chain.prompt, 'messages'):
+        ...
+    st.header("Ask Youtube using Oracle GEN AI")
+    youtubeid = st.text_input('Enter the desired Youtube video ID or URL here.')
+    with st.expander("Advanced options"):
+        k_value = st.slider(...)
+        chunk_size = st.slider(...)
+        chunk_overlap = st.slider(...)
+        chunk_display = st.checkbox("Display chunk results")
+    if youtubeid:
+        knowledge_base = fetching_transcript(youtubeid, chunk_size, chunk_overlap)
+        user_question = st.text_input("Ask a question about the Youtube video:")
+        promptoption = st.selectbox(
+            '...',
+            ("Summarize the transcript", "Summarize the transcript in bullet points"), index=None,
+            placeholder="Select a prompt template..."
+        )
+        ...
+        st.write(response)
+        if chunk_display:
+            ...
+
+if __name__ == "__main__":
+    main()
+```
+
+## Explanation
+
+This section provides a high-level overview of the key functions and their purposes within the application. Each function handles a specific task, such as extracting YouTube video IDs, processing transcripts, generating embeddings, prompting the LLM, and performing chunk searches. This modular design keeps each component easy to understand and modify, allowing for further customization and enhancement.
+
+## Conclusion
+
+This application demonstrates the capabilities of Oracle Gen AI services by analyzing YouTube video transcripts and providing insightful responses to user queries. By following the provided setup instructions, users can effortlessly run the application and benefit from its interactive and informative features.
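For example, the transcript-fetching step can be exercised on its own before wiring it into the app (a small sketch; the video ID below is only a placeholder):

```python
from youtube_transcript_api import YouTubeTranscriptApi
from youtube_transcript_api.formatters import TextFormatter

video_id = "dQw4w9WgXcQ"  # placeholder -- replace with the video you want to analyze
transcript = YouTubeTranscriptApi.get_transcript(video_id, languages=["en"])
text = TextFormatter().format_transcript(transcript)
print(text[:500])  # preview the first 500 characters before chunking and embedding
```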
The modular code structure encourages customization, making it a versatile tool for analyzing video content. + + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea , AI and App Integration Specialist Leader + +**Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file diff --git a/genai-smart-apps/workshops/sandbox/index.html b/genai-smart-apps/workshops/sandbox/index.html new file mode 100644 index 000000000..aaac634be --- /dev/null +++ b/genai-smart-apps/workshops/sandbox/index.html @@ -0,0 +1,62 @@ + + + + + + + + + Oracle LiveLabs + + + + + + + + + + + + +
+
+
+
+
+
+
+
+ + + + + diff --git a/genai-smart-apps/workshops/sandbox/manifest.json b/genai-smart-apps/workshops/sandbox/manifest.json new file mode 100644 index 000000000..0c3b9cc72 --- /dev/null +++ b/genai-smart-apps/workshops/sandbox/manifest.json @@ -0,0 +1,46 @@ +{ + "workshoptitle": "How Generative AI helps you build smarter applications", + "help": "livelabs-help-oci_us@oracle.com", + "tutorials": [ + { + "title": "Overview and Highlights", + "filename": "../../1-overview-and-highlights/overview-and-highlights.md" + }, + { + "title": "Get Started", + "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md" + }, + { + "title": "LAB 1: Setup", + "filename": "../../2-setup/setup.md" + }, + { + "title": "LAB 2: Gen Ai Chat App", + "filename": "../../3-oci-genai-chat-application/oci-genai-chat-application.md" + }, + { + "title": "LAB 3: Chat with PDF documents", + "filename": "../../4-chat-with-pdf/chat-with-pdf.md" + }, + { + "title": "LAB 4: Compare PDF files", + "filename": "../../5-pdf-comparison/pdf-comparison.md" + }, + { + "title": "LAB 5: Document Summarization application", + "filename": "../../6-document-summarization/document-summarization.md" + }, + { + "title": "LAB 6: Chat and summarize web pages", + "filename": "../../7-web-content-chat-summarize/web-content-chat-summarize.md" + }, + { + "title": "LAB 7: Chat and summarize youTube Videos", + "filename": "../../8-chat-with-youtube/chat-with-youtube.md" + }, + { + "title": "Need Help?", + "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md" + } + ] +} \ No newline at end of file diff --git a/genai-smart-apps/workshops/tenancy/index.html b/genai-smart-apps/workshops/tenancy/index.html new file mode 100644 index 000000000..aaac634be --- /dev/null +++ b/genai-smart-apps/workshops/tenancy/index.html @@ -0,0 +1,62 @@ + + + + + + + + + Oracle LiveLabs + + + + + + + + + + + + +
+
+
+
+
+
+
+
+ + + + + diff --git a/genai-smart-apps/workshops/tenancy/manifest.json b/genai-smart-apps/workshops/tenancy/manifest.json new file mode 100644 index 000000000..0c3b9cc72 --- /dev/null +++ b/genai-smart-apps/workshops/tenancy/manifest.json @@ -0,0 +1,46 @@ +{ + "workshoptitle": "How Generative AI helps you build smarter applications", + "help": "livelabs-help-oci_us@oracle.com", + "tutorials": [ + { + "title": "Overview and Highlights", + "filename": "../../1-overview-and-highlights/overview-and-highlights.md" + }, + { + "title": "Get Started", + "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md" + }, + { + "title": "LAB 1: Setup", + "filename": "../../2-setup/setup.md" + }, + { + "title": "LAB 2: Gen Ai Chat App", + "filename": "../../3-oci-genai-chat-application/oci-genai-chat-application.md" + }, + { + "title": "LAB 3: Chat with PDF documents", + "filename": "../../4-chat-with-pdf/chat-with-pdf.md" + }, + { + "title": "LAB 4: Compare PDF files", + "filename": "../../5-pdf-comparison/pdf-comparison.md" + }, + { + "title": "LAB 5: Document Summarization application", + "filename": "../../6-document-summarization/document-summarization.md" + }, + { + "title": "LAB 6: Chat and summarize web pages", + "filename": "../../7-web-content-chat-summarize/web-content-chat-summarize.md" + }, + { + "title": "LAB 7: Chat and summarize youTube Videos", + "filename": "../../8-chat-with-youtube/chat-with-youtube.md" + }, + { + "title": "Need Help?", + "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md" + } + ] +} \ No newline at end of file diff --git a/multichannel-oci-genaiagents/1-overview-and-highlights/overview-and-highlights.md b/multichannel-oci-genaiagents/1-overview-and-highlights/overview-and-highlights.md new file mode 100644 index 000000000..734014a40 --- /dev/null +++ b/multichannel-oci-genaiagents/1-overview-and-highlights/overview-and-highlights.md @@ -0,0 +1,36 @@ +# Introduction + +Companies around the world rely on standardised communication platforms for running their business - inside and outside of the organization. From raising expenses and IT support tickets through Slack, to selling insurance policies or transportation tickets through WhatsApp, all around the world, AI is coming closer to users by way of familiar communication channels. +Workshop Description: Oracle Cloud Infrastructure (OCI) Generative AI Agents combines the power of large language models (LLMs) and retrieval-augmented generation (RAG) with your enterprise data, letting users query diverse enterprise knowledge bases. Coming soon, the service will provide users with up-to-date information through a natural language interface and the ability to act directly on it. See how Oracle supports the quick implementation of secure enterprise RAG, using your organization’s data, with seamless access through various channels, like Slack, Teams, WhatsApp and others. + + +## About this Workshop + +In this workshop you will learn how to harness the power of conversational Generative AI to unlock the information hidden in your documents through different channels like Slack, Ms teams etc + +Estimated Workshop Time: 1 hour 30 minutes + +### Objectives + +In this workshop, you will learn how to: + +* Setup the required policies and objects to allow the service to function. +* Prepare and upload dataset files for your Agent to reason over. +* Create a Knowledge Base which will ingest and index your data. 
* Create an Agent which will reason over your indexed data.
* Have a conversation with your data - using the Agent's chat interface, ask questions about your data.
* Connect ODA with the OCI Generative AI Agent.
* Create a channel and connect ODA with Slack.

## Learn More

* [OCI Generative AI Agents service information](https://www.oracle.com/artificial-intelligence/generative-ai/agents/)
* [OCI Generative AI Agents service documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai-agents/home.htm)
* [ODA Channels](https://docs.oracle.com/en/cloud/paas/digital-assistant/use-chatbot/channels-part-topic.html)
* [Connecting ODA with Slack](https://docs.oracle.com/en/cloud/paas/digital-assistant/use-chatbot/slack.html#GUID-311B18A9-B101-4107-95AD-D7B9E1539B25)

## Acknowledgements

* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea, AI and App Integration Specialist Leader

* **Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024
diff --git a/multichannel-oci-genaiagents/2-setup/images/agents-service-navigation.png b/multichannel-oci-genaiagents/2-setup/images/agents-service-navigation.png new file mode 100644 index 000000000..b2d98e458 Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/agents-service-navigation.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/create-dynamic-group.png b/multichannel-oci-genaiagents/2-setup/images/create-dynamic-group.png new file mode 100644 index 000000000..b7f0143f8 Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/create-dynamic-group.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/create-new-policy-navigation.png b/multichannel-oci-genaiagents/2-setup/images/create-new-policy-navigation.png new file mode 100644 index 000000000..16483040c Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/create-new-policy-navigation.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/create-new-policy.png b/multichannel-oci-genaiagents/2-setup/images/create-new-policy.png new file mode 100644 index 000000000..1b526af40 Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/create-new-policy.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/default-domain-navigation.png b/multichannel-oci-genaiagents/2-setup/images/default-domain-navigation.png new file mode 100644 index 000000000..3ef76e6e6 Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/default-domain-navigation.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/domains-navigation.png b/multichannel-oci-genaiagents/2-setup/images/domains-navigation.png new file mode 100644 index 000000000..3cfb58894 Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/domains-navigation.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/dynamic-group.png b/multichannel-oci-genaiagents/2-setup/images/dynamic-group.png new file mode 100644 index 000000000..52b1fa845 Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/dynamic-group.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/infrastructure-regions.png b/multichannel-oci-genaiagents/2-setup/images/infrastructure-regions.png new file mode 100644 index 000000000..9e47cb0bb Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/infrastructure-regions.png differ
diff --git
a/multichannel-oci-genaiagents/2-setup/images/policies-navigation.png b/multichannel-oci-genaiagents/2-setup/images/policies-navigation.png new file mode 100644 index 000000000..623d48ade Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/policies-navigation.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/regions-list.png b/multichannel-oci-genaiagents/2-setup/images/regions-list.png new file mode 100644 index 000000000..d834b3ae0 Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/regions-list.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/select-chicago-region.png b/multichannel-oci-genaiagents/2-setup/images/select-chicago-region.png new file mode 100644 index 000000000..47107f64e Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/select-chicago-region.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/images/subscribe-new-region-dialog.png b/multichannel-oci-genaiagents/2-setup/images/subscribe-new-region-dialog.png new file mode 100644 index 000000000..9dd6fab63 Binary files /dev/null and b/multichannel-oci-genaiagents/2-setup/images/subscribe-new-region-dialog.png differ
diff --git a/multichannel-oci-genaiagents/2-setup/setup.md b/multichannel-oci-genaiagents/2-setup/setup.md new file mode 100644 index 000000000..e8684f956 --- /dev/null +++ b/multichannel-oci-genaiagents/2-setup/setup.md @@ -0,0 +1,158 @@
# Setup

## Introduction

In this lab, we are going to perform the actions required to enable the service on our tenancy.
After that, we will create the objects required for subsequent labs.

Estimated Time: 10 minutes

### Objectives

In this lab, you will:

* Make sure that your tenancy is subscribed to the Chicago region.
* Create the required permissions to use the service in your tenancy.

### Prerequisites

This lab assumes you have:

* An Oracle Cloud account

## Task 1: Ensure Chicago region subscription

The OCI Generative AI Agents service is currently only available in the Chicago region.
If your tenancy is already subscribed to the Chicago region, please skip to the next task.

1. On the top right, click the Regions drop down menu.

    ![Regions list](./images/regions-list.png)

1. Review the list of regions your tenancy is subscribed to. If you find the **US Midwest (Chicago)** region in the list, please skip to the next task.

1. Click the Manage Regions link at the bottom of the list.

1. In the **Infrastructure Regions** list, locate the **US Midwest (Chicago)** region and click the **Subscribe** button to its right.

    > **Note:** When you subscribe to a region, you cannot unsubscribe from it.

    ![Infrastructure regions list](./images/infrastructure-regions.png)

1. Click the **Subscribe** button at the bottom of the **Subscribe to New Region** dialog.

    ![Subscribe to new region dialog](./images/subscribe-new-region-dialog.png)

The operation might take a few minutes to complete. When it completes, the **US Midwest (Chicago)** region will appear in the **Regions** drop down menu on the main screen.

## Task 2: Create access policies

In this task, we are going to create policies which will grant us access to the OCI Generative AI Agents service as well as the Object Storage service.
We will use Object Storage to store the dataset required for this workshop.
First, we are going to create a dynamic group, which will allow us to grant the OCI Generative AI Agents service access to the dataset uploaded to Object Storage.

1. Click the navigation menu on the top left.

1. Click **Identity & Security**.

1. Click **Domains**.

    ![Domains navigation](./images/domains-navigation.png)

1. Under the **List scope**, make sure that the **root** compartment is selected.

1. Click the **Default** domain from the **Domains** table.

    ![Default domain navigation](./images/default-domain-navigation.png)

1. On the left, click **Dynamic Groups**.

1. Click the **Create dynamic group** button at the top of the **Dynamic groups** table.

    ![Dynamic group navigation](./images/dynamic-group.png)

1. Name the dynamic group (example: oci-generative-ai-agents-service).

1. Provide an optional description (example: This group represents the OCI Generative AI Agents service).

1. Select the **Match all rules defined below** option in the **Matching rules** section.

1. Enter the following expression in the **Rule 1** textbox:

    ```text

    all {resource.type='genaiagent'}

    ```

    ![Create dynamic group](./images/create-dynamic-group.png)

Next, we will create the access policies:

1. Click **Identity & Security**.

1. Click **Policies**.

    ![Policies navigation](./images/policies-navigation.png)

1. On the left under **List scope**, select the root compartment. The root compartment should appear first in the list, have the same name as the tenancy itself, and have the text **(root)** next to its name.

1. Click the **Create Policy** button on the top left of the **Policies** table.

    ![Create new policy navigation](./images/create-new-policy-navigation.png)

1. Provide a name for the policy (example: oci-generative-ai-agents-service).

1. Provide a description (example: OCI Generative AI Agents CloudWorld 2024 Hands-On-Lab Policy).

1. Make sure that the root compartment is selected.

1. Enable the **Show manual editor** option.

1. In the policy editor, enter the following policy statements:

    ```text

    allow group <your-group-name> to manage genai-agent-family in tenancy
    allow group <your-group-name> to manage object-family in tenancy
    allow dynamic-group <your-dynamic-group-name> to manage all-resources in tenancy

    ```

    Make sure to replace `<your-group-name>` with the user group your user is associated with (for example: `Administrators`).
    Also, please replace `<your-dynamic-group-name>` with the name you've provided for the dynamic group created above.

    ![Create new policy](./images/create-new-policy.png)

## Task 3: Verify access to the service

1. On the top right, click the Regions drop down menu.

1. Click the **US Midwest (Chicago)** region.

1. Verify that the region name appears in bold to indicate it is the active region.

    ![Select the Chicago region](./images/select-chicago-region.png)

1. Click the navigation menu on the top left.

1. Click **Analytics & AI**.

1. Click **Generative AI Agents** under **AI Services**.

    If the **Generative AI Agents** service does not appear under **AI Services**, please review previous tasks.
+ + ![Agents service navigation](./images/agents-service-navigation.png) + +## Learn More + +* [Region subscription](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingregions.htm#ariaid-title7) +* [Managing Dynamic Groups](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingdynamicgroups.htm) + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea +, AI and App Integration Specialist Leader + +* **Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/files/oci-generative-ai-agents-cw24-hol-dataset.zip b/multichannel-oci-genaiagents/3-prepare-the-dataset/files/oci-generative-ai-agents-cw24-hol-dataset.zip new file mode 100644 index 000000000..f01b793c0 Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/files/oci-generative-ai-agents-cw24-hol-dataset.zip differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/bucket-navigation.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/bucket-navigation.png new file mode 100644 index 000000000..28b955710 Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/bucket-navigation.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/buckets-list.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/buckets-list.png new file mode 100644 index 000000000..078a37457 Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/buckets-list.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/buckets-navigation.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/buckets-navigation.png new file mode 100644 index 000000000..5e9bc5dcc Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/buckets-navigation.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/create-bucket.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/create-bucket.png new file mode 100644 index 000000000..ab9a3973e Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/create-bucket.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/downloaded-dataset-mac.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/downloaded-dataset-mac.png new file mode 100644 index 000000000..10b8d216b Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/downloaded-dataset-mac.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/downloaded-dataset-windows.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/downloaded-dataset-windows.png new file mode 100644 index 000000000..2bd9744ab Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/downloaded-dataset-windows.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extract-dataset-windows.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extract-dataset-windows.png new file mode 100644 index 000000000..e79cf67d6 Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extract-dataset-windows.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extracted-dataset-mac.png 
b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extracted-dataset-mac.png new file mode 100644 index 000000000..70a10b4dd Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extracted-dataset-mac.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extracted-dataset-windows.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extracted-dataset-windows.png new file mode 100644 index 000000000..87bce4221 Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/extracted-dataset-windows.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/objects-list.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/objects-list.png new file mode 100644 index 000000000..cc72a7585 Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/objects-list.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/right-click-dataset-windows.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/right-click-dataset-windows.png new file mode 100644 index 000000000..5522551c3 Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/right-click-dataset-windows.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/select-all-files.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/select-all-files.png new file mode 100644 index 000000000..559a2361d Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/select-all-files.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/select-files-navigation.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/select-files-navigation.png new file mode 100644 index 000000000..ee2545ed7 Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/select-files-navigation.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/upload-done.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/upload-done.png new file mode 100644 index 000000000..c8d1a497b Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/upload-done.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/images/upload.png b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/upload.png new file mode 100644 index 000000000..200225d3e Binary files /dev/null and b/multichannel-oci-genaiagents/3-prepare-the-dataset/images/upload.png differ diff --git a/multichannel-oci-genaiagents/3-prepare-the-dataset/prepare-the-dataset.md b/multichannel-oci-genaiagents/3-prepare-the-dataset/prepare-the-dataset.md new file mode 100644 index 000000000..6048499e8 --- /dev/null +++ b/multichannel-oci-genaiagents/3-prepare-the-dataset/prepare-the-dataset.md @@ -0,0 +1,81 @@ +# Prepare the dataset + +## Introduction + +This lab will walk you through how to upload data to be ingested and indexed by the OCI Generative AI Agents service. +The dataset is the fuel for the service. After the data has been indexed, you will be able to ask complex questions about it and have the service answer those questions like a human would. +In this lab, you will be using a dataset we have created for you which contains parts of the OCI Generative AI service documentation. This will allow the Agent to answer user questions about the service. 
Estimated Time: 10 minutes

### Objectives

In this lab, you will:

* Create a storage bucket to store the dataset.
* Upload the dataset to the storage bucket.

### Prerequisites

This lab assumes you have:

* An Oracle Cloud account
* All previous labs successfully completed

## Task 1: Create a storage bucket & upload the dataset

1. On your OCI tenancy console, click the **Navigation Menu**.

1. Click **Storage**.

1. Click **Buckets** on the right, under **Object Storage & Archive Storage**.

    ![Buckets navigation](./images/buckets-navigation.png)

1. Under **List scope**, make sure that the **root** compartment is selected.

1. Click the **Create Bucket** button on the top of the **Buckets** table.

    ![Buckets navigation](./images/buckets-list.png)

1. Provide a name for the bucket (example: oci-generative-ai-agents-service-cw24-hol-dataset).

1. For the purpose of this workshop, we are going to accept the default values for the rest of the form.

    Click the **Create** button on the bottom of the panel.

    ![Create bucket](./images/create-bucket.png)

1. Click the new bucket's name in the **Buckets** table.

    ![Select bucket](./images/bucket-navigation.png)

1. Under the **Objects** section of the page, click the **Upload** button.

1. Click the **select files** link in the **Choose Files from your Computer** section.

    ![Select files navigation](./images/select-files-navigation.png)

1. In your `File Explorer` or `Finder`, navigate to the folder containing all of the `.txt` files extracted in the previous task.

1. Select all of the files from the folder and click `Open`.

    ![Select all files](./images/select-all-files.png)

1. Click the **Upload** button at the bottom of the panel.

    ![Upload files](./images/upload.png)

1. Click the **Close** button at the bottom of the panel.

    ![Upload done](./images/upload-done.png)

If everything went to plan, you should see all of the files listed under the **Objects** section of the page.

![Objects list](./images/objects-list.png)

## Acknowledgements

* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea, AI and App Integration Specialist Leader

* **Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/create-a-knowledge-base.md b/multichannel-oci-genaiagents/4-create-a-knowledge-base/create-a-knowledge-base.md new file mode 100644 index 000000000..c57d00028 --- /dev/null +++ b/multichannel-oci-genaiagents/4-create-a-knowledge-base/create-a-knowledge-base.md @@ -0,0 +1,106 @@
# Create a knowledge base

## Introduction

In this lab, we are going to create a knowledge base that will consist of the text files we've uploaded to the storage bucket.
A Knowledge Base maintains information about the Data Source (where the data comes from, for example, text files in a storage bucket) as well as metadata for accessing that data. In certain cases, it also manages indexing and vector storage for the data.
Creating a Knowledge Base lets the OCI Generative AI Agents service know where our data is stored and what format it is stored in. Using this information, the service will be able to ingest the data, understand it and index it for fast retrieval later.
A single knowledge base can be used for multiple Agents.
+ +Estimated Time: 10 minutes + +### Objectives + +In this lab, you will: + +* Create a knowledge base. +* Monitor the ingestion job. + +### Prerequisites + +This lab assumes you have: + +* An Oracle Cloud account +* All previous labs successfully completed + +## Task 1: Create a knowledge base + +1. From the OCI Generative AI Agents service overview page, click the **Knowledge Bases** link on the left. + +1. Make sure that the root compartment is selected in the **Compartment** list under the **List scope** section on the left. + +1. Click the **Create knowledge base** button at the top of the **Knowledge bases** table. + + ![Agents service navigation](./images/knowledge-base-navigation.png) + +1. Provide a name for the Knowledge base (for example: oci-generative-ai-agents-cw24-hol-kb) + +1. Make sure that the root compartment is selected in the **Compartment** list. + +1. Make sure that the **Object storage** option is selected in the **Select data store** list. + +1. Click the **Specify data source** button at the top of the **Data sources** table. + +1. In the **Specify data source** pane, provide a name for the data source (for example: oci-generative-ai-agents-cw24-hol-ds) + +1. In the **Data bucket** section, make sure that the root compartment is selected. If not, click the **Change compartment** link and select the root compartment. + +1. Select the storage bucket into which you've uploaded the dataset text files in the previous lab. + +1. Select the **Select all in bucket** option from the **Object prefixes** list. + +1. Click the **Create** button at the bottom of the pane. + + ![Create knowledge base and data source](./images/create-data-source.png) + +1. Make sure that the **Automatically start ingestion job for above data sources** option is checked. + +1. Click the **Create** button at the bottom of the page. + + ![Knowledge base](./images/create-knowledge-base.png) + +If everything went to plan, your Knowledge Base will be created. This can take a few minutes. + +Please wait until the **Lifecycle state** shows the **Active** state before moving on to the next lab. + + ![Knowledge base created](./images/knowledge-base-created.png) + + ![Knowledge base active](./images/knowledge-base-active.png) + +## Task 2: Monitor the ingestion job + +When a Knowledge Base is created, an Ingestion Job is automatically created in order to process the information contained in the Data Source. +This job might take a while to complete and the Knowledge Base will not be set to Active until it is done. +Here are the steps to monitor the Ingestion Job's progress. + +1. From the OCI Generative AI Agents service overview page, click the **Knowledge Bases** link on the left. + +1. Make sure that the root compartment is selected in the **Compartment** list under the **List scope** section on the left. + +1. Click the Knowledge Base we've just created in the previous task in the **Knowledge bases** table. + + ![Knowledge base navigation](./images/view-knowledge-base-navigation.png) + +1. In the Knowledge Base details page, click the Data Source we've created in the previous task in the **Data sources** table. + + ![Data source navigation](./images/data-source-navigation.png) + +1. In the Data Source details page, click the Ingestion Job which was automatically created (should only be one) in the **Ingestion jobs** table. + + ![Ingestion job navigation](./images/ingestion-job-navigation.png) + +1. 
In the Ingestion job details page, you'll be able to see the job's progress in the **Work requests** table by observing the **State** and **Percent complete** columns. Initially, it will look like this:

    ![Ingestion job navigation](./images/ingestion-job-details.png)

    When the Ingestion Job is complete, it should look like this:

    ![Ingestion job navigation](./images/ingestion-job-completed.png)

    As you can see, the details page displays information such as: **Number of ingested files**, **Number of failed files**, **Job duration** and more. When the job is complete, the **Percent complete** column should show 100% and the **State** column should indicate **Succeeded**. At this point, you can continue to the next lab.

## Acknowledgements

* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea, AI and App Integration Specialist Leader

* **Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/create-data-source.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/create-data-source.png new file mode 100644 index 000000000..54c652232 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/create-data-source.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/create-knowledge-base.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/create-knowledge-base.png new file mode 100644 index 000000000..ac2bc2df4 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/create-knowledge-base.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/data-source-navigation.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/data-source-navigation.png new file mode 100644 index 000000000..b716d335f Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/data-source-navigation.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-completed.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-completed.png new file mode 100644 index 000000000..9ccf82cd8 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-completed.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-details.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-details.png new file mode 100644 index 000000000..5a005a889 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-details.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-navigation.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-navigation.png new file mode 100644 index 000000000..6334a4001 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/ingestion-job-navigation.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-active.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-active.png new file mode 100644 index 000000000..d01b3a818 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-active.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-created.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-created.png new file mode 100644 index 000000000..98270ebb8 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-created.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-navigation.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-navigation.png new file mode 100644 index 000000000..8a12d6c42 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/knowledge-base-navigation.png differ
diff --git a/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/view-knowledge-base-navigation.png b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/view-knowledge-base-navigation.png new file mode 100644 index 000000000..b494316e8 Binary files /dev/null and b/multichannel-oci-genaiagents/4-create-a-knowledge-base/images/view-knowledge-base-navigation.png differ
diff --git a/multichannel-oci-genaiagents/5-create-an-agent/create-an-agent.md b/multichannel-oci-genaiagents/5-create-an-agent/create-an-agent.md new file mode 100644 index 000000000..65c80ef20 --- /dev/null +++ b/multichannel-oci-genaiagents/5-create-an-agent/create-an-agent.md @@ -0,0 +1,79 @@
# Create an Agent

## Introduction

In the OCI Generative AI Agents service, an Agent holds a set of configuration that specifies its knowledge bases (there can be more than one), a greeting preamble (more on this later), and an endpoint (a network access point which allows for communication with the Agent).
After an Agent is created, you will be able to chat with it and also make API calls against its endpoint.

Estimated Time: 10 minutes

### Objectives

In this lab, you will:

* Create an Agent.
* Observe Agent metadata.

### Prerequisites

This lab assumes you have:

* An Oracle Cloud account
* All previous labs successfully completed

## Task 1: Create an Agent

1. From the OCI Generative AI Agents service overview page, click the **Agents** link on the left.

1. Make sure that the root compartment is selected in the **Compartment** list under the **List scope** section on the left.

1. Click the **Create agent** button at the top of the **Agents** table.

    ![Create Agent navigation](./images/create-agent-navigation.png)

1. Provide a name for the Agent (for example: oci-generative-ai-agents-cw24-hol-agent).

1. Make sure that the root compartment is selected in the **Compartment** list.

1. Optionally, provide a **Welcome message** for the Agent to display at the start of a new conversation (also called the `Preamble`, for example: Hello, I'm the OCI Generative AI documentation helper! How can I help you today?).

1. Under the **Add knowledge bases** section, make sure that the root compartment is selected in the **Compartments** list.

1. Check the box next to the knowledge base we have created in the previous lab to let the Agent know it should interact with the data specified in the knowledge base.

1. Make sure that the **Automatically create an endpoint for this agent** option is checked.

1. Click the **Create** button at the bottom of the page.

    ![Create Agent](./images/create-agent.png)

If everything went to plan, your Agent will be created. This can take a few minutes.
Please wait until the **Lifecycle state** shows the **Active** state before moving on to the next lab.

    ![Agent being created](./images/agent-creating.png)

    ![Agent created](./images/agent-created.png)

## Task 2: Observe Agent metadata

1. After the Agent has been created and we've confirmed that its **Lifecycle state** is **Active**, click the Agent name in the **Agents** table.

    ![View Agent navigation](./images/view-agent-navigation.png)

1. Notable information on the Agent details page:

    1. Agent **name**.
    2. Agent **OCID**.
    3. **Knowledge Bases** associated with the Agent.
    4. **Endpoints** which can be used to access the Agent programmatically (a default one was created when we created the Agent).

    > In addition, you can **Launch a chat** session with the Agent as well as **Edit**, **Move** and **Delete** the Agent.

    ![Agent details](./images/agent-details.png)

## Acknowledgements

* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea, AI and App Integration Specialist Leader

* **Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 \ No newline at end of file
diff --git a/multichannel-oci-genaiagents/5-create-an-agent/images/agent-created.png b/multichannel-oci-genaiagents/5-create-an-agent/images/agent-created.png new file mode 100644 index 000000000..dae75991d Binary files /dev/null and b/multichannel-oci-genaiagents/5-create-an-agent/images/agent-created.png differ
diff --git a/multichannel-oci-genaiagents/5-create-an-agent/images/agent-creating.png b/multichannel-oci-genaiagents/5-create-an-agent/images/agent-creating.png new file mode 100644 index 000000000..8282bc790 Binary files /dev/null and b/multichannel-oci-genaiagents/5-create-an-agent/images/agent-creating.png differ
diff --git a/multichannel-oci-genaiagents/5-create-an-agent/images/agent-details.png b/multichannel-oci-genaiagents/5-create-an-agent/images/agent-details.png new file mode 100644 index 000000000..4a05944f8 Binary files /dev/null and b/multichannel-oci-genaiagents/5-create-an-agent/images/agent-details.png differ
diff --git a/multichannel-oci-genaiagents/5-create-an-agent/images/create-agent-navigation.png b/multichannel-oci-genaiagents/5-create-an-agent/images/create-agent-navigation.png new file mode 100644 index 000000000..1cb9f3578 Binary files /dev/null and b/multichannel-oci-genaiagents/5-create-an-agent/images/create-agent-navigation.png differ
diff --git a/multichannel-oci-genaiagents/5-create-an-agent/images/create-agent.png b/multichannel-oci-genaiagents/5-create-an-agent/images/create-agent.png new file mode 100644 index 000000000..66f2fd4d7 Binary files /dev/null and b/multichannel-oci-genaiagents/5-create-an-agent/images/create-agent.png differ
diff --git a/multichannel-oci-genaiagents/5-create-an-agent/images/first-answer-citations.png b/multichannel-oci-genaiagents/5-create-an-agent/images/first-answer-citations.png new file mode 100644 index 000000000..bb6d93846 Binary files /dev/null and b/multichannel-oci-genaiagents/5-create-an-agent/images/first-answer-citations.png differ
diff --git a/multichannel-oci-genaiagents/5-create-an-agent/images/view-agent-navigation.png b/multichannel-oci-genaiagents/5-create-an-agent/images/view-agent-navigation.png new file mode 100644 index 000000000..d2d270964 Binary files /dev/null and b/multichannel-oci-genaiagents/5-create-an-agent/images/view-agent-navigation.png differ
diff --git
a/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/have-a-conversation-with-your-data.md b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/have-a-conversation-with-your-data.md new file mode 100644 index 000000000..b2a373407 --- /dev/null +++ b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/have-a-conversation-with-your-data.md @@ -0,0 +1,67 @@
# Have a conversation with your data

## Introduction

Our provisioning work is now complete. It is now time to enjoy the fruits of our labor.
In the previous labs we have created the proper permissions, uploaded our dataset to a storage bucket, created a knowledge base that draws its data from the text files in our dataset and lastly, we've created an Agent which will interact with the data defined in the knowledge base.
As part of the Agent creation process, the OCI Generative AI Agents service created an Endpoint. We will now use this endpoint to communicate with the Agent.

Estimated Time: 10 minutes

### Objectives

In this lab, you will:

* Use the Agent to answer questions about our data.

### Prerequisites

This lab assumes you have:

* An Oracle Cloud account
* All previous labs successfully completed

## Task 1: Have a conversation with your data

1. Click the **Chat** link on the left.

1. Make sure that the root compartment is selected in the **Agent's compartment** list on the top of the page.

1. Select the Agent we have created in the previous lab in the **Agent** list.

1. Select the **Agent endpoint** (there should be only one, which was automatically created for us when the Agent was created).

1. You should see the greeting message we've entered during the Agent creation displayed in the chat section.

    ![Start chatting](./images/start-chat.png)

1. At this point, we can start asking our Agent questions about the data we provided and get intelligent answers as well as references to the original data that the answers are based on.

Let's type the following question into the chat: "How can I create a fine-tuned model?" and click the **Submit** button.

    ![Ask first question](./images/ask-first-question.png)

1. The Agent will scan the data for relevant information and compose a response similar to the following:

    ![First answer](./images/first-answer.png)

1. The Agent can also provide a direct reference to the data in our dataset where the answer was extracted from.

    Scroll down to the end of the answer text and click **View citations** to expand the citations section. This section will provide one or more references to our text files, each including a link to the file and an excerpt from the file where the answer was extracted from.

    Providing citations makes sure that the Agent bases its responses on our data and decreases the chances for hallucinations or made-up answers.

    ![First answer citations](./images/first-answer-citations.png)

    In addition to citations, you can also observe the log section on the right of the screen to see which search query the Agent is using, which data files were found to have relevant answers, and the text generated for the response.

    ![Logs](./images/logs.png)

Feel free to experiment and ask the Agent additional questions related to your uploaded document.
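If you would also like to call the Agent programmatically rather than through the console chat, the next labs configure Oracle Digital Assistant against two REST endpoints (create session and execute). Purely as an illustration, here is a minimal, hedged Python sketch that calls those same endpoints directly using the OCI Python SDK request signer. The URLs and request bodies are the ones used in the ODA lab; the agent endpoint OCID is a placeholder you must replace, and the response field names (such as `id` for the session) are assumptions to verify against your tenancy.

```python
# Minimal sketch: call the Agent endpoint over REST (verify details against your own tenancy).
import oci
import requests

# Assumption: a working ~/.oci/config profile, as configured in the earlier CLI setup.
config = oci.config.from_file()
signer = oci.signer.Signer(
    tenancy=config["tenancy"],
    user=config["user"],
    fingerprint=config["fingerprint"],
    private_key_file_location=config["key_file"],
)

# Placeholder: replace with the endpoint OCID shown on your Agent details page.
agent_endpoint_id = "ocid1.genaiagentendpoint.oc1.us-chicago-1.example"
base_url = "https://agent-runtime.generativeai.us-chicago-1.oci.oraclecloud.com/20240531"

# 1. Create a chat session (the same call the ODA Create Session REST service makes).
session_resp = requests.post(
    f"{base_url}/agentEndpoints/{agent_endpoint_id}/sessions",
    json={"idleTimeoutInSeconds": 3600},
    auth=signer,
)
session_resp.raise_for_status()
session_id = session_resp.json()["id"]  # Assumption: the session OCID is returned as "id".

# 2. Ask a question within that session (the same call the ODA Execute REST service makes).
execute_resp = requests.post(
    f"{base_url}/agentEndpoints/{agent_endpoint_id}/sessions/{session_id}/actions/execute",
    json={"userMessage": "How can I create a fine-tuned model?", "shouldStream": False},
    auth=signer,
)
execute_resp.raise_for_status()
print(execute_resp.json())  # Inspect the raw response for the answer text and citations.
```

This mirrors what the REST services you will create in the ODA lab do on your behalf, so it can also be a handy way to sanity-check the endpoint before wiring up Digital Assistant.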
+ + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea, AI and App Integration Specialist Leader + +* **Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 diff --git a/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/ask-first-question.png b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/ask-first-question.png new file mode 100644 index 000000000..7ef4e2a9c Binary files /dev/null and b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/ask-first-question.png differ diff --git a/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/first-answer-citations.png b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/first-answer-citations.png new file mode 100644 index 000000000..bb6d93846 Binary files /dev/null and b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/first-answer-citations.png differ diff --git a/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/first-answer.png b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/first-answer.png new file mode 100644 index 000000000..90912fd42 Binary files /dev/null and b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/first-answer.png differ diff --git a/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/logs.png b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/logs.png new file mode 100644 index 000000000..ee6ee711b Binary files /dev/null and b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/logs.png differ diff --git a/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/start-chat.png b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/start-chat.png new file mode 100644 index 000000000..e1683d464 Binary files /dev/null and b/multichannel-oci-genaiagents/6-have-a-conversation-with-your-data/images/start-chat.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/connect-oda-with-agents.md b/multichannel-oci-genaiagents/7-connect-oda-with-agents/connect-oda-with-agents.md new file mode 100644 index 000000000..18b7c2090 --- /dev/null +++ b/multichannel-oci-genaiagents/7-connect-oda-with-agents/connect-oda-with-agents.md @@ -0,0 +1,154 @@ +# Exposing Gen AI Agents via Oracle Digital Assistant + +## Introduction + +In this lab, we will learn how to expose Gen AI Agents via Oracle Digital Assistant (ODA). This involves retrieving the Agent Endpoint ID and configuring the Digital Assistant to interact with the Gen AI Agent. + +Estimated Time: 20 minutes + +### Objectives + +In this lab, you will: +- Retrieve the Agent Endpoint ID. +- Configure a Digital Assistant to interact with the Gen AI Agent. + +### Prerequisites + +This lab assumes you have: +- Access to the Gen AI Agent dashboard. +- An Oracle Digital Assistant instance. + +## Task 1: Retrieve the Agent Endpoint ID + +1. Open your Gen AI Agent dashboard and go to the **Agents** tab. + + ![Gen AI Agent Dashboard](./images/image1.png) + +2. Click on the agent you created and select the **Endpoints** tab on the left-hand side. + + ![Endpoints Tab](./images/image2.png) + +3. In the **Endpoints** tab, you will see the list of endpoints. + + ![Endpoints List](./images/image3.png) + +4. Click on the endpoint to open it. + + ![Open Endpoint](./images/image4.png) + +5. 
The OCID value shown here will be your **agentEndpointId**, which we will use in the next steps.

## Task 2: Configure a Digital Assistant

1. Open your Oracle Digital Assistant instance.
2. Click the hamburger menu in the left-hand corner and go to **Settings > API Services**.

    ![API Services Navigation](./images/image5.png)

3. Press the **Add Services** button to add the REST service used to connect to the Gen AI Agent from ODA.

    ![Add Services](./images/image6.png)

    > **Note:** We need two APIs:
    > 1. Create a Session API (called once to get the session ID).
    > 2. Execute API (to get the response from Gen AI Agents using the session ID and agent ID).

4. Let's create the Create Session API first. Fill in the details and press **Create**.

    - **Name:** genAiAgentCreateSession
    - **Endpoint:** `https://agent-runtime.generativeai.us-chicago-1.oci.oraclecloud.com/20240531/agentEndpoints/{agentEndpointId}/sessions`
    - **Method:** POST

    ![Create Session API](./images/image7.png)

5. Once the API is created, select **OCI Resource Principal** as the authentication type.

    ![Select Authentication Type](./images/image8.png)

    In the **Body** section, add:
    ```json
    {
      "idleTimeoutInSeconds": 3600
    }
    ```

    In the **Parameters** section, add the parameter `agentEndpointId`.

    ![Add Parameters](./images/image9.png)

    You can test the API using the **Test Request** button; note the `id` value (the session ID), which we will use to test the other API.

    ![Test Request](./images/image10.png)

    You can also download the API from [here](https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frpj5kvxryk1/b/genAiAgents/o/RESTService-genAiAgentCreateSession.yaml) and import it.

6. Now, let's create the Execute API. Fill in the details and press **Create**.

    - **Name:** genAiAgentExecute
    - **Endpoint:** `https://agent-runtime.generativeai.us-chicago-1.oci.oraclecloud.com/20240531/agentEndpoints/{agentEndpointId}/sessions/{sessionId}/actions/execute`
    - **Method:** POST

    ![Create Execute API](./images/image11.png)

7. Once the API is created, select **OCI Resource Principal** as the authentication type.

    ![Select Authentication Type](./images/image12.png)

    In the **Body** section, add:
    ```json
    {
      "userMessage": "Your question related to the agent created",
      "shouldStream": false
    }
    ```

    In the **Parameters** section, add the parameters `agentEndpointId` and `sessionId` (which you got from the previous API call).

    ![Add Parameters](./images/image13.png)

    You can test the API using the **Test Request** button.

    ![Test Request](./images/image10.png)

    You can also download the API from [here](https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frpj5kvxryk1/b/genAiAgents/o/RESTService-genAiAgentexecute.yaml) and import it.

## Task 3: Create a Skill in Digital Assistant

1. Go to **Development > Skills** from the hamburger menu.

    ![Skills Navigation](./images/image14.png)

2. Download the skill from [this link](https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frpj5kvxryk1/b/genAiAgents/o/genAiAgentsSkill(1.0).zip). Click on **Import Skills** in the right-hand corner and import the skill named **genAiAgentsSkill**.

3. Once the skill is imported, open it and go to the **Settings** button on the left-hand side as shown in the image below.

    ![Skill Settings](./images/image15.png)

4. Go to the **Configuration** tab under Settings.

    ![Configuration Tab](./images/image16.png)

5. 
Scroll down to **Custom Parameters**. + + ![Custom Parameters](./images/image17.png) + +6. Select the parameter `agentEndpointId`, press the edit button, and change the value field to your specific endpoint ID. Press **OK**. + + ![Edit Custom Parameter](./images/image18.png) + +7. Now, select the **Preview** button on the top right corner of your screen and type ‘hi’ in the bot tester. You will receive a welcome message. + + ![Bot Tester](./images/image19.png) + +8. Type your question to the agent and get the reply. + +## Learn More + +- [Gen AI Agent Documentation](https://docs.oracle.com/en-us/iaas/Content/genAI/getting-started.htm) +- [Oracle Digital Assistant Documentation](https://docs.oracle.com/en-us/iaas/digital-assistant/getting-started.htm) + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea, AI and App Integration Specialist Leader + +* **Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image1.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image1.png new file mode 100644 index 000000000..b7f5e3165 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image1.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image10.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image10.png new file mode 100644 index 000000000..3afe2f312 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image10.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image11.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image11.png new file mode 100644 index 000000000..2ede8bf61 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image11.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image12.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image12.png new file mode 100644 index 000000000..e68879330 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image12.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image13.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image13.png new file mode 100644 index 000000000..835b981bf Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image13.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image14.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image14.png new file mode 100644 index 000000000..11f77c11e Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image14.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image15.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image15.png new file mode 100644 index 000000000..965259138 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image15.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image16.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image16.png new file mode 100644 index 000000000..381217cdf Binary files /dev/null and 
b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image16.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image17.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image17.png new file mode 100644 index 000000000..bee71d007 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image17.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image18.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image18.png new file mode 100644 index 000000000..35d954db3 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image18.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image19.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image19.png new file mode 100644 index 000000000..53bbafeea Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image19.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image2.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image2.png new file mode 100644 index 000000000..dc89d37ca Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image2.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image3.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image3.png new file mode 100644 index 000000000..aa8866331 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image3.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image4.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image4.png new file mode 100644 index 000000000..bda86ad14 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image4.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image5.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image5.png new file mode 100644 index 000000000..2f3f88368 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image5.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image6.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image6.png new file mode 100644 index 000000000..01fd421ad Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image6.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image7.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image7.png new file mode 100644 index 000000000..6a29d1c27 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image7.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image8.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image8.png new file mode 100644 index 000000000..937422d95 Binary files /dev/null and b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image8.png differ diff --git a/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image9.png b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image9.png new file mode 100644 index 000000000..588958982 Binary files /dev/null and 
b/multichannel-oci-genaiagents/7-connect-oda-with-agents/images/image9.png differ diff --git a/multichannel-oci-genaiagents/8-connect-oda-with-slack/connect-oda-with-slack.md b/multichannel-oci-genaiagents/8-connect-oda-with-slack/connect-oda-with-slack.md new file mode 100644 index 000000000..95fc7c536 --- /dev/null +++ b/multichannel-oci-genaiagents/8-connect-oda-with-slack/connect-oda-with-slack.md @@ -0,0 +1,148 @@ +# Connecting ODA to Slack as a New Application + +## Introduction + +In this lab, we will learn how to connect Oracle Digital Assistant (ODA) to Slack as a new application. This involves creating a Slack app, configuring OAuth scopes, adding the app to a Slack workspace, and creating a channel in ODA that routes conversations to Slack. + +Estimated Time: 30 minutes + +### Objectives + +In this lab, you will: +- Get a Slack workspace. +- Create a Slack app. +- Add OAuth scopes for the Slack app. +- Add the app to the Slack workspace. +- Create a channel in Oracle Digital Assistant. +- Configure the Webhook URL in the Slack app. +- Test your bot in Slack. + +### Prerequisites + +This lab assumes you have: +- Access to a Slack workspace with permission to create a Slack app. +- An Oracle Digital Assistant instance. + +## Task 1: Get a Slack Workspace + +To make your digital assistant (or standalone bot) available in Slack, you need a Slack workspace in which you have permission to create a Slack app. + +If you don't have such a workspace, you can create your own. See Slack's [Create a new workspace](https://slack.com/create) page. + +## Task 2: Create a Slack App + +1. Go to Slack's [Your Apps](https://api.slack.com/apps) page. +2. Click **Create a Slack App**. +3. In the **Create a Slack App** dialog, fill in the **App Name** and **Development Slack Workspace** fields, and click **Create App**. + + Once the app is created, its **Basic Information** page appears. + +4. Scroll down to the **App Credentials** section and note the values of the **Client ID**, **Client Secret**, and **Signing Secret**. + + You will need these credentials when you set up the channel in Oracle Digital Assistant. + +## Task 3: Add OAuth Scopes for the Slack App + +You add OAuth scopes for the permissions that you want to give to the bot and to the user. + +1. In the left navigation of the web console for your Slack app, within the **Features** section, select **OAuth & Permissions**. +2. Scroll to the **Scopes** section of the page. +3. The scopes fall into these categories: + - **Bot Token Scopes** + - **User Token Scopes** +4. In the **Bot Token Scopes** section, add the scopes that correspond to the bot-level permissions you want to allow. At minimum, the following bot token scopes are required: + - `chat:write` + - `im:history` + - `users:read` + + Depending on the skill's features, other scopes might be required. For example, the following scopes are required for working with attachments: + - `files:read` + - `files:write` + +5. In the **User Token Scopes** section, add the scopes that correspond to the user-level permissions you want to allow. The following user token scopes are required: + - `files:read` + - `files:write` + + Depending on the requirements of your bot, you may need to add other scopes. + +## Task 4: Add the App to the Workspace + +1. Scroll back to the top of the **OAuth & Permissions** page. +2. Within the **OAuth Tokens & Redirect URLs** section, click **Install to Workspace**. + + A page appears showing what the app will be able to do. + +3. At the bottom of the page, click **Allow**. + + Once you have completed this step, you should be able to see the app in your Slack workspace by selecting **Apps** in the left navigation.
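If you want to confirm the installation before moving on to Oracle Digital Assistant, you can call Slack's `auth.test` Web API method with the **Bot User OAuth Token** that appears on the **OAuth & Permissions** page once the app is installed. The short Python sketch below is illustrative only and is not required by this lab; the token value is a placeholder and the `requests` package is an assumed dependency.

```

# Optional sanity check: verify the bot token that Slack issued at install time.
# Replace the placeholder with your own "xoxb-..." Bot User OAuth Token and
# never commit a real token to source control.
import requests  # assumed dependency: pip install requests

BOT_TOKEN = "xoxb-your-bot-user-oauth-token"  # placeholder value

response = requests.post(
    "https://slack.com/api/auth.test",
    headers={"Authorization": f"Bearer {BOT_TOKEN}"},
    timeout=10,
)
payload = response.json()

if payload.get("ok"):
    # ok=true means the token is valid and the app is installed in the workspace.
    print(f"Installed in workspace '{payload.get('team')}' as bot user '{payload.get('user')}'")
else:
    # A common failure is "invalid_auth" (mistyped or revoked token).
    print(f"auth.test failed: {payload.get('error')}")

```

A successful response only proves the token works; the scopes you granted in Task 3 determine which API calls the bot can make later.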
## Task 5: Create a Channel in Oracle Digital Assistant + +1. In Oracle Digital Assistant, click **Channels** in the left menu and then choose **Users**. +2. Click **+ Channel** to open the **Create Channel** dialog. +3. Give your channel a name. +4. Choose **Slack** as the channel type. +5. Fill in the values for **Client ID**, **Client Secret**, and **Signing Secret** that you obtained when you created your Slack app. + + You can retrieve these values from the **Basic Information** page of your Slack app. + +6. If you are setting up the channel for group chats and you want messages to go to the group without mentioning the Slack app name, select **Allow Messages Without App Mention in Group Chat**. +7. Click **Create**. +8. On the Channels page, copy the webhook URL and paste it somewhere convenient on your system. You’ll need it to finish setting up the Slack app. +9. Click the ![Route To ... dropdown icon](./images/image1.png) **Route To** dropdown and select the digital assistant or skill that you want to associate with the channel. +10. Switch on the **Channel Enabled** control. + +## Task 6: Configure the Webhook URL in the Slack App + +1. In the left navigation of the web console for your Slack app, select **Interactivity & Shortcuts**. + ![Interactivity](./images/image2.png) +2. Turn the **Interactivity** switch ON. +3. In both the **Request URL** and **Options Load URL** fields, paste the webhook URL that was generated when you created the channel in Oracle Digital Assistant. +4. Click **Save Changes**. +5. In the left navigation, select **OAuth & Permissions**. +6. In the **Redirect URLs** field, click **Add New Redirect URL**. +7. Paste the webhook URL, append `/authorizeV2`, and click **Add**. +8. Click **Save URLs**. +9. In the left navigation, select **App Home**. +10. In the **Your App’s Presence in Slack** section, turn on the **Always Show My Bot as Online** switch. +11. Scroll down to the **Show Tabs** section and turn the **Messages Tab** switch on. +12. Select the **Allow users to send Slash commands and messages from the messages tab** checkbox. +13. In the left navigation, select **Event Subscriptions**. +14. Set the **Enable Events** switch to ON. +15. In the **Request URL** field, paste the webhook URL. + + After you enter the URL, a green **Verified** label should appear next to the **Request URL** field. + +16. Expand the **Subscribe to bot events** section of the page, click **Add Bot User Event**, and add the following event: + - `message.im` + + If you plan to make the bot available in [group chats](https://docs.oracle.com/en/cloud/paas/digital-assistant/use-chatbot/group-chats.html#GUID-5C38EC0E-1D13-4BE0-BB92-735C9B53C097), also add the following events: + - `app_mention` + - `message.mpim` + - `message.channels` + +17. Click **Save Changes**. +18. In the left navigation, select **Manage Distribution**. +19. Click the **Add to Slack** button and then click **Allow**. + + At this point, you should get the message **You've successfully installed your App in Slack**.
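Behind the scenes, the green **Verified** label and the **Signing Secret** from Task 2 are two halves of the same handshake: when you save a Request URL, Slack sends a `url_verification` event whose `challenge` value must be echoed back, and every subsequent request is signed with the signing secret so the receiver can authenticate it. Oracle Digital Assistant's webhook handles this for you, so nothing below needs to be deployed for this lab. The Python sketch is only an illustration of that handshake; the Flask dependency, the `/slack/events` path, the port, and the environment variable name are all assumptions.

```

# Illustration only -- Oracle Digital Assistant already implements this handshake,
# so this server is NOT deployed as part of the lab. It shows what happens behind
# the green "Verified" label and why the Signing Secret matters.
# Assumptions: Flask is installed (pip install flask) and the signing secret is
# supplied through the SLACK_SIGNING_SECRET environment variable.
import hashlib
import hmac
import os
import time

from flask import Flask, jsonify, request

app = Flask(__name__)
SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]


def is_valid_slack_request(req) -> bool:
    """Recompute Slack's v0 signature and compare it with the X-Slack-Signature header."""
    timestamp = req.headers.get("X-Slack-Request-Timestamp", "0")
    if abs(time.time() - int(timestamp)) > 60 * 5:
        return False  # stale timestamp: reject to limit replay attacks
    base_string = f"v0:{timestamp}:{req.get_data(as_text=True)}"
    expected = "v0=" + hmac.new(
        SIGNING_SECRET.encode(), base_string.encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, req.headers.get("X-Slack-Signature", ""))


@app.route("/slack/events", methods=["POST"])  # path is an arbitrary example
def slack_events():
    if not is_valid_slack_request(request):
        return "invalid signature", 401
    body = request.get_json(silent=True) or {}
    if body.get("type") == "url_verification":
        # Echoing the challenge is what makes Slack mark the Request URL as Verified.
        return jsonify({"challenge": body["challenge"]})
    # Event deliveries such as message.im arrive here once Event Subscriptions are on.
    return "", 200


if __name__ == "__main__":
    app.run(port=3000)

```

In the lab itself, the webhook URL you pasted points at Oracle Digital Assistant, which uses the signing secret you entered in Task 5 for the same purpose.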
## Task 7: Test Your Bot in Slack + +With the Slack channel and messaging configuration complete, you can test your bot (digital assistant or skill) in Slack. + +1. Open the Slack workspace where you installed the app. +2. In the left navigation bar, select the app that is associated with your digital assistant. +3. In the **Message** field, enter text to start communicating with the digital assistant. + +## Learn More + +- [Oracle Digital Assistant Documentation](https://docs.oracle.com/en/cloud/paas/digital-assistant/index.html) +- [Slack API Documentation](https://api.slack.com/start) + +## Acknowledgements + +* **Author** - Anshuman Panda, Principal Generative AI Specialist, Alexandru Negrea, AI and App Integration Specialist Leader + +* **Last Updated By/Date** - Anshuman Panda, Principal Generative AI Specialist, Aug 2024 diff --git a/multichannel-oci-genaiagents/8-connect-oda-with-slack/images/image1.png b/multichannel-oci-genaiagents/8-connect-oda-with-slack/images/image1.png new file mode 100644 index 000000000..f75f8c607 Binary files /dev/null and b/multichannel-oci-genaiagents/8-connect-oda-with-slack/images/image1.png differ diff --git a/multichannel-oci-genaiagents/8-connect-oda-with-slack/images/image2.png b/multichannel-oci-genaiagents/8-connect-oda-with-slack/images/image2.png new file mode 100644 index 000000000..cb905cfcc Binary files /dev/null and b/multichannel-oci-genaiagents/8-connect-oda-with-slack/images/image2.png differ diff --git a/multichannel-oci-genaiagents/workshops/sandbox/index.html b/multichannel-oci-genaiagents/workshops/sandbox/index.html new file mode 100644 index 000000000..aaac634be --- /dev/null +++ b/multichannel-oci-genaiagents/workshops/sandbox/index.html @@ -0,0 +1,62 @@ + Oracle LiveLabs
diff --git a/multichannel-oci-genaiagents/workshops/sandbox/manifest.json b/multichannel-oci-genaiagents/workshops/sandbox/manifest.json new file mode 100644 index 000000000..cc25ea63c --- /dev/null +++ b/multichannel-oci-genaiagents/workshops/sandbox/manifest.json @@ -0,0 +1,46 @@ +{ + "workshoptitle": "Build an AI assistant you can access through Slack, Teams and more", + "help": "livelabs-help-oci_us@oracle.com", + "tutorials": [ + { + "title": "Overview and Highlights", + "filename": "../../1-overview-and-highlights/overview-and-highlights.md" + }, + { + "title": "Get Started", + "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md" + }, + { + "title": "Setup", + "filename": "../../2-setup/setup.md" + }, + { + "title": "LAB 1: Prepare the dataset", + "filename": "../../3-prepare-the-dataset/prepare-the-dataset.md" + }, + { + "title": "LAB 2: Create a Knowledge Base", + "filename": "../../4-create-a-knowledge-base/create-a-knowledge-base.md" + }, + { + "title": "LAB 3: Create an Agent", + "filename": "../../5-create-an-agent/create-an-agent.md" + }, + { + "title": "LAB 4: Have a conversation with your data", + "filename": "../../6-have-a-conversation-with-your-data/have-a-conversation-with-your-data.md" + }, + { + "title": "LAB 5: Connect ODA with Gen AI Agents", + "filename": "../../7-connect-oda-with-agents/connect-oda-with-agents.md" + }, + { + "title": "LAB 6: Connect ODA with Slack", + "filename": "../../8-connect-oda-with-slack/connect-oda-with-slack.md" + }, + { + "title": "Need Help?", + "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md" + } + ] +} \ No newline at end of file diff --git a/multichannel-oci-genaiagents/workshops/tenancy/index.html b/multichannel-oci-genaiagents/workshops/tenancy/index.html new file mode 100644 index 000000000..aaac634be --- /dev/null +++ b/multichannel-oci-genaiagents/workshops/tenancy/index.html @@ -0,0 +1,62 @@ + Oracle LiveLabs
diff --git a/multichannel-oci-genaiagents/workshops/tenancy/manifest.json b/multichannel-oci-genaiagents/workshops/tenancy/manifest.json new file mode 100644 index 000000000..cc25ea63c --- /dev/null +++ b/multichannel-oci-genaiagents/workshops/tenancy/manifest.json @@ -0,0 +1,46 @@ +{ + "workshoptitle": "Build an AI assistant you can access through Slack, Teams and more", + "help": "livelabs-help-oci_us@oracle.com", + "tutorials": [ + { + "title": "Overview and Highlights", + "filename": "../../1-overview-and-highlights/overview-and-highlights.md" + }, + { + "title": "Get Started", + "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md" + }, + { + "title": "Setup", + "filename": "../../2-setup/setup.md" + }, + { + "title": "LAB 1: Prepare the dataset", + "filename": "../../3-prepare-the-dataset/prepare-the-dataset.md" + }, + { + "title": "LAB 2: Create a Knowledge Base", + "filename": "../../4-create-a-knowledge-base/create-a-knowledge-base.md" + }, + { + "title": "LAB 3: Create an Agent", + "filename": "../../5-create-an-agent/create-an-agent.md" + }, + { + "title": "LAB 4: Have a conversation with your data", + "filename": "../../6-have-a-conversation-with-your-data/have-a-conversation-with-your-data.md" + }, + { + "title": "LAB 5: Connect ODA with Gen AI Agents", + "filename": "../../7-connect-oda-with-agents/connect-oda-with-agents.md" + }, + { + "title": "LAB 6: Connect ODA with Slack", + "filename": "../../8-connect-oda-with-slack/connect-oda-with-slack.md" + }, + { + "title": "Need Help?", + "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md" + } + ] +} \ No newline at end of file