diff --git a/NOTEBOOKS/intro_function_calling.ipynb b/NOTEBOOKS/intro_function_calling.ipynb
new file mode 100644
index 0000000..1ba226c
--- /dev/null
+++ b/NOTEBOOKS/intro_function_calling.ipynb
@@ -0,0 +1,1166 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VEqbX8OhE8y9"
+ },
+ "source": [
+ "# Intro to Function Calling with the Gemini API & Python SDK"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CkHPv2myT2cx"
+ },
+ "source": [
+ "## Overview\n",
+ "\n",
+ "### Gemini\n",
+ "\n",
+ "Gemini is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases.\n",
+ "\n",
+ "### Calling functions from Gemini\n",
+ "\n",
+ "[Function Calling](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling) in Gemini lets developers create a description of a function in their code, then pass that description to a language model in a request. The response from the model includes the name of a function that matches the description and the arguments to call it with.\n",
+ "\n",
+ "### Why function calling?\n",
+ "\n",
+ "When working with generative text models, it can be difficult to coerce generative models to give consistent outputs in a structured format such as JSON. Function Calling in Gemini allows you to overcome this limitation by forcing the model to output structured data in the format and schema that you define.\n",
+ "\n",
+ "You can think of Function Calling as a way to get structured output from user prompts and function definitions, use that structured output to make an API request to an external system, then return the function response to the generative model so that it can generate a natural language summary. In other words, function calling in Gemini helps you go from unstructured text in prompt, to a structured data object, and back to natural language again."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "DrkcqHrrwMAo"
+ },
+ "source": [
+ "### Objectives\n",
+ "\n",
+ "In this tutorial, you will learn how to use the Vertex AI Gemini API with the Vertex AI SDK for Python to make function calls via the Gemini 1.0 Pro (`gemini-1.0-pro`) model.\n",
+ "\n",
+ "You will complete the following tasks:\n",
+ "\n",
+ "- Install the Vertex AI SDK for Python\n",
+ "- Use the Vertex AI Gemini API to interact with the Gemini 1.0 Pro (`gemini-1.0-pro`) model:\n",
+ " - Generate function calls from a text prompt to get the weather for a given location\n",
+ " - Generate function calls from a text prompt and call an external API to geocode addresses\n",
+ " - Generate function calls from a chat prompt to help retail users"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "C9nEPojogw-g"
+ },
+ "source": [
+ "### Costs\n",
+ "\n",
+ "This tutorial uses billable components of Google Cloud:\n",
+ "\n",
+ "- Vertex AI\n",
+ "\n",
+ "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "r11Gu7qNgx1p"
+ },
+ "source": [
+ "## Getting Started\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "No17Cw5hgx12"
+ },
+ "source": [
+ "### Install Vertex AI SDK for Python\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "tFy3H3aPgx12",
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "!pip3 install --upgrade --user google-cloud-aiplatform"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "R5Xep4W9lq-Z"
+ },
+ "source": [
+ "### Restart current runtime\n",
+ "\n",
+ "To use the newly installed packages in this Jupyter runtime, you must restart the runtime. You can do this by running the cell below, which will restart the current kernel."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "XRvKdaPDTznN",
+ "outputId": "154a71b5-f302-4f53-ed2f-b3e5fef9195b",
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Restart kernel after installs so that your environment can access the new packages\n",
+ "import IPython\n",
+ "\n",
+ "app = IPython.Application.instance()\n",
+ "app.kernel.do_shutdown(True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SbmM4z7FOBpM"
+ },
+ "source": [
+ "
\n",
+ "⚠️ The kernel is going to restart. Please wait until it is finished before continuing to the next step. ⚠️\n",
+ "
\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dmWOrTJ3gx13"
+ },
+ "source": [
+ "### Authenticate your notebook environment (Colab only)\n",
+ "\n",
+ "If you are running this notebook on Google Colab, run the following cell to authenticate your environment. This step is not required if you are using [Vertex AI Workbench](https://cloud.google.com/vertex-ai-workbench)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "NyKGtVQjgx13",
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "import sys\n",
+ "\n",
+ "if \"google.colab\" in sys.modules:\n",
+ " from google.colab import auth\n",
+ "\n",
+ " auth.authenticate_user()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "DF4l8DTdWgPY"
+ },
+ "source": [
+ "### Set Google Cloud project information and initialize Vertex AI SDK\n",
+ "\n",
+ "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
+ "\n",
+ "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "id": "Nqwi-5ufWp_B",
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n",
+ "LOCATION = \"us-central1\" # @param {type:\"string\"}\n",
+ "\n",
+ "import vertexai\n",
+ "\n",
+ "vertexai.init(project=PROJECT_ID, location=LOCATION)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Code Examples"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "jXHfaVS66_01"
+ },
+ "source": [
+ "### Import libraries\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "id": "lslYAvw37JGQ",
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "import requests\n",
+ "from vertexai.generative_models import (\n",
+ " Content,\n",
+ " FunctionDeclaration,\n",
+ " GenerationConfig,\n",
+ " GenerativeModel,\n",
+ " Part,\n",
+ " Tool,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Chat example: Using Function Calling in a chat session to answer user's questions about the Google Store"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this example, you'll use Function Calling along with the chat modality in the Gemini model to help customers get information about products in the Google Store.\n",
+ "\n",
+ "You'll start by defining three functions: one to get product information, another to get the location of the closest stores, and one more to place an order:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "get_product_info = FunctionDeclaration(\n",
+ " name=\"get_product_info\",\n",
+ " description=\"Get the stock amount and identifier for a given product\",\n",
+ " parameters={\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"product_name\": {\"type\": \"string\", \"description\": \"Product name\"}\n",
+ " },\n",
+ " },\n",
+ ")\n",
+ "\n",
+ "get_store_location = FunctionDeclaration(\n",
+ " name=\"get_store_location\",\n",
+ " description=\"Get the location of the closest store\",\n",
+ " parameters={\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"Location\"}},\n",
+ " },\n",
+ ")\n",
+ "\n",
+ "place_order = FunctionDeclaration(\n",
+ " name=\"place_order\",\n",
+ " description=\"Place an order\",\n",
+ " parameters={\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"product\": {\"type\": \"string\", \"description\": \"Product name\"},\n",
+ " \"address\": {\"type\": \"string\", \"description\": \"Shipping address\"},\n",
+ " },\n",
+ " },\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that function parameters are specified as a Python dictionary in accordance with the [OpenAPI JSON schema format](https://spec.openapis.org/oas/v3.0.3#schemawr).\n",
+ "\n",
+ "Define a tool that allows the Gemini model to select from the set of 3 functions:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "retail_tool = Tool(\n",
+ " function_declarations=[\n",
+ " get_product_info,\n",
+ " get_store_location,\n",
+ " place_order,\n",
+ " ],\n",
+ ")"
+ ]
+ },
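+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The parameter dictionaries above use only `type`, `properties`, and `description`. As an optional aside (a sketch based on the OpenAPI schema keywords documented for Vertex AI function calling, and not used elsewhere in this notebook), the same format also lets you mark parameters as required or constrain a string to a fixed set of values:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hedged variant (not wired into retail_tool): the OpenAPI-style schema can also\n",
+ "# carry `required` and `enum` constraints. `store_region` is a hypothetical field.\n",
+ "get_product_info_strict = FunctionDeclaration(\n",
+ "    name=\"get_product_info_strict\",\n",
+ "    description=\"Get the stock amount and identifier for a given product\",\n",
+ "    parameters={\n",
+ "        \"type\": \"object\",\n",
+ "        \"properties\": {\n",
+ "            \"product_name\": {\"type\": \"string\", \"description\": \"Product name\"},\n",
+ "            \"store_region\": {\n",
+ "                \"type\": \"string\",\n",
+ "                \"description\": \"Sales region to check\",\n",
+ "                \"enum\": [\"US\", \"CA\", \"JP\"],\n",
+ "            },\n",
+ "        },\n",
+ "        \"required\": [\"product_name\"],\n",
+ "    },\n",
+ ")"
+ ]
+ },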
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now you can initialize the Gemini model with Function Calling in a multi-turn chat session.\n",
+ "\n",
+ "You can specify the `tools` kwarg when initializing the model to avoid having to send this kwarg with every subsequent request:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "model = GenerativeModel(\n",
+ " \"gemini-1.0-pro-001\",\n",
+ " generation_config=GenerationConfig(temperature=0),\n",
+ " tools=[retail_tool],\n",
+ ")\n",
+ "chat = model.start_chat()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We're ready to chat! Let's start the conversation by asking if a certain product is in stock:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "function_call {\n",
+ " name: \"get_product_info\"\n",
+ " args {\n",
+ " fields {\n",
+ " key: \"product_name\"\n",
+ " value {\n",
+ " string_value: \"Pixel 8 Pro\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "}"
+ ]
+ },
+ "execution_count": 9,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "prompt = \"\"\"\n",
+ "Do you have the Pixel 8 Pro in stock?\n",
+ "\"\"\"\n",
+ "\n",
+ "response = chat.send_message(prompt)\n",
+ "response.candidates[0].content.parts[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The response from the Gemini API consists of a structured data object that contains the name and parameters of the function that Gemini selected out of the available functions.\n",
+ "\n",
+ "Since this notebook focuses on the ability to extract function parameters and generate function calls, you'll use mock data to feed synthetic responses back to the Gemini model rather than sending a request to an API server (not to worry, we'll make an actual API call in a later example!):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Here you can use your preferred method to make an API request and get a response.\n",
+ "# In this example, we'll use synthetic data to simulate a payload from an external API response.\n",
+ "\n",
+ "api_response = {\"sku\": \"GA04834-US\", \"in_stock\": \"yes\"}"
+ ]
+ },
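+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For illustration, here's a hedged sketch of how the predicted function call could be routed to a handler in your own code. The `lookup_product_info` function is a hypothetical stand-in that simply returns the same synthetic payload:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hedged sketch (not required for this tutorial): route the predicted function\n",
+ "# call to a handler. `lookup_product_info` is a hypothetical stand-in.\n",
+ "def lookup_product_info(product_name: str) -> dict:\n",
+ "    return {\"sku\": \"GA04834-US\", \"in_stock\": \"yes\"}\n",
+ "\n",
+ "\n",
+ "handlers = {\"get_product_info\": lookup_product_info}\n",
+ "\n",
+ "function_call = response.candidates[0].content.parts[0].function_call\n",
+ "kwargs = {key: function_call.args[key] for key in function_call.args}\n",
+ "api_response = handlers[function_call.name](**kwargs)\n",
+ "api_response"
+ ]
+ },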
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In reality, you would execute function calls against an external system or database using your desired client library or REST API.\n",
+ "\n",
+ "Now, you can pass the response from the (mock) API request and generate a response for the end user:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "text: \"Yes, we have the Pixel 8 Pro in stock.\""
+ ]
+ },
+ "execution_count": 11,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "response = chat.send_message(\n",
+ " Part.from_function_response(\n",
+ " name=\"get_product_sku\",\n",
+ " response={\n",
+ " \"content\": api_response,\n",
+ " },\n",
+ " ),\n",
+ ")\n",
+ "response.candidates[0].content.parts[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Next, the user might ask where they can buy a different phone from a nearby store:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "function_call {\n",
+ " name: \"get_store_location\"\n",
+ " args {\n",
+ " fields {\n",
+ " key: \"location\"\n",
+ " value {\n",
+ " string_value: \"Mountain View, CA\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "}"
+ ]
+ },
+ "execution_count": 12,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "prompt = \"\"\"\n",
+ "What about the Pixel 8? Is there a store in\n",
+ "Mountain View, CA that I can visit to try one out?\n",
+ "\"\"\"\n",
+ "\n",
+ "response = chat.send_message(prompt)\n",
+ "response.candidates[0].content.parts[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Again, you get a response with structured data. This time, the Gemini model selected the `get_store_location` function.\n",
+ "\n",
+ "Now you can build another synthetic payload that would come from an external API:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Here you can use your preferred method to make an API request and get a response.\n",
+ "# In this example, we'll use synthetic data to simulate a payload from an external API response.\n",
+ "\n",
+ "api_response = {\"store\": \"2000 N Shoreline Blvd, Mountain View, CA 94043, US\"}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Again, you can pass the response from the (mock) API request back to the Gemini model:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "function_call {\n",
+ " name: \"get_product_info\"\n",
+ " args {\n",
+ " fields {\n",
+ " key: \"product_name\"\n",
+ " value {\n",
+ " string_value: \"Pixel 8\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "}"
+ ]
+ },
+ "execution_count": 14,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "response = chat.send_message(\n",
+ " Part.from_function_response(\n",
+ " name=\"get_store_location\",\n",
+ " response={\n",
+ " \"content\": api_response,\n",
+ " },\n",
+ " ),\n",
+ ")\n",
+ "response.candidates[0].content.parts[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Wait a minute! Why did the Gemini API respond with a second function call rather than a natural language summary? Look closely at the prompt that you used in this conversation turn a few cells up, and you'll notice that the user asked about a product -and- the location of a store.\n",
+ "\n",
+ "In cases like this when two or more functions are defined, the Gemini model might sometimes return back-to-back function call responses within a single conversation turn. This is expected behavior since the Gemini model predicts which functions it should call at runtime so that it can gather enough information to generate a natural language response.\n",
+ "\n",
+ "Not to worry, you can repeat the same steps as before and build another synthetic payload that would come from an external API:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Here you can use your preferred method to make an API request and get a response.\n",
+ "# In this example, we'll use synthetic data to simulate a payload from an external API response.\n",
+ "\n",
+ "api_response = {\"sku\": \"GA08475-US\", \"in_stock\": \"yes\"}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And you can pass the response from the (mock) API request back to the Gemini model:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "text: \"Yes, we have the Pixel 8 in stock. There is a store in Mountain View, CA at 2000 N Shoreline Blvd, Mountain View, CA 94043, US where you can try one out.\""
+ ]
+ },
+ "execution_count": 16,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "response = chat.send_message(\n",
+ " Part.from_function_response(\n",
+ " name=\"get_product_info\",\n",
+ " response={\n",
+ " \"content\": api_response,\n",
+ " },\n",
+ " ),\n",
+ ")\n",
+ "response.candidates[0].content.parts[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Nice work!\n",
+ "\n",
+ "Within a single conversation turn, the Gemini model requested 2 function calls in a row before returning a natural language summary. In reality, you might follow this pattern if you need to make an API call to an inventory management system, and another call to a store location database, customer management system, or document repository.\n",
+ "\n",
+ "Finally, the user might ask to order a phone and have it shipped to their address:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "function_call {\n",
+ " name: \"place_order\"\n",
+ " args {\n",
+ " fields {\n",
+ " key: \"product\"\n",
+ " value {\n",
+ " string_value: \"Pixel 8 Pro\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"address\"\n",
+ " value {\n",
+ " string_value: \"1155 Borregas Ave, Sunnyvale, CA 94089\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "}"
+ ]
+ },
+ "execution_count": 17,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "prompt = \"\"\"\n",
+ "I'd like to order a Pixel 8 Pro and have it shipped to 1155 Borregas Ave, Sunnyvale, CA 94089.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = chat.send_message(prompt)\n",
+ "response.candidates[0].content.parts[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Perfect! The Gemini model extracted the user's selected product and their address. Now you can call an API to place the order:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# This is where you would make an API request to return the status of their order.\n",
+ "# Use synthetic data to simulate a response payload from an external API.\n",
+ "\n",
+ "api_response = {\n",
+ " \"payment_status\": \"paid\",\n",
+ " \"order_number\": 12345,\n",
+ " \"est_arrival\": \"2 days\",\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And send the payload from the external API call so that the Gemini API returns a natural language summary to the end user."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "text: \"OK. I have placed an order for a Pixel 8 Pro and it will be shipped to 1155 Borregas Ave, Sunnyvale, CA 94089. You can expect delivery in 2 days. Your order number is 12345.\""
+ ]
+ },
+ "execution_count": 19,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "response = chat.send_message(\n",
+ " Part.from_function_response(\n",
+ " name=\"place_order\",\n",
+ " response={\n",
+ " \"content\": api_response,\n",
+ " },\n",
+ " ),\n",
+ ")\n",
+ "response.candidates[0].content.parts[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And you're done!\n",
+ "\n",
+ "You were able to have a multi-turn conversation with the Gemini model using function calls, handling payloads, and generating natural language summaries that incorporated the information from the external systems."
+ ]
+ },
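+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The turn-by-turn pattern above can also be wrapped in a small helper. The sketch below is an illustration rather than part of the original flow: `call_external_api` is a hypothetical dispatcher you would replace with calls to your real systems, and the loop assumes that a plain text response exposes an empty `function_call` name:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hedged sketch: keep resolving function calls until the model returns text.\n",
+ "def call_external_api(function_call) -> dict:\n",
+ "    # Hypothetical: dispatch function_call.name and function_call.args to your\n",
+ "    # real backend systems and return their payload.\n",
+ "    return {\"status\": \"ok\"}\n",
+ "\n",
+ "\n",
+ "def send_and_resolve(chat_session, prompt: str) -> str:\n",
+ "    response = chat_session.send_message(prompt)\n",
+ "    part = response.candidates[0].content.parts[0]\n",
+ "    # Assumption: a plain text part exposes an empty function_call name.\n",
+ "    while part.function_call.name:\n",
+ "        api_response = call_external_api(part.function_call)\n",
+ "        response = chat_session.send_message(\n",
+ "            Part.from_function_response(\n",
+ "                name=part.function_call.name,\n",
+ "                response={\"content\": api_response},\n",
+ "            )\n",
+ "        )\n",
+ "        part = response.candidates[0].content.parts[0]\n",
+ "    return part.text"
+ ]
+ },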
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Address example: Using Function Calling to geocode addresses with a maps API"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this example, you'll use the text modality in the Gemini API to define a function that takes multiple parameters as inputs. You'll use the function call response to then make a live API call to convert an address to latitude and longitude coordinates.\n",
+ "\n",
+ "Start by defining a function declaration and wrapping it in a tool:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "get_location = FunctionDeclaration(\n",
+ " name=\"get_location\",\n",
+ " description=\"Get latitude and longitude for a given location\",\n",
+ " parameters={\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"poi\": {\"type\": \"string\", \"description\": \"Point of interest\"},\n",
+ " \"street\": {\"type\": \"string\", \"description\": \"Street name\"},\n",
+ " \"city\": {\"type\": \"string\", \"description\": \"City name\"},\n",
+ " \"county\": {\"type\": \"string\", \"description\": \"County name\"},\n",
+ " \"state\": {\"type\": \"string\", \"description\": \"State name\"},\n",
+ " \"country\": {\"type\": \"string\", \"description\": \"Country name\"},\n",
+ " \"postal_code\": {\"type\": \"string\", \"description\": \"Postal code\"},\n",
+ " },\n",
+ " },\n",
+ ")\n",
+ "\n",
+ "location_tool = Tool(\n",
+ " function_declarations=[get_location],\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this example, you're asking the Gemini model to extract components of the address into specific fields within a structured data object. You can then map this data to specific input fields to use with your REST API or client library.\n",
+ "\n",
+ "Send a prompt that includes an address, such as:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "function_call {\n",
+ " name: \"get_location\"\n",
+ " args {\n",
+ " fields {\n",
+ " key: \"street\"\n",
+ " value {\n",
+ " string_value: \"1600 Amphitheatre Pkwy\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"state\"\n",
+ " value {\n",
+ " string_value: \"CA\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"postal_code\"\n",
+ " value {\n",
+ " string_value: \"94043\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"country\"\n",
+ " value {\n",
+ " string_value: \"US\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"city\"\n",
+ " value {\n",
+ " string_value: \"Mountain View\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "}"
+ ]
+ },
+ "execution_count": 21,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "prompt = \"\"\"\n",
+ "I want to get the coordinates for the following address:\n",
+ "1600 Amphitheatre Pkwy, Mountain View, CA 94043, US\n",
+ "\"\"\"\n",
+ "\n",
+ "response = model.generate_content(\n",
+ " prompt,\n",
+ " generation_config=GenerationConfig(temperature=0),\n",
+ " tools=[location_tool],\n",
+ ")\n",
+ "response.candidates[0].content.parts[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now you can reference the parameters from the function call and make a live API request:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[{'place_id': 377680635,\n",
+ " 'licence': 'Data © OpenStreetMap contributors, ODbL 1.0. http://osm.org/copyright',\n",
+ " 'osm_type': 'node',\n",
+ " 'osm_id': 2192620021,\n",
+ " 'lat': '37.4217636',\n",
+ " 'lon': '-122.084614',\n",
+ " 'class': 'office',\n",
+ " 'type': 'it',\n",
+ " 'place_rank': 30,\n",
+ " 'importance': 0.6949356759210291,\n",
+ " 'addresstype': 'office',\n",
+ " 'name': 'Google Headquarters',\n",
+ " 'display_name': 'Google Headquarters, 1600, Amphitheatre Parkway, Mountain View, Santa Clara County, California, 94043, United States',\n",
+ " 'boundingbox': ['37.4217136', '37.4218136', '-122.0846640', '-122.0845640']}]"
+ ]
+ },
+ "execution_count": 22,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "x = response.candidates[0].content.parts[0].function_call.args\n",
+ "\n",
+ "url = \"https://nominatim.openstreetmap.org/search?\"\n",
+ "for i in x:\n",
+ " url += '{}=\"{}\"&'.format(i, x[i])\n",
+ "url += \"format=json\"\n",
+ "\n",
+ "headers = {\n",
+ " \"User-Agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36\"\n",
+ "}\n",
+ "x = requests.get(url, headers=headers)\n",
+ "content = x.json()\n",
+ "content\n",
+ "\n",
+ "# Note: if you get a JSONDecodeError when running this cell, try modifying the\n",
+ "# user agent string in the `headers=` line of code in this cell and re-run."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Great work! You were able to define a function that the Gemini model used to extract the relevant parameters from the prompt. Then you made a live API call to obtain the coordinates of the specified location.\n",
+ "\n",
+ "Here we used the [OpenStreetMap Nominatim API](https://nominatim.openstreetmap.org/ui/search.html) to geocode an address to keep the number of steps in this tutorial to a reasonable number. If you're working with large amounts of address or geolocation data, you can also use the [Google Maps Geocoding API](https://developers.google.com/maps/documentation/geocoding), or any mapping service with an API!"
+ ]
+ },
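+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As an optional refinement (a sketch, not part of the original flow), you can let `requests` handle URL encoding by passing the extracted arguments as query parameters. Nominatim's structured search expects `postalcode` rather than `postal_code`, so renaming that key below is an assumption about the external API:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sketch: the same request built with `params=` so that requests\n",
+ "# handles URL encoding. Renaming `postal_code` -> `postalcode` is an assumption\n",
+ "# about Nominatim's structured query parameters.\n",
+ "function_args = response.candidates[0].content.parts[0].function_call.args\n",
+ "\n",
+ "params = {key: function_args[key] for key in function_args}\n",
+ "if \"postal_code\" in params:\n",
+ "    params[\"postalcode\"] = params.pop(\"postal_code\")\n",
+ "params[\"format\"] = \"json\"\n",
+ "\n",
+ "requests.get(\n",
+ "    \"https://nominatim.openstreetmap.org/search\",\n",
+ "    params=params,\n",
+ "    headers=headers,\n",
+ ").json()"
+ ]
+ },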
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Logging example: Using Function Calling for entity extraction only"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In the previous examples, we made use of the entity extraction functionality within Gemini Function Calling so that we could pass the resulting parameters to a REST API or client library. However, you might want to only perform the entity extraction step with Gemini Function Calling and stop there without actually calling an API. You can think of this functionality as a convenient way to transform unstructured text data into structured fields.\n",
+ "\n",
+ "In this example, you'll build a log extractor that takes raw log data and transforms it into structured data with details about error messages.\n",
+ "\n",
+ "You'll start by specifying a function declaration that represents the schema of the Function Call:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "extract_log_data = FunctionDeclaration(\n",
+ " name=\"extract_log_data\",\n",
+ " description=\"Extract details from error messages in raw log data\",\n",
+ " parameters={\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"locations\": {\n",
+ " \"type\": \"array\",\n",
+ " \"description\": \"Errors\",\n",
+ " \"items\": {\n",
+ " \"description\": \"Details of the error\",\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"error_message\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"Full error message\",\n",
+ " },\n",
+ " \"error_code\": {\"type\": \"string\", \"description\": \"Error code\"},\n",
+ " \"error_type\": {\"type\": \"string\", \"description\": \"Error type\"},\n",
+ " },\n",
+ " },\n",
+ " }\n",
+ " },\n",
+ " },\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can then define a tool for the generative model to call that includes the `extract_log_data`:\n",
+ "\n",
+ "Define a tool for the Gemini model to use that includes the log extractor function:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "extraction_tool = Tool(\n",
+ " function_declarations=[extract_log_data],\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can then pass the sample log data to the Gemini model. The model will call the log extractor function, and the model output will be a Function Call response."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "name: \"extract_log_data\"\n",
+ "args {\n",
+ " fields {\n",
+ " key: \"locations\"\n",
+ " value {\n",
+ " list_value {\n",
+ " values {\n",
+ " struct_value {\n",
+ " fields {\n",
+ " key: \"error_type\"\n",
+ " value {\n",
+ " string_value: \"ERROR\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"error_message\"\n",
+ " value {\n",
+ " string_value: \"Could not process image upload: Unsupported file format.\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"error_code\"\n",
+ " value {\n",
+ " string_value: \"308\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " values {\n",
+ " struct_value {\n",
+ " fields {\n",
+ " key: \"error_type\"\n",
+ " value {\n",
+ " string_value: \"ERROR\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"error_message\"\n",
+ " value {\n",
+ " string_value: \"Service dependency unavailable (payment gateway). Retrying...\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"error_code\"\n",
+ " value {\n",
+ " string_value: \"5522\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " values {\n",
+ " struct_value {\n",
+ " fields {\n",
+ " key: \"error_type\"\n",
+ " value {\n",
+ " string_value: \"ERROR\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"error_message\"\n",
+ " value {\n",
+ " string_value: \"Application crashed due to out-of-memory exception.\"\n",
+ " }\n",
+ " }\n",
+ " fields {\n",
+ " key: \"error_code\"\n",
+ " value {\n",
+ " string_value: \"9001\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "}"
+ ]
+ },
+ "execution_count": 25,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "prompt = \"\"\"\n",
+ "[15:43:28] ERROR: Could not process image upload: Unsupported file format. (Error Code: 308)\n",
+ "[15:44:10] INFO: Search index updated successfully. \n",
+ "[15:45:02] ERROR: Service dependency unavailable (payment gateway). Retrying... (Error Code: 5522) \n",
+ "[15:45:33] ERROR: Application crashed due to out-of-memory exception. (Error Code: 9001) \n",
+ "\"\"\"\n",
+ "\n",
+ "response = model.generate_content(\n",
+ " prompt,\n",
+ " generation_config=GenerationConfig(temperature=0),\n",
+ " tools=[extraction_tool],\n",
+ ")\n",
+ "\n",
+ "response.candidates[0].content.parts[0].function_call"
+ ]
+ },
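+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `args` payload above is a protobuf-backed structure rather than plain Python types. As a hedged sketch (assuming the nested values support mapping-style access, as in the geocoding example), you can copy the extracted details into ordinary dictionaries for a downstream pipeline:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hedged sketch: copy the extracted error details into plain Python dictionaries.\n",
+ "# Assumption: function_call.args supports mapping-style access, as used above.\n",
+ "function_call = response.candidates[0].content.parts[0].function_call\n",
+ "\n",
+ "log_errors = []\n",
+ "for error in function_call.args[\"locations\"]:\n",
+ "    log_errors.append(\n",
+ "        {\n",
+ "            \"error_code\": error[\"error_code\"],\n",
+ "            \"error_type\": error[\"error_type\"],\n",
+ "            \"error_message\": error[\"error_message\"],\n",
+ "        }\n",
+ "    )\n",
+ "log_errors"
+ ]
+ },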
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The response includes a structured data object that contains the details of the error messages that appear in the log."
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "toc_visible": true
+ },
+ "environment": {
+ "kernel": "conda-root-py",
+ "name": "workbench-notebooks.m115",
+ "type": "gcloud",
+ "uri": "gcr.io/deeplearning-platform-release/workbench-notebooks:m115"
+ },
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.8"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}