update conversation > tools page with more information / examples #8113
Tools allow LLMs to take action or query information so they can respond with up-to-date information. There are three ways to define LLM tools in the Amplify AI kit:

1. Model tools
2. Query tools
3. Lambda tools

The easiest way to define tools for the LLM to use is with data models and custom queries in your data schema. When you define tools in your data schema, Amplify takes care of all of the heavy lifting required to properly implement them, such as:

* **Describing the tools to the LLM:** Each tool definition is an Amplify model query or custom query that is defined in the schema. Amplify knows the input parameters needed for that tool and describes them to the LLM.
* **Invoking the tool with the right parameters:** After the LLM requests to use a tool with the necessary input parameters, the conversation handler Lambda function invokes the tool, returns the result to the LLM, and continues the conversation.
* **Maintaining the caller identity and authorization:** Through tools, the LLM can only access data that the application user has access to. When the LLM requests to invoke a tool, it is called with the user's identity. For example, if the LLM invoked a query to list Todos, it would only return the todos that the user has access to.
## Model tools

Model tools let the LLM list and filter records of the data models defined in your schema. You can give the LLM access to a data model by referencing it in an `a.ai.dataTool()` with a reference to the model in your data schema. This requires that the model uses at least one of the following authorization strategies:

**[Per user data access](https://docs.amplify.aws/react/build-a-backend/data/customize-authz/per-user-per-owner-data-access/)**
- `owner()`
- `ownerDefinedIn()`
- `ownersDefinedIn()`

**[Any signed-in user data access](https://docs.amplify.aws/react/build-a-backend/data/customize-authz/signed-in-user-data-access/)**
- `authenticated()`

**[Per user group data access](https://docs.amplify.aws/react/build-a-backend/data/customize-authz/user-group-based-data-access/)**
- `group()`
- `groupDefinedIn()`
- `groups()`
- `groupsDefinedIn()`
```ts title="amplify/data/resource.ts"
import { type ClientSchema, a, defineData } from "@aws-amplify/backend";

const schema = a.schema({
  Post: a.model({
    title: a.string(),
    // ...
  }),

  chat: a.conversation({
    aiModel: a.ai.model('Claude 3 Haiku'),
    systemPrompt: 'Hello, world!',
    tools: [
      a.ai.dataTool({
        // The name of the tool as it will be referenced in the message to the LLM
        name: 'PostQuery',
        // The description of the tool provided to the LLM.
        // Use this to help the LLM understand when to use the tool.
        description: 'Searches for Post records',
        // A reference to the `a.model()` that the tool will use
        model: a.ref('Post'),
        // The operation to perform on the model
        modelOperation: 'list',
      }),
    ],
  }),
});
```
This will let the LLM list and filter `Post` records. Because the data schema has all the information about the shape of a `Post` record, the data tool will provide that information to the LLM so you don't have to. The Amplify AI kit also handles authorizing tool use requests based on the caller's identity, so if you have an owner-based model, the LLM will only be able to query the user's records.

<Callout type="info">

The only supported model operation is `'list'`.

</Callout>
## Query tools
You can also give the LLM access to custom queries defined in your data schema. To do so, define a custom query with a [function or custom handler](https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/) and then reference that custom query as a tool. This requires that the custom query uses the `allow.authenticated()` authorization strategy.

```ts title="amplify/data/resource.ts"
// highlight-start
import { type ClientSchema, a, defineData, defineFunction, secret } from "@aws-amplify/backend";
// highlight-end

// highlight-start
export const getWeather = defineFunction({
  name: 'getWeather',
  entry: './getWeather.ts',
  environment: {
    API_ENDPOINT: 'MY_API_ENDPOINT',
    API_KEY: secret('MY_API_KEY'),
  },
});
// highlight-end

const schema = a.schema({
  // highlight-start
  getWeather: a.query()
    .arguments({ city: a.string() })
    .returns(a.customType({
      // ...
    }))
    .handler(a.handler.function(getWeather))
    .authorization((allow) => allow.authenticated()),
  // highlight-end

  chat: a.conversation({
    aiModel: a.ai.model('Claude 3 Haiku'),
    systemPrompt: 'You are a helpful assistant',
    // highlight-start
    tools: [
      a.ai.dataTool({
        // The name of the tool as it will be referenced in the LLM prompt
        name: 'get_weather',
        // The description of the tool provided to the LLM.
        // Use this to help the LLM understand when to use the tool.
        description: 'Gets the weather for a given city',
        // A reference to the `a.query()` that the tool will invoke.
        query: a.ref('getWeather'),
      }),
    ]
    // highlight-end
  })
    .authorization((allow) => allow.owner()),
});
```
Because the definition of the query itself has the shape of the inputs and outputs (arguments and returns), the Amplify data tool can automatically tell the LLM exactly how to call the custom query.

<Callout>

The description of the tool is very important: it helps the LLM know when to use that tool. The more descriptive you are about what the tool does, the better.

</Callout>

Below is an illustrative example of a Lambda function handler for the `getWeather` query.
||
```ts title="amplify/data/getWeather.ts" | ||
import { env } from "$amplify/env/getWeather"; | ||
import type { Schema } from "./resource"; | ||
|
||
export const handler: Schema["getWeather"]["functionHandler"] = async ( | ||
event | ||
) => { | ||
// This returns a mock value, but you can connect to any API, database, or other service | ||
return { | ||
value: 42, | ||
unit: 'C' | ||
}; | ||
const { city } = event.arguments; | ||
if (!city) { | ||
throw new Error('City is required'); | ||
} | ||
|
||
const url = `${env.API_ENDPOINT}?city=${encodeURIComponent(city)}`; | ||
const request = new Request(url, { | ||
headers: { | ||
Authorization: `Bearer ${env.API_KEY}` | ||
} | ||
}); | ||
|
||
const response = await fetch(request); | ||
const weather = await response.json(); | ||
return weather; | ||
} | ||
``` | ||
|
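One detail of the handler worth noting is the use of `encodeURIComponent`, which keeps city names containing spaces or punctuation safe in the query string. In isolation it behaves like this (the `buildWeatherUrl` helper and the endpoint URL are hypothetical, introduced only for illustration):

```typescript
// Hypothetical helper mirroring the URL construction in the handler above.
function buildWeatherUrl(endpoint: string, city: string): string {
  return `${endpoint}?city=${encodeURIComponent(city)}`;
}

// buildWeatherUrl('https://api.example.com/weather', 'New York')
// → 'https://api.example.com/weather?city=New%20York'
```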
||
Lastly, you will need to update your **`amplify/backend.ts`** file to include the newly defined `getWeather` function. | ||
|
||
```ts title="amplify/backend.ts" | ||
// highlight-start | ||
import { getWeather } from "./data/resource"; | ||
// highlight-end | ||
import { defineBackend } from '@aws-amplify/backend'; | ||
import { auth } from './auth/resource'; | ||
import { data, getWeather } from './data/resource'; | ||
|
||
const backend = defineBackend({ | ||
auth, | ||
data, | ||
// highlight-start | ||
getWeather | ||
// highlight-end | ||
}); | ||
``` | ||
|
||
### Connect to any AWS Service

You can connect to any AWS service by defining a custom query and calling that service in the function handler. To authorize the custom query function to call the AWS service, you will need to grant the Lambda the proper permissions.
```ts title="amplify/backend.ts"
import { defineBackend } from "@aws-amplify/backend";
// ...

backend.getWeather.resources.lambda.addToRolePolicy(
  // ... an IAM policy statement granting access to the AWS service
)
```
## Custom Lambda Tools

You can also define a tool that executes in the conversation handler AWS Lambda function. This is useful if you want to define a tool that is not related to your data schema, or one that performs simple tasks within the Lambda function runtime.

### Install the backend-ai package

First, install the `@aws-amplify/backend-ai` package.
```bash title="Terminal"
npm install @aws-amplify/backend-ai
```
### Create a custom conversation handler function

Define a custom conversation handler function in your data schema and reference the function in the `handler` property of the `a.conversation()` definition.

```ts title="amplify/data/resource.ts"
import { type ClientSchema, a, defineData } from '@aws-amplify/backend';
import { defineConversationHandlerFunction } from '@aws-amplify/backend-ai/conversation';

export const chatHandler = defineConversationHandlerFunction({
  entry: './chatHandler.ts',
  name: 'customChatHandler',
  models: [
    // ...
  ],
});

const schema = a.schema({
  chat: a.conversation({
    // ...
    systemPrompt: "You are a helpful assistant",
    handler: chatHandler,
  })
    .authorization((allow) => allow.owner()),
});
```
### Implement the custom handler

Define the executable tool(s) and handler. Below is an illustrative example of a custom conversation handler function that defines a `calculator` tool.

```ts title="amplify/data/chatHandler.ts"
import {
  ConversationTurnEvent,
  createExecutableTool,
  handleConversationTurnEvent
} from '@aws-amplify/backend-ai/conversation/runtime';

const jsonSchema = {
  json: {
    type: 'object',
    properties: {
      'operator': {
        'type': 'string',
        'enum': ['+', '-', '*', '/'],
        'description': 'The arithmetic operator to use'
      },
      'operands': {
        'type': 'array',
        'items': {
          'type': 'number'
        },
        'minItems': 2,
        'maxItems': 2,
        'description': 'Two numbers to perform the operation on'
      }
    },
    required: ['operator', 'operands']
  }
} as const;
// declare as const to allow the input type to be derived from the JSON schema in the tool handler definition.

const calculator = createExecutableTool(
  'calculator',
  'Returns the result of a simple calculation',
  jsonSchema,
  // input type is derived from the JSON schema
  (input) => {
    const [a, b] = input.operands;
    switch (input.operator) {
      case '+': return Promise.resolve({ text: (a + b).toString() });
      case '-': return Promise.resolve({ text: (a - b).toString() });
      case '*': return Promise.resolve({ text: (a * b).toString() });
      case '/':
        if (b === 0) throw new Error('Division by zero');
        return Promise.resolve({ text: (a / b).toString() });
      default:
        throw new Error('Invalid operator');
    }
  },
);

/**
 * Handler with simple tool.
 */
export const handler = async (event: ConversationTurnEvent) => {
  await handleConversationTurnEvent(event, {
    tools: [calculator],
  });
};
```
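To build intuition for what the LLM's tool call does, the calculator callback can be exercised in isolation. This is a plain TypeScript sketch of the same logic, independent of the Amplify runtime (the `calculate` function and `CalcInput` type are illustrative, not part of the Amplify API):

```typescript
// Standalone restatement of the calculator callback for illustration only.
type CalcInput = { operator: '+' | '-' | '*' | '/'; operands: [number, number] };

async function calculate(input: CalcInput): Promise<{ text: string }> {
  const [a, b] = input.operands;
  switch (input.operator) {
    case '+': return { text: (a + b).toString() };
    case '-': return { text: (a - b).toString() };
    case '*': return { text: (a * b).toString() };
    case '/':
      if (b === 0) throw new Error('Division by zero');
      return { text: (a / b).toString() };
    default:
      throw new Error('Invalid operator');
  }
}
```

In a real conversation turn, the LLM supplies an input shaped by the JSON schema above (for example, operator `'*'` with operands `[6, 7]`), and the returned `text` is fed back to the LLM to continue its response.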
Note that we throw an error in the `calculator` tool example above if the input is invalid. This error is surfaced to the LLM by the conversation handler function. Depending on the error message, the LLM may try to use the tool again with different input, or complete its response with text for the user.

Lastly, update your backend definition to include the newly defined `chatHandler` function.

```ts title="amplify/backend.ts"
import { defineBackend } from '@aws-amplify/backend';
import { auth } from './auth/resource';
import { data, chatHandler } from './data/resource';

defineBackend({
  auth,
  data,
  chatHandler,
});
```

### Best Practices

- Validate and sanitize any input from the LLM before using it in your application. For example, don't use it directly in a database query or pass it to `eval()`.
- Handle errors gracefully and provide meaningful error messages.
- Log and monitor tool usage to detect potential misuse or issues.
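As a sketch of the first best practice, a tool handler might validate LLM-supplied input before using it anywhere else. The `validateCity` helper and its specific rules below are hypothetical, not part of the Amplify API:

```typescript
// Hypothetical validator for a city-name argument supplied by the LLM.
// Reject anything that is not a short, plain string before using it in a
// database query or an outbound request.
function validateCity(input: unknown): string {
  if (typeof input !== 'string') {
    throw new Error('city must be a string');
  }
  const city = input.trim();
  // Allow letters, spaces, hyphens, and apostrophes only, up to 64 characters.
  if (city.length === 0 || city.length > 64 || !/^[\p{L}\s'-]+$/u.test(city)) {
    throw new Error('city contains unexpected characters');
  }
  return city;
}
```

Throwing here has the same effect as the invalid-operator case in the calculator example: the error message is surfaced to the LLM, which can retry with corrected input.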