
docs #14

Open
wants to merge 11 commits into base: main
10 changes: 2 additions & 8 deletions docs/src/docs/_meta.json
@@ -17,14 +17,8 @@
},
{
"type": "dir",
"name": "slack",
"label": "Slack",
"collapsed": true
},
{
"type": "dir",
"name": "discord",
"label": "Discord",
"name": "integrations",
"label": "Integrations",
"collapsed": true
}
]
15 changes: 14 additions & 1 deletion docs/src/docs/about.md
@@ -1,3 +1,16 @@
# About byorg-ai

This is the main section about the byorg-ai framework.
## Introduction

byorg-ai is a framework for writing AI assistants. It helps with handling requests to an LLM provider.

## Supported Integrations

- Slack
- Discord

:::info

byorg-ai does not provide inference or hosting options; we help you integrate your own.

:::
40 changes: 40 additions & 0 deletions docs/src/docs/core/_meta.json
@@ -3,5 +3,45 @@
"type": "file",
"name": "usage",
"label": "Usage"
},
{
"type": "file",
"name": "chat-model",
"label": "Chat Model"
},
{
"type": "file",
"name": "system-prompt",
"label": "System Prompt"
},
{
"type": "file",
"name": "context",
"label": "Context"
},
{
"type": "file",
"name": "plugins",
"label": "Plugins"
},
{
"type": "file",
"name": "tools",
"label": "Tools"
},
{
"type": "file",
"name": "references",
"label": "References"
},
{
"type": "file",
"name": "performance",
"label": "Performance"
},
{
"type": "file",
"name": "error-handling",
"label": "Error Handling"
}
]
49 changes: 49 additions & 0 deletions docs/src/docs/core/chat-model.md
@@ -0,0 +1,49 @@
# Chat Model

## Providers and Adapter

To speed up development of the framework, we decided to use the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) as the interface for handling LLM models.

See the [list of all providers](https://sdk.vercel.ai/providers/ai-sdk-providers) integrated with the Vercel AI SDK.

### Providers Examples

```js
import { createOpenAI } from '@ai-sdk/openai';
import { createAzure } from '@ai-sdk/azure';
import { createMistral } from '@ai-sdk/mistral';

const azureProvider = createAzure({
  resourceName: 'your-resource-name',
  apiKey: 'your-api-key',
});

const openAiProvider = createOpenAI({
  apiKey: 'your-api-key',
  compatibility: 'strict',
});

const mistralProvider = createMistral({
  // custom settings
});
```

After instantiating a provider client, wrap a model from it in the `VercelChatModelAdapter` class:

```js
import { VercelChatModelAdapter } from '@callstack/byorg-core';

// Model and deployment names below are examples; use the ones available in your setup.
const openAiChatModel = new VercelChatModelAdapter({
  languageModel: openAiProvider('gpt-4o'),
});

const azureChatModel = new VercelChatModelAdapter({
  languageModel: azureProvider('your-deployment-name'),
});

const mistralChatModel = new VercelChatModelAdapter({
  languageModel: mistralProvider('mistral-large-latest'),
});
```

Now that the `chatModel` is ready, let's discuss the `systemPrompt` function.
40 changes: 40 additions & 0 deletions docs/src/docs/core/context.md
@@ -0,0 +1,40 @@
# Context

Context is an object that holds information about the currently processed message. It allows you to change the behavior of your assistant at runtime, or to alter the flow of a processed message.

Plugins can manipulate the context by adding or removing values.

To add typings for your own properties on the context, create a declaration file and augment the `MessageRequestExtras` interface:

```js
declare module '@callstack/byorg-core' {
  interface MessageRequestExtras {
    // Here you can add your own properties
    example?: string;
    messagesCount?: number;
    isAdmin?: boolean;
  }
}

export {};
```

:::danger

All custom properties must be optional, as context creation currently doesn't support default values for custom objects.

:::

After setting extras, you can access them from the context object:

```js
export const systemPrompt = (context: RequestContext): Promise<string> | string => {
  if (context.extras.isAdmin) {
    return `You are currently talking to an admin.`;
  }

  return `You are talking to a user with regular permissions.`;
};
```

Next, we'll go through the concept of `plugins` to understand how to modify the `context`.
23 changes: 23 additions & 0 deletions docs/src/docs/core/error-handling.md
@@ -0,0 +1,23 @@
# Error Handling

To react to errors thrown by byorg, you can pass your own error handler.

```js
function handleError(error: unknown): SystemResponse {
  logger.error('Unhandled error:', error);

  return {
    role: 'system',
    content: 'There was a problem with Assistant. Please try again later or contact administrator.',
    error,
  };
}

const app = createApp({
  chatModel,
  systemPrompt,
  errorHandler: handleError,
});
```

The error handler allows you to implement a custom reaction to thrown errors and send feedback to the user.
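Conceptually, this works like a try/catch wrapped around message processing. The sketch below is illustrative only (the `failingHandler` and the wrapper function are assumptions for the example, not byorg internals):

```js
// Illustrative sketch: wrap a message handler so that any thrown error
// is converted into a system response via the provided error handler.
function createSafeProcessor(handler, errorHandler) {
  return async function processMessage(message) {
    try {
      return await handler(message);
    } catch (error) {
      return errorHandler(error);
    }
  };
}

// Hypothetical handler that always fails, for demonstration.
const failingHandler = async () => {
  throw new Error('LLM provider unavailable');
};

const processMessage = createSafeProcessor(failingHandler, (error) => ({
  role: 'system',
  content: 'There was a problem with Assistant. Please try again later.',
  error,
}));
```

Calling `processMessage('Hi')` resolves to the system response instead of rejecting, which is what lets the user receive feedback.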
62 changes: 62 additions & 0 deletions docs/src/docs/core/performance.md
@@ -0,0 +1,62 @@
# Performance

If you'd like to measure your application's performance, you can use the `performance` object from the context.

```js
import { ApplicationPlugin, MessageResponse } from '@callstack/byorg-core';

const slowPlugin: ApplicationPlugin = {
  name: 'slow-plugin',
  middleware: async (context, next): Promise<MessageResponse> => {
    context.performance.markStart('SlowPluginPerformance');
    await slowFunction();
    context.performance.markEnd('SlowPluginPerformance');

    // Continue middleware chain
    return next();
  },
};
```

After gathering your measurements, you can access them through the same object:

```js
const analyticsPlugin: ApplicationPlugin = {
  name: 'analytics',
  effects: [analyticsEffect],
};

async function analyticsEffect(context: RequestContext, response: MessageResponse): Promise<void> {
  console.log(context.performance.getMeasureTotal('SlowPluginPerformance'));
}
```

## Measures vs Marks

Marks are named points in time recorded by the performance tool, and a single measure is constructed from two marks: `start` and `end`. Say you have a tool for your AI and want to check how it performs; since the AI may trigger it multiple times per request, one mark name can be part of multiple measures.
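To make this concrete, here is a minimal sketch of those semantics (an assumed, simplified model — not byorg's actual implementation): repeated start/end pairs under the same mark name accumulate into one total.

```js
// Simplified model of marks and measures: each markStart/markEnd pair
// produces one measure, and measures sharing a mark name add up.
class PerformanceTracker {
  constructor() {
    this.pending = new Map(); // mark name -> start timestamp
    this.measures = new Map(); // mark name -> list of durations
  }

  markStart(name) {
    this.pending.set(name, Date.now());
  }

  markEnd(name) {
    const start = this.pending.get(name);
    if (start === undefined) return;
    const durations = this.measures.get(name) ?? [];
    durations.push(Date.now() - start);
    this.measures.set(name, durations);
    this.pending.delete(name);
  }

  getMeasures(name) {
    return this.measures.get(name) ?? [];
  }

  getMeasureTotal(name) {
    return this.getMeasures(name).reduce((sum, duration) => sum + duration, 0);
  }
}
```

Two start/end pairs for the same name yield two measures, while `getMeasureTotal` reports a single combined duration.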

:::info

You can also access all marks and measures with `getMarks` and `getMeasures`.

:::

## Default measures

byorg gathers performance data out of the box:

```js
export const PerformanceMarks = {
  processMessages: 'processMessages',
  middlewareBeforeHandler: 'middleware:beforeHandler',
  middlewareAfterHandler: 'middleware:afterHandler',
  chatModel: 'chatModel',
  toolExecution: 'toolExecution',
  errorHandler: 'errorHandler',
} as const;
```

Middleware measures are gathered in two separate sections: before handling the response and after it.
70 changes: 70 additions & 0 deletions docs/src/docs/core/plugins.md
@@ -0,0 +1,70 @@
# Plugins

Plugins are your way to modify the context before it reaches the inference and AI response phase. Each plugin consists of a name, optional middleware, and optional effects.

## Middleware and Effects

Those two concepts are similar, but there is one important difference:

- Middlewares run before the response is sent to the user
- Effects run after it

Because of that, you can use effects for gathering usage analytics or logging to a database without adding any waiting time for the user.
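The ordering can be sketched with a toy pipeline (an assumed, simplified model — `runPipeline` is not a byorg API): middlewares wrap the response step, and effects run only once the response exists.

```js
// Toy pipeline: middlewares form a chain around the response producer;
// effects run afterwards and therefore never delay the reply.
const order = [];

const loggingMiddleware = async (next) => {
  order.push('middleware');
  return next(); // continue the chain
};

const analyticsEffect = async () => {
  order.push('effect');
};

async function runPipeline(middlewares, produceResponse, effects) {
  // Build the chain from the inside out.
  let next = async () => {
    order.push('response');
    return produceResponse();
  };
  for (const middleware of [...middlewares].reverse()) {
    const inner = next;
    next = () => middleware(inner);
  }

  const response = await next();

  // Effects run only after the response is ready.
  for (const effect of effects) {
    await effect(response);
  }
  return response;
}
```

Running `runPipeline([loggingMiddleware], async () => 'ok', [analyticsEffect])` records the order `middleware, response, effect`.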

## Middleware Example

Let's create a middleware that enriches the context for our system prompt function.

```js
import { ApplicationPlugin, MessageResponse } from '@callstack/byorg-core';

const isAdminPlugin: ApplicationPlugin = {
  name: 'is-admin',
  middleware: async (context, next): Promise<MessageResponse> => {
    const isAdmin = await checkIfUserIsAdmin(context.lastMessage.senderId);

    context.extras.isAdmin = isAdmin;

    // Continue middleware chain
    return next();
  },
};
```

## Effect Example

Now let's create an effect that runs after we get and process a response from the AI. If the user is an admin, or the response ended with an error, we'll do nothing; otherwise we'll increase the user's message count.

```js
import { ApplicationPlugin, MessageResponse, RequestContext } from '@callstack/byorg-core';

const usageCountPlugin: ApplicationPlugin = {
  name: 'usage-count',
  effects: [counterEffect],
};

async function counterEffect(context: RequestContext, response: MessageResponse): Promise<void> {
  const { isAdmin } = context.extras;

  if (response.error || isAdmin) {
    return;
  }

  await increaseMsgsCount(context.lastMessage.senderId);
}
```

## Connecting Plugins

Now that we've written our plugins, let's connect them to the app:

```js
const app = createApp({
  chatModel,
  plugins: [usageCountPlugin, isAdminPlugin],
  systemPrompt,
});
```
52 changes: 52 additions & 0 deletions docs/src/docs/core/references.md
@@ -0,0 +1,52 @@
# References

References are part of the context: a list of reference objects. At any point in the flow you can use that information to inform users, log the most-used sources, or anything else.

:::info

References are not passed to the AI in any way, unless you implement that yourself.

:::

As an example, we will prepare a tool that adds relevant weather information.

```js
async function queryWeather(
  params: { query: string },
  context: RequestContext,
): Promise<string> {
  const { query } = params;
  const { references } = context;

  const userWeatherInfo = await getWeather(query);

  references.addReference({
    title: userWeatherInfo.title,
    url: userWeatherInfo.url,
  });

  return formatWeatherInfo(userWeatherInfo);
}

const queryWeatherTool: ApplicationTool = {
  name: 'query_weather',
  description: 'Search weather data for requested city.',
  parameters: z.object({
    query: z.string().describe('City'),
  }),
  handler: queryWeather,
};

const cityWeatherPlugin: ApplicationPlugin = {
  name: 'weather-tool',
  tools: [queryWeatherTool],
};
```

That way, the AI receives information about the requested city, and the context records the source of that information.

The `references` object has two functions: `getReferences` and `addReference`.
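Their behavior can be sketched like this (an assumed minimal model based on the usage above, not byorg's implementation):

```js
// Minimal model of the references list: addReference appends an entry,
// getReferences returns everything collected so far.
function createReferencesList() {
  const references = [];
  return {
    addReference(reference) {
      references.push(reference);
    },
    getReferences() {
      return [...references]; // return a copy to keep the internal list safe
    },
  };
}
```

A tool handler can then call `addReference` while processing, and any later effect can read the accumulated list via `getReferences`.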