Support for Anthropic added
adhityan committed Apr 26, 2024
1 parent c239996 commit deb2694
Showing 9 changed files with 219 additions and 50 deletions.
69 changes: 47 additions & 22 deletions README.md
@@ -67,9 +67,10 @@ The author(s) are looking to add core maintainers for this opensource project. R
- [How to request more loaders](#more-loaders-coming-soon)
- [LLMs](#llms)
- [OpenAI](#openai)
- [Azure OpenAI](#azure-openai)
- [Mistral](#mistral)
- [Hugging Face](#hugging-face)
- [Anthropic](#anthropic)
- [Bring your own LLMs](#use-custom-llm-model)
- [Request support for new LLMs](#more-llms-coming-soon)
- [Embedding Models](#embedding-models)
@@ -358,6 +359,36 @@ const ragApplication = await new RAGApplicationBuilder()

**Note:** GPT 3.5 Turbo is used as the default model if you do not specify one.
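
If you want a specific model instead, pass it to the `OpenAi` constructor when you set it. A sketch - the `modelName` option here mirrors the Anthropic example later in this README, so verify it against the current `OpenAi` constructor before relying on it:

```TS
const ragApplication = await new RAGApplicationBuilder()
    .setModel(new OpenAi({ modelName: 'gpt-4' }))
```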

## Azure OpenAI

To use an OpenAI model on Azure, it must first be deployed. Please refer to the [Azure OpenAI documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) on how to deploy a model on Azure. To run this library, you will need to deploy two models -

- text-embedding-ada
- GPT-3.5-turbo (or the 4 series)

Once these models are deployed, using Azure OpenAI instead of the regular OpenAI is easy to do. Just follow these steps -

- Remove the `OPENAI_API_KEY` environment variable if you have set it already.

- Set the following environment variables -

```bash
# Set this to `azure`
OPENAI_API_TYPE=azure
# The API version you want to use
AZURE_OPENAI_API_VERSION=2023-05-15
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
AZURE_OPENAI_BASE_PATH=https://your-resource-name.openai.azure.com/openai/deployments
# Either of the two API keys (key1 or key2) for your Azure OpenAI resource
AZURE_OPENAI_API_KEY=<Your Azure OpenAI API key>
# The deployment name you used for your embedding model
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=text-embedding-ada-002
# The deployment name you used for your llm
AZURE_OPENAI_API_DEPLOYMENT_NAME=gpt-35-turbo
```

You are all set and can now run the Azure OpenAI LLMs using the [`OpenAi` model](#openai) steps detailed above.
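
Before building the application, you can fail fast if any of the variables above are missing. A minimal sketch - this helper is not part of the library; the variable list simply matches the block above:

```TS
// Hypothetical helper (not part of this library): returns which of the
// Azure OpenAI environment variables listed above are not set.
const REQUIRED_AZURE_VARS = [
    'OPENAI_API_TYPE',
    'AZURE_OPENAI_API_VERSION',
    'AZURE_OPENAI_BASE_PATH',
    'AZURE_OPENAI_API_KEY',
    'AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME',
    'AZURE_OPENAI_API_DEPLOYMENT_NAME',
];

function missingAzureVars(env: Record<string, string | undefined>): string[] {
    // Treat unset and empty-string values as missing
    return REQUIRED_AZURE_VARS.filter((name) => !env[name]);
}
```

Call `missingAzureVars(process.env)` at startup and throw if the returned list is non-empty.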

## Mistral

To use Mistral's models, you will need to get an API Key from Mistral. You can do this from their [console](https://console.mistral.ai/user/api-keys/). Once you have obtained a key, set Mistral as your LLM of choice -
@@ -397,35 +428,29 @@ const ragApplication = await new RAGApplicationBuilder()

To use these 'not-free' models via HuggingFace, you need to subscribe to their [Pro plan](https://huggingface.co/pricing) or create a custom [inference endpoint](https://ui.endpoints.huggingface.co/). It is possible to self-host these models for free and run them locally via Ollama - support for which is coming soon.

## Anthropic

To use Anthropic's Claude models, you will need to get an API Key from Anthropic. You can do this from their [console](https://console.anthropic.com/settings/keys). Once you obtain a key, set it as an environment variable, like so -

```bash
ANTHROPIC_API_KEY="<Your key>"
```

Once this is done, it is relatively easy to use Anthropic's Claude in your RAG application. Simply set Anthropic as your LLM of choice -

```TS
const ragApplication = await new RAGApplicationBuilder()
.setModel(new Anthropic())
```

By default, the `claude-3-sonnet-20240229` model from Anthropic is used. If you want to use a different Anthropic model, you can specify it via the optional parameter to the Anthropic constructor, like so -

```TS
const ragApplication = await new RAGApplicationBuilder()
.setModel(new Anthropic({ modelName: "..." }))
```
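
For example, to point it at the lighter Haiku variant - `claude-3-haiku-20240307` was one of the published Claude 3 model names at the time of this commit; check Anthropic's model list for current names:

```TS
const ragApplication = await new RAGApplicationBuilder()
    .setModel(new Anthropic({ modelName: 'claude-3-haiku-20240307' }))
```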

You can read more about the various models provided by Anthropic [here](https://docs.anthropic.com/claude/docs/models-overview).

## Use custom LLM model

120 changes: 101 additions & 19 deletions package-lock.json


12 changes: 8 additions & 4 deletions package.json
@@ -1,6 +1,6 @@
{
"name": "@llm-tools/embedjs",
"version": "0.0.71",
"description": "A NodeJS RAG framework to easily work with LLMs and custom datasets",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -24,6 +24,9 @@
"llm",
"gpt",
"openai",
"anthropic",
"claude",
"qdrant",
"chatgpt",
"hugging-face",
"mistral",
@@ -49,16 +52,17 @@
"homepage": "https://github.com/llm-tools/embedjs#readme",
"dependencies": {
"@huggingface/inference": "^2.6.7",
"@langchain/anthropic": "^0.1.16",
"@langchain/cohere": "^0.0.8",
"@langchain/community": "^0.0.52",
"@langchain/core": "^0.1.60",
"@langchain/mistralai": "^0.0.19",
"@langchain/openai": "^0.0.28",
"axios": "^1.6.8",
"confluence.js": "^1.7.4",
"debug": "^4.3.4",
"html-to-text": "^9.0.5",
"langchain": "^0.1.36",
"md5": "^2.3.0",
"pdf-parse-fork": "^1.2.0",
"sitemapper": "^3.1.8",
2 changes: 1 addition & 1 deletion src/core/rag-application.ts
@@ -67,7 +67,7 @@ export class RAGApplication
await this.addLoader(loader);
}
}
this.debug('Initialized pre-loaders');
}

private async batchLoadEmbeddings(loaderUniqueId: string, formattedChunks: Chunk[]) {
2 changes: 2 additions & 0 deletions src/index.ts
@@ -22,6 +22,7 @@ import { OpenAi3LargeEmbeddings } from './embeddings/openai-3large-embeddings.js
import { OpenAi3SmallEmbeddings } from './embeddings/openai-3small-embeddings.js';
import { Mistral } from './models/mistral-model.js';
import { HuggingFace } from './models/huggingface-model.js';
import { Anthropic } from './models/anthropic-model.js';

export {
RAGApplication,
@@ -48,4 +49,5 @@ export {
OpenAi3SmallEmbeddings,
Mistral,
HuggingFace,
Anthropic,
};
