Feat: New updates on the SDK (#58)
* feat: new updates on the SDKs

* feat: new updates on the SDK - releasing version v1.0.0

* feat: updating the name of the package to portkey_ai

* feat: adding the CI pipeline for publishing to NPM on release

* fix: reverting the changes on the previous commit

* fix: adding the conventional-commit-check

* fix: adding metadata in feedback routes

* fix: updated feedback responses, added streaming in post, and added an override function in utils

* fix: feedbacks and test cases

* fix: fixing the URL on the prompt completions API

* fix: adding completions method to prompt api

* fix: lint issues

* fix: renaming the post method

* feat: adding the get_headers function to retrieve the headers

* fix: linting error fixes

* fix: fixing the last chunk in streaming mode

* feat: changes

* fix: fixing llama_index imports

* fix: fixing the LLMMetadata on llama_index; assigned a default value to the context window

* fix: removing headers when parsing the dict

* fix: adding types into llama_index and langchain

* fix: formatting

* fix: removed the unused methods on the integrations and removed headers from the generic response

* fix: lint fixes

* doc: Updating the README file with the latest changes

* doc: URL updated
noble-varghese authored Dec 8, 2023
1 parent 8451f33 commit 22192ff
Showing 69 changed files with 2,816 additions and 3,094 deletions.
11 changes: 11 additions & 0 deletions .github/workflows/verify-conventional-commits.yml
@@ -0,0 +1,11 @@
name: verify-conventional-commits

on: [pull_request]

jobs:
  conventional-commits-checker:
    runs-on: ubuntu-latest
    steps:
      - name: verify conventional commits
        uses: taskmedia/[email protected]

3 changes: 2 additions & 1 deletion Makefile
@@ -5,6 +5,7 @@ help: ## Show all Makefile targets
.PHONY: format lint
format: ## Run code formatter: black
	black .
+	ruff check . --fix
lint: ## Run linters: mypy, black, ruff
	mypy .
	black . --check
@@ -24,7 +25,7 @@ build:

upload:
	python -m pip install twine
-	python -m twine upload dist/portkey-ai-*
+	python -m twine upload dist/portkey_ai-*
	rm -rf dist

dev:
144 changes: 10 additions & 134 deletions README.md
@@ -35,148 +35,24 @@ $ export PORTKEY_API_KEY=PORTKEY_API_KEY
#### Now, let's make a request with GPT-4

```py
-import portkey
-from portkey import Config, LLMOptions
+from portkey_ai import Portkey

-portkey.config = Config(
-    mode="single",
-    llms=LLMOptions(provider="openai", api_key="OPENAI_API_KEY")
+# Construct a client with a virtual key
+portkey = Portkey(
+    api_key="PORTKEY_API_KEY",
+    virtual_key="VIRTUAL_KEY"
)

-r = portkey.ChatCompletions.create(
-    model="gpt-4",
-    messages=[
-        {"role": "user", "content": "Hello World!"}
-    ]
+completion = portkey.chat.completions.create(
+    messages=[{"role": "user", "content": "Say this is a test"}],
+    model="gpt-3.5-turbo"
)
+print(completion)
```

Portkey fully adheres to the OpenAI SDK signature. This means that you can instantly switch to Portkey and start using Portkey's advanced production features right out of the box.
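
Because the signature mirrors OpenAI's, familiar patterns such as streaming should carry over directly. Here is a minimal sketch, assuming the client constructed above and that `stream=True` and the OpenAI-style chunk shape behave as they do in the OpenAI SDK:

```py
# Stream the response chunk by chunk instead of waiting for the full reply
chat_stream = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-3.5-turbo",
    stream=True
)

for chunk in chat_stream:
    # Each chunk carries an incremental delta, as in the OpenAI SDK
    print(chunk.choices[0].delta.content or "", end="")
```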


## **🪜 Detailed Integration Guide**

**4 Steps to Integrate the SDK**
1. Get your virtual key for AI providers.
2. Construct your LLM, add Portkey features, provider features, and prompt.
3. Construct the Portkey client and set your usage mode.
4. Now call Portkey regularly like you would call your OpenAI constructor.

Let's dive in! If you are an advanced user and want to directly jump to various full-fledged examples, [click here](https://github.com/Portkey-AI/portkey-python-sdk/tree/main/examples).

---

### **Step 1️⃣ : Get your Virtual Keys for AI providers**

Navigate to the "Virtual Keys" page on [Portkey](https://app.portkey.ai/) and hit the "Add Key" button. Choose your AI provider and assign a unique name to your key. Your virtual key is ready!

### **Step 2️⃣ : Construct your LLM, add Portkey features, provider features, and prompt**

**Portkey Features**:
You can find a comprehensive [list of Portkey features here](#📔-list-of-portkey-features). This includes settings for caching, retries, metadata, and more.

**Provider Features**:
Portkey is designed to be flexible. All the features you're familiar with from your LLM provider, like `top_p`, `top_k`, and `temperature`, can be used seamlessly. Check out the [complete list of provider features here](https://github.com/Portkey-AI/portkey-python-sdk/blob/af0814ebf4f1961b5dfed438918fe68b26ef5f1e/portkey/api_resources/utils.py#L137).

**Setting the Prompt Input**:
This param lets you override any prompt that is passed during the completion call - set a model-specific prompt here to optimise the model performance. You can set the input in two ways. For models like Claude and GPT3, use `prompt` = `(str)`, and for models like GPT3.5 & GPT4, use `messages` = `[array]`.
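
For example, here is a hypothetical sketch contrasting the two input styles with the pre-1.0 `LLMOptions` interface described above; the providers, models, and virtual keys are placeholders:

```python
from portkey import LLMOptions

# Chat models (GPT-3.5, GPT-4): pass a `messages` array
chat_llm = LLMOptions(
    provider="openai",
    virtual_key="key_a",
    model="gpt-4",
    messages=[{"role": "user", "content": "Who are you?"}],
)

# Text-completion models (Claude, GPT-3): pass a `prompt` string
text_llm = LLMOptions(
    provider="anthropic",
    virtual_key="key_b",
    model="claude-2",
    prompt="Who are you?",
)
```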


Here's how you can combine everything:

```python
from portkey import LLMOptions

# Portkey Config
provider = "openai"
virtual_key = "key_a"
trace_id = "portkey_sdk_test"

# Model Settings
model = "gpt-4"
temperature = 1

# User Prompt
messages = [{"role": "user", "content": "Who are you?"}]

# Construct LLM
llm = LLMOptions(provider=provider, virtual_key=virtual_key, trace_id=trace_id, model=model, temperature=temperature)
```

### **Step 3️⃣ : Construct the Portkey Client**

Portkey client's config takes 3 params: `api_key`, `mode`, `llms`.

* `api_key`: You can set your Portkey API key here or with `os.environ` as done above.
* `mode`: There are **3** modes - Single, Fallback, Loadbalance.
* **Single** - This is the standard mode. Use it if you do not want Fallback OR Loadbalance features.
* **Fallback** - Set this mode if you want to enable the Fallback feature.
* **Loadbalance** - Set this mode if you want to enable the Loadbalance feature.
* `llms`: This is an array where we pass our LLMs constructed using the LLMOptions constructor.

```py
import portkey
from portkey import Config

portkey.config = Config(mode="single", llms=[llm])
```
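
The snippet above uses the Single mode. As a sketch of Loadbalance (Fallback is demonstrated in the demo further below), assuming `LLMOptions` accepts a `weight` parameter that sets each LLM's share of the traffic:

```py
import portkey
from portkey import Config, LLMOptions

# Hypothetical weights: send ~20% of requests to GPT-4, ~80% to GPT-3.5
llm1 = LLMOptions(provider="openai", model="gpt-4", virtual_key="key_a", weight=0.2)
llm2 = LLMOptions(provider="openai", model="gpt-3.5-turbo", virtual_key="key_a", weight=0.8)

portkey.config = Config(mode="loadbalance", llms=[llm1, llm2])
```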

### **Step 4️⃣ : Let's Call the Portkey Client!**

The Portkey client can do `ChatCompletions` and `Completions`.

Since our LLM is GPT4, we will use ChatCompletions:

```py
response = portkey.ChatCompletions.create(
    messages=[{
        "role": "user",
        "content": "Who are you ?"
    }]
)
```

You have integrated Portkey's Python SDK in just 4 steps!

---

## **🔁 Demo: Implementing GPT4 to GPT3.5 Fallback Using the Portkey SDK**

```py
import os
os.environ["PORTKEY_API_KEY"] = "PORTKEY_API_KEY" # Setting the Portkey API Key

import portkey
from portkey import Config, LLMOptions

# Let's construct our LLMs.
llm1 = LLMOptions(provider="openai", model="gpt-4", virtual_key="key_a")
llm2 = LLMOptions(provider="openai", model="gpt-3.5-turbo", virtual_key="key_a")

# Now let's construct the Portkey client where we will set the fallback logic
portkey.config = Config(mode="fallback", llms=[llm1, llm2])

# And, that's it! Pass your messages and the fallback logic kicks in.
response = portkey.ChatCompletions.create(
    messages=[{"role": "user", "content": "Who are you?"}]
)
print(response.choices[0].message)
```

## **📔 Full List of Portkey Config**

| Feature | Config Key | Value(Type) | Required |
|---------------------|-------------------------|--------------------------------------------------|-------------|
| Provider Name | `provider` | `string` | ✅ Required |
| Model Name | `model` | `string` | ✅ Required |
| Virtual Key OR API Key | `virtual_key` or `api_key` | `string` | ✅ Required (can be set externally) |
| Cache Type | `cache_status` | `simple`, `semantic` | ❔ Optional |
| Force Cache Refresh | `cache_force_refresh` | `True`, `False` (Boolean) | ❔ Optional |
| Cache Age | `cache_age` | `integer` (in seconds) | ❔ Optional |
| Trace ID | `trace_id` | `string` | ❔ Optional |
| Retries | `retry` | `{dict}` with two required keys: `"attempts"` which expects integers in [0,5] and `"on_status_codes"` which expects array of status codes like [429,502] <br> `Example`: { "attempts": 5, "on_status_codes":[429,500] } | ❔ Optional |
| Metadata | `metadata` | `json object` [More info](https://docs.portkey.ai/key-features/custom-metadata) | ❔ Optional |
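
As an illustration, here is a hypothetical sketch combining several of these config keys on one LLM via the pre-1.0 `LLMOptions` interface used above; the key names follow the table, while all values are placeholders:

```py
from portkey import LLMOptions

llm = LLMOptions(
    provider="openai",                                     # required
    model="gpt-4",                                         # required
    virtual_key="key_a",                                   # or api_key="OPENAI_API_KEY"
    cache_status="semantic",                               # semantic caching
    cache_age=3600,                                        # cache TTL in seconds
    trace_id="portkey_sdk_test",                           # tie related requests together
    retry={"attempts": 3, "on_status_codes": [429, 502]},  # retry policy
    metadata={"_user": "user_123"},                        # custom metadata
)
```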


## **🤝 Supported Providers**

|| Provider | Support Status | Supported Endpoints |
@@ -190,7 +66,7 @@ print(response.choices[0].message)

---

-#### [📝 Full Documentation](https://docs.portkey.ai/) | [🛠️ Integration Requests](https://github.com/Portkey-AI/portkey-python-sdk/issues) |
+#### [📝 Full Documentation](https://docs.portkey.ai/docs) | [🛠️ Integration Requests](https://github.com/Portkey-AI/portkey-python-sdk/issues) |

<a href="https://twitter.com/intent/follow?screen_name=portkeyai"><img src="https://img.shields.io/twitter/follow/portkeyai?style=social&logo=twitter" alt="follow on Twitter"></a>
<a href="https://discord.gg/sDk9JaNfK8" target="_blank"><img src="https://img.shields.io/discord/1143393887742861333?logo=discord" alt="Discord"></a>