Sync wasm experiment branch to 0.20 (#213)
* Update to Assistants example (#146)

* Update to Assistants example

* Update examples/assistants/src/main.rs

update API config for consistency and security

Co-authored-by: Himanshu Neema <[email protected]>

* added assistant creation

* exit, deconstruct assistant, improved readme

---------

Co-authored-by: Himanshu Neema <[email protected]>

* Add examples tool-call and tool-call-stream (#153)

* add names (#150)

* Link to openai-func-enums (#152)

* Link to openai-func-enums

* Link to openai-func-enums

* Update async-openai/README.md

---------

Co-authored-by: Himanshu Neema <[email protected]>

* In memory files (#154)

* Added ability to use in-memory files (Bytes, Vec<u8>)

* Removed unnecessary trait impls

* Polished example

* Spec, readme, and crate description updates (#156)

* get latest spec

* update description

* add WASM

* WASM support on experiments branch

* chore: Release

* Make tool choice lower case (#158)

* Fix: post_form to be Sendable (#157)

* changed to allow Send.

* add simple tests for sendable

* fix test name

* chore: Release

* Add support for rustls-webpki-roots (#168)

* Refactor `types` module (#170)

* Document `impl_from!` macro

* Fix up `impl_from!` docs

* Documents `impl_default!` macro

* Document `impl_input!` macro

* Factor out types from `assistants` module in `types`

* Factor out `model`

* Factor out `audio`

* Factor out `image`

* Factor out `file`

* Factor out `fine_tune`

* Factor out `moderation`

* Factor out `edit`

* Factor out `fine_tuning`

* Factor out missed `DeleteModelResponse` into `model`

* Factor out `embedding`

* Factor out `chat`

* Factor out `completion` and eliminate `types`

* Satisfy clippy

---------

Co-authored-by: Sharif Haason <[email protected]>

* Sync updates from Spec (#171)

* updates to doc comments and types

* deprecated

* update ChatCompletionFunctions to FunctionObject

* More type updates

* add logprobs field

* update from spec

* updated spec

* fixes suggested by cargo clippy

* add query param to list files (#172)

* chore: Release

* Optional model in ModifyAssistantRequest (#174)

All fields (including `model`) are optional in the OpenAI API.

* update contribution guidelines (#182)

* update contribution guidelines

* fix link

* update

* consistency

* Code of conduct

* chore: Release

* fix file test by providing query param

* Added dimensions param to embedding request (#185)

* chore: Release

* fix: CreateTranscriptionRequest language field not convert (#188)

* chore: Release

* Add usage information to the run object (#195)

* Updates from Spec (#196)

* updates from spec

* remove Edits

* remove Fine-Tunes (was deprecated)

* update spec

* cargo fix

* cargo fmt

* chore: Release

* Add Client::build for full customizability during instantiation (#197)

* Change std::sleep to tokio's sleep (#200)

* chore: Release

* add support for base64 embeddings (#190)

* add support for base64 embeddings

* Base64Embedding is an implementation detail

* feat: separate Embeddings::create_base64 method

* chore: use newtype for hosting base64 decoding instead

* chore: remove unused error variant

* Add vision-chat example (#203)

Example matches the quickstart from https://platform.openai.com/docs/guides/vision
It showcases a struct derived from ChatCompletionRequestMessageContent

* Update Audio APIs from updated spec (#202)

* Implement CreateTranscriptionRequest::timestamp_granularities

This PR adds support for `AudioResponseFormat::VerboseJson` and
`TimestampGranularity`, including updated example code. These were
defined as types before, but not fully implemented.

Implements #201.

* Modify transcription API to be more like spec

- Rename `CreateTranscriptionResponse` to `CreateTranscriptionResponseJson` (to match API spec)
- Add `CreateTranscriptionResponseVerboseJson` and `transcribe_verbose_json`
- Add `transcribe_raw` for SRT output
- Add `post_form_raw`
- Update example code

* Upgrade dependencies: Rust crates in Cargo.toml (#204)

* upgrade reqwest

* update reqwest-eventsource

* cargo test working (#207)

* fix: cargo fmt and compiler warnings fixes (#208)

* cargo fmt

* fix imports

* chore: Release

* fixed problems due to code sync

* update worker dependency to resolve build issue

* update test to fix test compilation issue

* add conditional imports

* change default of InputSource and bring back builders of file-related structs

* update doc

---------

Co-authored-by: Gravel Hill <[email protected]>
Co-authored-by: Himanshu Neema <[email protected]>
Co-authored-by: Frank Fralick <[email protected]>
Co-authored-by: Sam F <[email protected]>
Co-authored-by: David Weis <[email protected]>
Co-authored-by: yykt <[email protected]>
Co-authored-by: XTY <[email protected]>
Co-authored-by: sharif <[email protected]>
Co-authored-by: Sharif Haason <[email protected]>
Co-authored-by: Sebastian Sosa <[email protected]>
Co-authored-by: vmg-dev <[email protected]>
Co-authored-by: TAO <[email protected]>
Co-authored-by: turingbuilder <[email protected]>
Co-authored-by: Gabriel Bianconi <[email protected]>
Co-authored-by: Santhanagopalan Krishnamoorthy <[email protected]>
Co-authored-by: Adrien Wald <[email protected]>
Co-authored-by: Gabriel <[email protected]>
Co-authored-by: Eric Kidd <[email protected]>
Co-authored-by: Samuel Batissou Tiburcio <[email protected]>
20 people authored Apr 10, 2024
1 parent 50d661f commit 580ff11
Showing 41 changed files with 1,101 additions and 1,776 deletions.
10 changes: 5 additions & 5 deletions async-openai/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "async-openai"
version = "0.18.2"
version = "0.20.0"
authors = [
"Himanshu Neema"
]
@@ -33,16 +33,16 @@ native-tls-vendored = ["reqwest/native-tls-vendored"]

[dependencies]
backoff = {version = "0.4.0", features = ["futures"], optional = true }
base64 = "0.21.0"
base64 = "0.22.0"
futures = "0.3.26"
rand = "0.8.5"
reqwest = { version = "0.11.14", features = ["json", "stream", "multipart"],default-features = false }
reqwest-eventsource = "0.5.0"
reqwest = { version = "0.12.0", features = ["json", "stream", "multipart"],default-features = false }
reqwest-eventsource = "0.6.0"
serde = { version = "1.0.152", features = ["derive", "rc"] }
serde_json = "1.0.93"
thiserror = "1.0.38"
tracing = "0.1.37"
derive_builder = "0.12.0"
derive_builder = "0.20.0"
async-convert = "1.0.0"
secrecy = { version = "0.8.0", features=["serde"] }
tokio = { version = "1.25.0", features = ["fs", "macros"], optional = true }
9 changes: 2 additions & 7 deletions async-openai/README.md
@@ -24,14 +24,12 @@
- It's based on [OpenAI OpenAPI spec](https://github.com/openai/openai-openapi)
- Current features:
- [x] Assistants (Beta)
- [x] Audio (Whisper/TTS)
- [x] Audio
- [x] Chat
- [x] Completions (Legacy)
- [x] Edits (Deprecated)
- [x] Embeddings
- [x] Files
- [x] Fine-Tuning
- [x] Fine-Tunes (Deprecated)
- [x] Images
- [x] Microsoft Azure OpenAI Service
- [x] Models
@@ -125,7 +123,7 @@ All forms of contributions, such as new features requests, bug fixes, issues, do
A good starting point would be to look at existing [open issues](https://github.com/64bit/async-openai/issues).

To maintain quality of the project, a minimum of the following is a must for code contribution:
- **Documented**: Primary source of doc comments is description field from OpenAPI spec.
- **Names & Documentation**: All struct names, field names, and doc comments come from the OpenAPI spec. Nested objects in the spec without names leave room for choosing an appropriate name.
- **Tested**: Examples are primary means of testing and should continue to work. For new features supporting example is required.
- **Scope**: Keep scope limited to APIs available in official documents such as [API Reference](https://platform.openai.com/docs/api-reference) or [OpenAPI spec](https://github.com/openai/openai-openapi/). Other LLMs or AI Providers offer OpenAI-compatible APIs, yet they may not always have full parity. In such cases, the OpenAI spec takes precedence.
- **Consistency**: Keep code style consistent across all the "APIs" that library exposes; it creates a great developer experience.
@@ -135,9 +133,6 @@ This project adheres to [Rust Code of Conduct](https://www.rust-lang.org/policie
## Complimentary Crates
- [openai-func-enums](https://github.com/frankfralick/openai-func-enums) provides procedural macros that make it easier to use this library with the OpenAI API's tool calling feature. It also provides derive macros you can add to existing [clap](https://github.com/clap-rs/clap) application subcommands for natural-language use of command-line tools. It supports OpenAI's [parallel tool calls](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling) and lets you choose between running multiple tool calls concurrently or on their own OS threads.


## License

27 changes: 25 additions & 2 deletions async-openai/src/audio.rs
@@ -1,9 +1,12 @@
use bytes::Bytes;

use crate::{
config::Config,
error::OpenAIError,
types::{
CreateSpeechRequest, CreateSpeechResponse, CreateTranscriptionRequest,
CreateTranscriptionResponse, CreateTranslationRequest, CreateTranslationResponse,
CreateTranscriptionResponseJson, CreateTranscriptionResponseVerboseJson,
CreateTranslationRequest, CreateTranslationResponse,
},
Client,
};
@@ -23,12 +26,32 @@ impl<'c, C: Config> Audio<'c, C> {
pub async fn transcribe(
&self,
request: CreateTranscriptionRequest,
) -> Result<CreateTranscriptionResponse, OpenAIError> {
) -> Result<CreateTranscriptionResponseJson, OpenAIError> {
self.client
.post_form("/audio/transcriptions", request)
.await
}

/// Transcribes audio into the input language.
pub async fn transcribe_verbose_json(
&self,
request: CreateTranscriptionRequest,
) -> Result<CreateTranscriptionResponseVerboseJson, OpenAIError> {
self.client
.post_form("/audio/transcriptions", request)
.await
}

/// Transcribes audio into the input language.
pub async fn transcribe_raw(
&self,
request: CreateTranscriptionRequest,
) -> Result<Bytes, OpenAIError> {
self.client
.post_form_raw("/audio/transcriptions", request)
.await
}

/// Translates audio into English.
pub async fn translate(
&self,
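
For context, a minimal usage sketch of the two new transcription variants added above. The audio path and model name are placeholders, and the word-level fields on the verbose-JSON response are assumed from the spec's response shape rather than taken from this diff:

```rust
use async_openai::{
    types::{AudioResponseFormat, CreateTranscriptionRequestArgs, TimestampGranularity},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // Verbose JSON with word-level timestamps (file path and model are placeholders).
    let request = CreateTranscriptionRequestArgs::default()
        .file("./audio/sample.mp3")
        .model("whisper-1")
        .response_format(AudioResponseFormat::VerboseJson)
        .timestamp_granularities(vec![TimestampGranularity::Word])
        .build()?;
    let response = client.audio().transcribe_verbose_json(request).await?;
    for word in response.words.unwrap_or_default() {
        println!("{} [{:.2}s - {:.2}s]", word.word, word.start, word.end);
    }

    // Raw bytes, e.g. for SRT subtitle output.
    let srt_request = CreateTranscriptionRequestArgs::default()
        .file("./audio/sample.mp3")
        .model("whisper-1")
        .response_format(AudioResponseFormat::Srt)
        .build()?;
    let srt = client.audio().transcribe_raw(srt_request).await?;
    println!("{}", String::from_utf8_lossy(&srt));

    Ok(())
}
```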
50 changes: 36 additions & 14 deletions async-openai/src/client.rs
@@ -14,10 +14,10 @@ use crate::{
config::{Config, OpenAIConfig},
error::{map_deserialization_error, OpenAIError, WrappedError},
moderation::Moderations,
edit::Edits,
file::Files,
image::Images,
Chat, Completions, Embeddings, Models, FineTunes, FineTuning, Assistants, Threads, Audio};
Assistants, Audio, Chat, Completions, Embeddings, FineTuning, Models, Threads,
};

#[derive(Debug, Clone)]
/// Client is a container for config, backoff and http_client
@@ -42,6 +42,21 @@ impl Client<OpenAIConfig> {
}

impl<C: Config> Client<C> {
/// Create client with a custom HTTP client, OpenAI config, and backoff.
pub fn build(
http_client: reqwest::Client,
config: C,
#[cfg(feature = "backoff")]
backoff: backoff::ExponentialBackoff,
) -> Self {
Self {
http_client,
config,
#[cfg(feature = "backoff")]
backoff,
}
}

/// Create client with [OpenAIConfig] or [crate::config::AzureConfig]
pub fn with_config(config: C) -> Self {
Self {
@@ -84,12 +99,6 @@ impl<C: Config> Client<C> {
Chat::new(self)
}

/// To call [Edits] group related APIs using this client.
#[deprecated(since = "0.15.0", note = "By OpenAI")]
pub fn edits(&self) -> Edits<C> {
Edits::new(self)
}

/// To call [Images] group related APIs using this client.
pub fn images(&self) -> Images<C> {
Images::new(self)
@@ -105,12 +114,6 @@ impl<C: Config> Client<C> {
Files::new(self)
}

/// To call [FineTunes] group related APIs using this client.
#[deprecated(since = "0.15.0", note = "By OpenAI")]
pub fn fine_tunes(&self) -> FineTunes<C> {
FineTunes::new(self)
}

/// To call [FineTuning] group related APIs using this client.
pub fn fine_tuning(&self) -> FineTuning<C> {
FineTuning::new(self)
@@ -230,6 +233,25 @@ impl<C: Config> Client<C> {
self.execute(request_maker).await
}

/// POST a form at {path} and return the response body
pub(crate) async fn post_form_raw<F>(&self, path: &str, form: F) -> Result<Bytes, OpenAIError>
where
reqwest::multipart::Form: async_convert::TryFrom<F, Error = OpenAIError>,
F: Clone,
{
let request_maker = || async {
Ok(self
.http_client
.post(self.config.url(path))
.query(&self.config.query())
.headers(self.config.headers())
.multipart(async_convert::TryFrom::try_from(form.clone()).await?)
.build()?)
};

self.execute_raw(request_maker).await
}

/// POST a form at {path} and deserialize the response body
pub(crate) async fn post_form<O, F>(&self, path: &str, form: F) -> Result<O, OpenAIError>
where
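
A hedged sketch of the new `Client::build` constructor shown above, assuming the default `backoff` feature is enabled and the caller's crate depends on `backoff` and `reqwest`; the timeout and API base are placeholder values:

```rust
use std::time::Duration;

use async_openai::{config::OpenAIConfig, Client};

fn custom_client() -> Client<OpenAIConfig> {
    // Bring your own reqwest client, e.g. with a request timeout.
    let http_client = reqwest::Client::builder()
        .timeout(Duration::from_secs(30))
        .build()
        .expect("failed to build reqwest client");

    let config = OpenAIConfig::new().with_api_base("https://api.openai.com/v1");

    // The third argument exists only when the (default) `backoff` feature is enabled.
    Client::build(http_client, config, backoff::ExponentialBackoff::default())
}
```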
26 changes: 0 additions & 26 deletions async-openai/src/edit.rs

This file was deleted.

83 changes: 80 additions & 3 deletions async-openai/src/embedding.rs
@@ -1,7 +1,10 @@
use crate::{
config::Config,
error::OpenAIError,
types::{CreateEmbeddingRequest, CreateEmbeddingResponse},
types::{
CreateBase64EmbeddingResponse, CreateEmbeddingRequest, CreateEmbeddingResponse,
EncodingFormat,
},
Client,
};

@@ -23,14 +26,36 @@ impl<'c, C: Config> Embeddings<'c, C> {
&self,
request: CreateEmbeddingRequest,
) -> Result<CreateEmbeddingResponse, OpenAIError> {
if matches!(request.encoding_format, Some(EncodingFormat::Base64)) {
return Err(OpenAIError::InvalidArgument(
"When encoding_format is base64, use Embeddings::create_base64".into(),
));
}
self.client.post("/embeddings", request).await
}

/// Creates an embedding vector representing the input text.
///
/// The response will contain the embedding in base64 format.
pub async fn create_base64(
&self,
request: CreateEmbeddingRequest,
) -> Result<CreateBase64EmbeddingResponse, OpenAIError> {
if !matches!(request.encoding_format, Some(EncodingFormat::Base64)) {
return Err(OpenAIError::InvalidArgument(
"When encoding_format is not base64, use Embeddings::create".into(),
));
}

self.client.post("/embeddings", request).await
}
}

#[cfg(test)]
mod tests {
use crate::error::OpenAIError;
use crate::types::{CreateEmbeddingResponse, Embedding, EncodingFormat};
use crate::{types::CreateEmbeddingRequestArgs, Client};
use crate::types::{CreateEmbeddingResponse, Embedding};

#[tokio::test]
async fn test_embedding_string() {
@@ -122,9 +147,61 @@ mod tests {

assert!(response.is_ok());

let CreateEmbeddingResponse { mut data, ..} = response.unwrap();
let CreateEmbeddingResponse { mut data, .. } = response.unwrap();
assert_eq!(data.len(), 1);
let Embedding { embedding, .. } = data.pop().unwrap();
assert_eq!(embedding.len(), dimensions as usize);
}

#[tokio::test]
async fn test_cannot_use_base64_encoding_with_normal_create_request() {
let client = Client::new();

const MODEL: &str = "text-embedding-ada-002";
const INPUT: &str = "You shall not pass.";

let b64_request = CreateEmbeddingRequestArgs::default()
.model(MODEL)
.input(INPUT)
.encoding_format(EncodingFormat::Base64)
.build()
.unwrap();
let b64_response = client.embeddings().create(b64_request).await;
assert!(matches!(b64_response, Err(OpenAIError::InvalidArgument(_))));
}

#[tokio::test]
async fn test_embedding_create_base64() {
let client = Client::new();

const MODEL: &str = "text-embedding-ada-002";
const INPUT: &str = "CoLoop will eat the other qual research tools...";

let b64_request = CreateEmbeddingRequestArgs::default()
.model(MODEL)
.input(INPUT)
.encoding_format(EncodingFormat::Base64)
.build()
.unwrap();
let b64_response = client
.embeddings()
.create_base64(b64_request)
.await
.unwrap();
let b64_embedding = b64_response.data.into_iter().next().unwrap().embedding;
let b64_embedding: Vec<f32> = b64_embedding.into();

let request = CreateEmbeddingRequestArgs::default()
.model(MODEL)
.input(INPUT)
.build()
.unwrap();
let response = client.embeddings().create(request).await.unwrap();
let embedding = response.data.into_iter().next().unwrap().embedding;

assert_eq!(b64_embedding.len(), embedding.len());
for (b64, normal) in b64_embedding.iter().zip(embedding.iter()) {
assert!((b64 - normal).abs() < 1e-6);
}
}
}
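
Outside of the tests above, a minimal sketch of the new base64 path (model and input text are placeholders); `create_base64` rejects requests that do not set `EncodingFormat::Base64`, mirroring the guard in `create`:

```rust
use async_openai::{
    types::{CreateEmbeddingRequestArgs, EncodingFormat},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    let request = CreateEmbeddingRequestArgs::default()
        .model("text-embedding-ada-002")
        .input("The quick brown fox")
        .encoding_format(EncodingFormat::Base64)
        .build()?;

    // The response carries base64-encoded vectors; the newtype converts to Vec<f32>.
    let response = client.embeddings().create_base64(request).await?;
    let embedding: Vec<f32> = response.data.into_iter().next().unwrap().embedding.into();
    println!("embedding length: {}", embedding.len());

    Ok(())
}
```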
1 change: 1 addition & 0 deletions async-openai/src/file.rs
@@ -55,6 +55,7 @@ impl<'c, C: Config> Files<'c, C> {
}

#[cfg(test)]
#[cfg(not(feature = "wasm"))]
mod tests {
use crate::{types::CreateFileRequestArgs, Client};

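
Related to the in-memory file support noted in the commit message (PR #154), a hedged sketch of uploading bytes instead of a path on disk. The `from_vec_u8` constructor is assumed from the `impl_input!` macro documented in this sync, and the filename, contents, and string-typed `purpose` are placeholders:

```rust
use async_openai::{
    types::{CreateFileRequestArgs, FileInput},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // In-memory JSONL contents instead of a file on disk (placeholder data).
    let jsonl: Vec<u8> = br#"{"prompt": "hello", "completion": "world"}"#.to_vec();

    let request = CreateFileRequestArgs::default()
        .file(FileInput::from_vec_u8("train.jsonl".into(), jsonl))
        .purpose("fine-tune")
        .build()?;

    let file = client.files().create(request).await?;
    println!("uploaded file id: {}", file.id);

    Ok(())
}
```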
