
Sync experiments branch with main 0.18.2 (#186)
* Update to Assistants example (#146)

* Update to Assistants example

* Update examples/assistants/src/main.rs

update api config for consistency and security

Co-authored-by: Himanshu Neema <[email protected]>

* added assistant creation

* exit, deconstruct assistant, improved readme

---------

Co-authored-by: Himanshu Neema <[email protected]>

* Add examples tool-call and tool-call-stream (#153)

* add names (#150)

* Link to openai-func-enums (#152)

* Link to openai-func-enums

* Link to openai-func-enums

* Update async-openai/README.md

---------

Co-authored-by: Himanshu Neema <[email protected]>

* In memory files (#154)

* Added ability to use in-memory files (Bytes, Vec<u8>)

* Removed unnecessary trait impls

* Polished example

* Spec, readme, and crate description updates (#156)

* get latest spec

* update description

* add WASM

* WASM support on experiments branch

* chore: Release

* Make tool choice lower case (#158)

* Fix: post_form to be Sendable (#157)

* changed to allow Send.

* add simple tests for sendable

* fix test name

* chore: Release

* Add support for rustls-webpki-roots (#168)

* Refactor `types` module (#170)

* Document `impl_from!` macro

* Fix up `impl_from!` docs

* Documents `impl_default!` macro

* Document `impl_input!` macro

* Factor out types from `assistants` module in `types`

* Factor out `model`

* Factor out `audio`

* Factor out `image`

* Factor out `file`

* Factor out `fine_tune`

* Factor out `moderation`

* Factor out `edit`

* Factor out `fine_tuning`

* Factor out missed `DeleteModelResponse` into `model`

* Factor out `embedding`

* Factor out `chat`

* Factor out `completion` and eliminate `types`

* Satisfy clippy

---------

Co-authored-by: Sharif Haason <[email protected]>

* Sync updates from Spec (#171)

* updates to doc comments and types

* deprecated

* update ChatCompletionFunctions to FunctionObject

* More type updates

* add logprobs field

* update from spec

* updated spec

* fixes suggested by cargo clippy

* add query param to list files (#172)

* chore: Release

* Optional model in ModifyAssistantRequest (#174)

All fields (including model) are optional in OpenAI API.

* update contribution guidelines (#182)

* update contribution guidelines

* fix link

* update

* consistency

* Code of conduct

* chore: Release

* fix file test by providing query param

* Added dimensions param to embedding request (#185)

* chore: Release

---------

Co-authored-by: Gravel Hill <[email protected]>
Co-authored-by: Himanshu Neema <[email protected]>
Co-authored-by: Frank Fralick <[email protected]>
Co-authored-by: Sam F <[email protected]>
Co-authored-by: David Weis <[email protected]>
Co-authored-by: yykt <[email protected]>
Co-authored-by: XTY <[email protected]>
Co-authored-by: sharif <[email protected]>
Co-authored-by: Sharif Haason <[email protected]>
Co-authored-by: Sebastian Sosa <[email protected]>
Co-authored-by: vmg-dev <[email protected]>
12 people authored Jan 31, 2024
1 parent 24da803 commit 50d661f
Showing 8 changed files with 47 additions and 7 deletions.
2 changes: 1 addition & 1 deletion async-openai/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "async-openai"
version = "0.18.0"
version = "0.18.2"
authors = [
"Himanshu Neema"
]
17 changes: 15 additions & 2 deletions async-openai/README.md
@@ -118,9 +118,22 @@ async fn main() -> Result<(), Box<dyn Error>> {

## Contributing

Thank you for your time to contribute and improve the project, I'd be happy to have you!
Thank you for taking the time to contribute and improve the project. I'd be happy to have you!

A good starting point would be existing [open issues](https://github.com/64bit/async-openai/issues).
All forms of contributions, such as new feature requests, bug fixes, issues, documentation, testing, comments, [examples](../examples), etc., are welcome.

A good starting point would be to look at existing [open issues](https://github.com/64bit/async-openai/issues).

To maintain the quality of the project, the following is the minimum bar for code contributions:
- **Documented**: The primary source of doc comments is the description field from the OpenAPI spec.
- **Tested**: Examples are the primary means of testing and should continue to work. A supporting example is required for every new feature.
- **Scope**: Keep the scope limited to the APIs available in official documents such as the [API Reference](https://platform.openai.com/docs/api-reference) or the [OpenAPI spec](https://github.com/openai/openai-openapi/). Other LLMs or AI providers offer OpenAI-compatible APIs, yet they may not always have full parity. In such cases, the OpenAI spec takes precedence.
- **Consistency**: Keep the code style consistent across all the "APIs" the library exposes; it creates a great developer experience.

This project adheres to the [Rust Code of Conduct](https://www.rust-lang.org/policies/code-of-conduct).

## Complementary Crates
- [openai-func-enums](https://github.com/frankfralick/openai-func-enums) provides procedural macros that make it easier to use this library with the OpenAI API's tool calling feature. It also provides derive macros you can add to existing [clap](https://github.com/clap-rs/clap) application subcommands for natural language use of command line tools. It also supports OpenAI's [parallel tool calls](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling) and lets you choose between running multiple tool calls concurrently or on their own OS threads.

22 changes: 22 additions & 0 deletions async-openai/src/embedding.rs
@@ -30,6 +30,7 @@ impl<'c, C: Config> Embeddings<'c, C> {
#[cfg(test)]
mod tests {
use crate::{types::CreateEmbeddingRequestArgs, Client};
use crate::types::{CreateEmbeddingResponse, Embedding};

#[tokio::test]
async fn test_embedding_string() {
@@ -105,4 +106,25 @@ mod tests {

assert!(response.is_ok());
}

#[tokio::test]
async fn test_embedding_with_reduced_dimensions() {
let client = Client::new();
let dimensions = 256u32;
let request = CreateEmbeddingRequestArgs::default()
.model("text-embedding-3-small")
.input("The food was delicious and the waiter...")
.dimensions(dimensions)
.build()
.unwrap();

let response = client.embeddings().create(request).await;

assert!(response.is_ok());

let CreateEmbeddingResponse { mut data, ..} = response.unwrap();
assert_eq!(data.len(), 1);
let Embedding { embedding, .. } = data.pop().unwrap();
assert_eq!(embedding.len(), dimensions as usize);
}
}
3 changes: 2 additions & 1 deletion async-openai/src/file.rs
@@ -83,8 +83,9 @@ mod tests {
//assert_eq!(openai_file.purpose, "fine-tune");

//assert_eq!(openai_file.status, Some("processed".to_owned())); // uploaded or processed
let query = [("purpose", "fine-tune")];

let list_files = client.files().list().await.unwrap();
let list_files = client.files().list(&query).await.unwrap();

assert_eq!(list_files.data.into_iter().last().unwrap(), openai_file);

2 changes: 1 addition & 1 deletion async-openai/src/types/chat.rs
@@ -1,8 +1,8 @@
use std::collections::HashMap;

use derive_builder::Builder;
use serde::{Deserialize, Serialize};
use crate::client::OpenAIEventStream;
use serde::{Deserialize, Serialize};

use crate::error::OpenAIError;

2 changes: 1 addition & 1 deletion async-openai/src/types/completion.rs
@@ -1,8 +1,8 @@
use std::collections::HashMap;

use derive_builder::Builder;
use serde::{Deserialize, Serialize};
use crate::client::OpenAIEventStream;
use serde::{Deserialize, Serialize};

use crate::error::OpenAIError;

4 changes: 4 additions & 0 deletions async-openai/src/types/embedding.rs
@@ -46,6 +46,10 @@ pub struct CreateEmbeddingRequest {
/// to monitor and detect abuse. [Learn more](https://platform.openai.com/docs/usage-policies/end-user-ids).
#[serde(skip_serializing_if = "Option::is_none")]
pub user: Option<String>,

/// The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models.
#[serde(skip_serializing_if = "Option::is_none")]
pub dimensions: Option<u32>
}

/// Represents an embedding vector returned by embedding endpoint.
2 changes: 1 addition & 1 deletion async-openai/src/types/fine_tune.rs
@@ -1,6 +1,6 @@
use derive_builder::Builder;
use serde::{Deserialize, Serialize};
use crate::client::OpenAIEventStream;
use serde::{Deserialize, Serialize};

use crate::error::OpenAIError;
