[Bot] Update API inference documentation (#1489)
* Add ci workflow to auto-generate api inference doc

* Update token

* add permissions section

* change pnpm installation gh action

* fix pnpm installation

* change working directory

* revert

* hopefully good

* trying with other versions

* new tests

* pnpm version

* do not update .lock file

* don't update lock file

* fix

* explicit package.json

* fix lock file

* implicit

* explicit

* update pnpm ?

* daily cron job

* update huggingface/tasks before generating docs

* run workflow on this branch to test

* Update API inference documentation (automated)

---------

Co-authored-by: Celina Hanouti <[email protected]>
Co-authored-by: Wauplin <[email protected]>
Co-authored-by: hanouticelina <[email protected]>
4 people authored Nov 18, 2024
1 parent 00462a5 commit 296b2ea
Showing 5 changed files with 35 additions and 34 deletions.
docs/api-inference/tasks/chat-completion.md (16 additions, 16 deletions)
@@ -79,7 +79,7 @@ curl 'https://api-inference.huggingface.co/models/google/gemma-2-2b-it/v1/chat/c
</curl>

<python>
-With huggingface_hub client:
+Using `huggingface_hub`:
```py
from huggingface_hub import InferenceClient

@@ -103,7 +103,7 @@ for chunk in stream:
print(chunk.choices[0].delta.content, end="")
```

-With openai client:
+Using `openai`:
```py
from openai import OpenAI

@@ -134,11 +134,11 @@ To use the Python client, see `huggingface_hub`'s [package reference](https://hu
</python>

<js>
-With huggingface_hub client:
+Using `huggingface.js`:
```js
-import { HfInference } from "@huggingface/inference"
+import { HfInference } from "@huggingface/inference";

-const client = new HfInference("hf_***")
+const client = new HfInference("hf_***");

let out = "";

@@ -162,14 +162,14 @@ for await (const chunk of stream) {
}
```

-With openai client:
+Using `openai`:
```js
-import { OpenAI } from "openai"
+import { OpenAI } from "openai";

const client = new OpenAI({
baseURL: "https://api-inference.huggingface.co/v1/",
apiKey: "hf_***"
-})
+});

let out = "";

@@ -237,7 +237,7 @@ curl 'https://api-inference.huggingface.co/models/meta-llama/Llama-3.2-11B-Visio
</curl>

<python>
-With huggingface_hub client:
+Using `huggingface_hub`:
```py
from huggingface_hub import InferenceClient

@@ -272,7 +272,7 @@ for chunk in stream:
print(chunk.choices[0].delta.content, end="")
```

-With openai client:
+Using `openai`:
```py
from openai import OpenAI

@@ -314,11 +314,11 @@ To use the Python client, see `huggingface_hub`'s [package reference](https://hu
</python>

<js>
-With huggingface_hub client:
+Using `huggingface.js`:
```js
-import { HfInference } from "@huggingface/inference"
+import { HfInference } from "@huggingface/inference";

-const client = new HfInference("hf_***")
+const client = new HfInference("hf_***");

let out = "";

@@ -353,14 +353,14 @@ for await (const chunk of stream) {
}
```

-With openai client:
+Using `openai`:
```js
-import { OpenAI } from "openai"
+import { OpenAI } from "openai";

const client = new OpenAI({
baseURL: "https://api-inference.huggingface.co/v1/",
apiKey: "hf_***"
-})
+});

let out = "";

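Assembled from the fragments above, the post-change `huggingface_hub` streaming example in chat-completion.md reads roughly as follows; this is a sketch, and the prompt and `max_tokens` value are illustrative assumptions rather than values taken from the diff:

```py
from huggingface_hub import InferenceClient

# Authenticate with a Hugging Face access token.
client = InferenceClient(api_key="hf_***")

# Illustrative prompt; the diff only shows the surrounding scaffolding.
messages = [
    {"role": "user", "content": "What is the capital of France?"}
]

# Stream tokens from the hosted model as they are generated.
stream = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
```

The `openai` client variant shown in the diff differs mainly in pointing the base URL at `https://api-inference.huggingface.co/v1/`.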
docs/api-inference/tasks/image-text-to-text.md (2 additions, 12 deletions)
@@ -45,13 +45,8 @@ curl https://api-inference.huggingface.co/models/meta-llama/Llama-3.2-11B-Vision
</curl>

<python>
-With huggingface_hub client:
+Using `huggingface_hub`:
```py
-import requests
-
-API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-3.2-11B-Vision-Instruct"
-headers = {"Authorization": "Bearer hf_***"}
-
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")
@@ -69,13 +64,8 @@ for chunk in stream:
print(chunk.choices[0].delta.content, end="")
```

-With openai client:
+Using `openai`:
```py
-import requests
-
-API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-3.2-11B-Vision-Instruct"
-headers = {"Authorization": "Bearer hf_***"}
-
from openai import OpenAI

client = OpenAI(
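This hunk drops a stray `requests` preamble that had been pasted above the client example. For context, the cleaned-up example that remains looks roughly like the sketch below; the image URL and question are placeholder assumptions, not values from the diff:

```py
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

# A single user turn can mix an image reference and a text question.
# The URL and question below are illustrative placeholders.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
```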
docs/api-inference/tasks/text-to-image.md (11 additions, 0 deletions)
@@ -45,6 +45,16 @@ curl https://api-inference.huggingface.co/models/black-forest-labs/FLUX.1-dev \
</curl>

<python>
+Using `huggingface_hub`:
+```py
+from huggingface_hub import InferenceClient
+client = InferenceClient("black-forest-labs/FLUX.1-dev", token="hf_***")
+
+# output is a PIL.Image object
+image = client.text_to_image("Astronaut riding a horse")
+```
+
+Using `requests`:
```py
import requests

@@ -57,6 +67,7 @@ def query(payload):
image_bytes = query({
"inputs": "Astronaut riding a horse",
})

# You can access the image with PIL.Image for example
import io
from PIL import Image
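Pieced together, the `requests` example that the new `huggingface_hub` snippet now sits alongside reads roughly as follows; the body of `query` is partly hidden by the collapsed diff, so the response handling shown is an assumption:

```py
import io

import requests
from PIL import Image

API_URL = "https://api-inference.huggingface.co/models/black-forest-labs/FLUX.1-dev"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
    # Assumed body: the API returns raw image bytes on success.
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})

# You can access the image with PIL.Image, for example:
image = Image.open(io.BytesIO(image_bytes))
```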
scripts/api-inference/package.json (1 addition, 1 deletion)
@@ -14,7 +14,7 @@
"author": "",
"license": "ISC",
"dependencies": {
"@huggingface/tasks": "^0.12.15",
"@huggingface/tasks": "^0.13.3",
"@types/node": "^22.5.0",
"handlebars": "^4.7.8",
"node": "^20.17.0",
scripts/api-inference/pnpm-lock.yaml (5 additions, 5 deletions)

Some generated files are not rendered by default; the lockfile diff is omitted here.
