
[Inference API] Add image-text-to-text task and fix generate script #1440

Merged: 6 commits, Oct 16, 2024
Changes from 4 commits
2 changes: 2 additions & 0 deletions docs/api-inference/_toctree.yml
@@ -30,6 +30,8 @@
title: Image Segmentation
- local: tasks/image-to-image
title: Image to Image
- local: tasks/image-text-to-text
title: Image-Text to Text
- local: tasks/object-detection
title: Object Detection
- local: tasks/question-answering
1 change: 0 additions & 1 deletion docs/api-inference/tasks/audio-classification.md
@@ -43,7 +43,6 @@ curl https://api-inference.huggingface.co/models/<REPO_ID> \
-X POST \
--data-binary '@sample1.flac' \
-H "Authorization: Bearer hf_***"

```
</curl>

4 changes: 2 additions & 2 deletions docs/api-inference/tasks/automatic-speech-recognition.md
@@ -30,6 +30,7 @@ For more details about the `automatic-speech-recognition` task, check out its [d
### Recommended models

- [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3): A powerful ASR model by OpenAI.
- [facebook/seamless-m4t-v2-large](https://huggingface.co/facebook/seamless-m4t-v2-large): An end-to-end model by Meta AI that performs ASR and speech translation.
- [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/speaker-diarization-3.1): Powerful speaker diarization model.

This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=automatic-speech-recognition&sort=trending).
@@ -45,7 +46,6 @@ curl https://api-inference.huggingface.co/models/openai/whisper-large-v3 \
-X POST \
--data-binary '@sample1.flac' \
-H "Authorization: Bearer hf_***"

```
</curl>

@@ -108,7 +108,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
| **inputs*** | _string_ | The input audio data as a base64-encoded string. If no `parameters` are provided, you can also provide the audio data as a raw bytes payload. |
| **parameters** | _object_ | Additional inference parameters for Automatic Speech Recognition |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return_timestamps** | _boolean_ | Whether to output corresponding timestamps with the generated text |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generate** | _object_ | Ad-hoc parametrization of the text generation process |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generation_parameters** | _object_ | Ad-hoc parametrization of the text generation process |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;temperature** | _number_ | The value used to modulate the next token probabilities. |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | The number of highest probability vocabulary tokens to keep for top-k-filtering. |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_p** | _number_ | If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. |
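As a sketch of how the fields in the table above fit together, the following builds a JSON payload that enables timestamps and sets the `generation_parameters` object (the audio bytes are placeholders; in practice read them from a real file):

```python
import base64
import json

# Placeholder standing in for the contents of a real audio file,
# e.g. open("sample1.flac", "rb").read().
audio_bytes = b"placeholder audio bytes"
audio_b64 = base64.b64encode(audio_bytes).decode("utf-8")

# Field names follow the parameter table above.
payload = {
    "inputs": audio_b64,
    "parameters": {
        "return_timestamps": True,
        "generation_parameters": {
            "temperature": 0.7,
            "top_k": 50,
            "top_p": 0.95,
        },
    },
}

# The serialized body would be POSTed to the model URL with
# Content-Type: application/json and the Authorization header.
body = json.dumps(payload)
```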
109 changes: 97 additions & 12 deletions docs/api-inference/tasks/chat-completion.md
@@ -14,20 +14,20 @@ For more details, check out:

## Chat Completion

Generate a response given a list of messages.
This is a subtask of [`text-generation`](./text_generation) designed to generate responses in a conversational context.


Generate a response given a list of messages in a conversational context, supporting both conversational Language Models (LLMs) and conversational Vision-Language Models (VLMs).
This is a subtask of [`text-generation`](./text_generation) and [`image-text-to-text`](./image_text_to_text).

### Recommended models

#### Conversational Large Language Models (LLMs)
- [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it): A text-generation model trained to follow instructions.
- [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct): Very powerful text generation model trained to follow instructions.
- [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct): Small yet powerful text generation model.
- [HuggingFaceH4/starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1): Strong coding assistant model.
- [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407): Very strong open-source large language model.


#### Conversational Vision-Language Models (VLMs)
- [microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct): Strong image-text-to-text model.

### Using the API

@@ -37,6 +37,8 @@ The API supports:
* Using grammars, constraints, and tools.
* Streaming the output
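Grammars and tools are mentioned above but not shown in the generated snippets, so here is a minimal sketch of a request body carrying an OpenAI-style tool definition (the `get_weather` function and its schema are illustrative assumptions, not a real API):

```python
# An OpenAI-style function tool, as accepted by chat-completion endpoints.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

request_body = {
    "model": "google/gemma-2-2b-it",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
    "max_tokens": 500,
}
```

The same `tools` list can also be passed to the `huggingface_hub` client via the `tools` argument of `chat_completion`.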

#### Code snippet example for conversational LLMs


<inferencesnippet>

@@ -59,21 +61,18 @@ curl 'https://api-inference.huggingface.co/models/google/gemma-2-2b-it/v1/chat/c
```py
from huggingface_hub import InferenceClient

client = InferenceClient(
"google/gemma-2-2b-it",
token="hf_***",
)
client = InferenceClient(api_key="hf_***")

for message in client.chat_completion(
model="google/gemma-2-2b-it",
messages=[{"role": "user", "content": "What is the capital of France?"}],
max_tokens=500,
stream=True,
):
print(message.choices[0].delta.content, end="")

```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion).
To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.conversational_text-generation).
</python>

<js>
@@ -89,10 +88,96 @@ for await (const chunk of inference.chatCompletionStream({
})) {
process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#conversationaltext-generation).
</js>

</inferencesnippet>



#### Code snippet example for conversational VLMs


<inferencesnippet>

<curl>
```bash
curl 'https://api-inference.huggingface.co/models/microsoft/Phi-3.5-vision-instruct/v1/chat/completions' \
-H "Authorization: Bearer hf_***" \
-H 'Content-Type: application/json' \
-d '{
"model": "microsoft/Phi-3.5-vision-instruct",
"messages": [
{
"role": "user",
"content": [
{"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
{"type": "text", "text": "Describe this image in one sentence."}
]
}
],
"max_tokens": 500,
"stream": false
}'

```
</curl>

<python>
```py
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

for message in client.chat_completion(
model="microsoft/Phi-3.5-vision-instruct",
messages=[
{
"role": "user",
"content": [
{"type": "image_url", "image_url": {"url": image_url}},
{"type": "text", "text": "Describe this image in one sentence."},
],
}
],
max_tokens=500,
stream=True,
):
print(message.choices[0].delta.content, end="")
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.conversational_image-text-to-text).
</python>

<js>
```js
import { HfInference } from "@huggingface/inference";

const inference = new HfInference("hf_***");
const imageUrl = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg";

for await (const chunk of inference.chatCompletionStream({
model: "microsoft/Phi-3.5-vision-instruct",
messages: [
{
"role": "user",
"content": [
{"type": "image_url", "image_url": {"url": imageUrl}},
{"type": "text", "text": "Describe this image in one sentence."},
],
}
],
max_tokens: 500,
})) {
process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#chatcompletion).
To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#conversationalimage-text-to-text).
</js>

</inferencesnippet>
1 change: 0 additions & 1 deletion docs/api-inference/tasks/feature-extraction.md
@@ -45,7 +45,6 @@ curl https://api-inference.huggingface.co/models/thenlper/gte-large \
-d '{"inputs": "Today is a sunny day and I will get some ice cream."}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

1 change: 0 additions & 1 deletion docs/api-inference/tasks/fill-mask.md
@@ -41,7 +41,6 @@ curl https://api-inference.huggingface.co/models/google-bert/bert-base-uncased \
-d '{"inputs": "The answer to the universe is [MASK]."}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

1 change: 0 additions & 1 deletion docs/api-inference/tasks/image-classification.md
@@ -39,7 +39,6 @@ curl https://api-inference.huggingface.co/models/google/vit-base-patch16-224 \
-X POST \
--data-binary '@cats.jpg' \
-H "Authorization: Bearer hf_***"

```
</curl>

1 change: 0 additions & 1 deletion docs/api-inference/tasks/image-segmentation.md
@@ -39,7 +39,6 @@ curl https://api-inference.huggingface.co/models/nvidia/segformer-b0-finetuned-a
-X POST \
--data-binary '@cats.jpg' \
-H "Authorization: Bearer hf_***"

```
</curl>

114 changes: 114 additions & 0 deletions docs/api-inference/tasks/image-text-to-text.md
@@ -0,0 +1,114 @@
<!---
This markdown file has been generated from a script. Please do not edit it directly.
For more details, check out:
- the `generate.ts` script: https://github.com/huggingface/hub-docs/blob/main/scripts/api-inference/scripts/generate.ts
- the task template defining the sections in the page: https://github.com/huggingface/hub-docs/tree/main/scripts/api-inference/templates/task/image-text-to-text.handlebars
- the input jsonschema specifications used to generate the input markdown table: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/tasks/image-text-to-text/spec/input.json
- the output jsonschema specifications used to generate the output markdown table: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/tasks/image-text-to-text/spec/output.json
- the snippets used to generate the example:
- curl: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/snippets/curl.ts
- python: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/snippets/python.ts
- javascript: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/snippets/js.ts
- the "tasks" content for recommended models: https://huggingface.co/api/tasks
--->

## Image-Text to Text

Image-text-to-text models take in an image and text prompt and output text. These models are also called vision-language models, or VLMs. The difference from image-to-text models is that these models take an additional text input, not restricting the model to certain use cases like image captioning, and may also be trained to accept a conversation as input.

<Tip>

For more details about the `image-text-to-text` task, check out its [dedicated page](https://huggingface.co/tasks/image-text-to-text)! You will find examples and related materials.

</Tip>

### Recommended models

- [HuggingFaceM4/idefics2-8b-chatty](https://huggingface.co/HuggingFaceM4/idefics2-8b-chatty): Cutting-edge conversational vision language model that can take multiple image inputs.
- [microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct): Strong image-text-to-text model.

This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=image-text-to-text&sort=trending).

### Using the API


<inferencesnippet>

<curl>
```bash
curl https://api-inference.huggingface.co/models/HuggingFaceM4/idefics2-8b-chatty \
-X POST \
	-d '{"inputs": "No input example has been defined for this model task."}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"
```
</curl>

<python>
```py
import requests

API_URL = "https://api-inference.huggingface.co/models/HuggingFaceM4/idefics2-8b-chatty"
headers = {"Authorization": "Bearer hf_***"}

from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

for message in client.chat_completion(
    model="HuggingFaceM4/idefics2-8b-chatty",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }
    ],
    max_tokens=500,
    stream=True,
):
    print(message.choices[0].delta.content, end="")
```

> **Review comment (Contributor):** This is not expected (i.e. having `import requests` ... before `from huggingface_hub import InferenceClient`). I realized that https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct?inference_api=true has a problem: the model doesn't have a chat template and is therefore not tagged as "conversational", which creates this weird side effect.
>
> So I see 3 independent things to correct here:
>
> 1. It would be nice to recommend meta-llama/Llama-3.2-11B-Vision-Instruct first on the image-text-to-text task page (to update here).
> 2. We should fix the "conversational" detection in moon-landing. At the moment it's based only on the presence of a chat template; however, idefics2-8b-chatty seems to use `"use_default_system_prompt": true` instead. @Rocketknight1, is it safe to assume that a model with no chat template but this parameter set to `true` is in fact a conversational model? And if not, which parameter could we check?
> 3. For non-conversational image-text-to-text models (do they even exist?), we should fix the snippet generator so that only the `requests`-based snippet is displayed instead of this weird combination.
>
> cc @osanseviero as well for viz'

> **@hanouticelina (Contributor Author, Oct 4, 2024):** Just to add: HuggingFaceM4/idefics2-8b-chatty has the `chat_template` defined in `processor_config.json`. The `tokenizer.chat_template` attribute is supposed to be saved in the `tokenizer_config.json` file. I guess the template was set using `transformers.ProcessorMixin` instead.

> **@hanouticelina (Contributor Author, Oct 4, 2024):** For the 3rd point, pinging @mishig25 since it's related to huggingface.js/pull/938. Do you think it's okay to map `image-text-to-text` to `snippetBasic` instead and define the task input here?

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.image_text-to-text).
</python>

<js>
```js
async function query(data) {
const response = await fetch(
"https://api-inference.huggingface.co/models/HuggingFaceM4/idefics2-8b-chatty",
{
headers: {
Authorization: "Bearer hf_***",
"Content-Type": "application/json",
},
method: "POST",
body: JSON.stringify(data),
}
);
const result = await response.json();
return result;
}

query({"inputs": "No input example has been defined for this model task."}).then((response) => {
console.log(JSON.stringify(response));
});
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#imagetext-to-text).
</js>

</inferencesnippet>



### API specification

For the API specification of conversational image-text-to-text models, please refer to the [Chat Completion API documentation](https://huggingface.co/docs/api-inference/tasks/chat-completion#api-specification).
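The snippets above pass a public image URL. As a sketch under an assumption worth verifying for your target model and deployment: many OpenAI-compatible chat-completion servers also accept a base64 `data:` URL in the same `image_url` field, which lets you send a local image (the image bytes below are placeholders):

```python
import base64

# Placeholder standing in for real JPEG bytes, e.g. open("cat.jpg", "rb").read().
image_bytes = b"placeholder image bytes"
data_url = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode("utf-8")

# A user message in the same shape as the chat-completion examples above,
# with the data: URL substituted for the public image URL.
message = {
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": data_url}},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}
# `message` can then be placed in the `messages` list of a chat-completion request.
```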


1 change: 0 additions & 1 deletion docs/api-inference/tasks/object-detection.md
@@ -40,7 +40,6 @@ curl https://api-inference.huggingface.co/models/facebook/detr-resnet-50 \
-X POST \
--data-binary '@cats.jpg' \
-H "Authorization: Bearer hf_***"

```
</curl>

2 changes: 1 addition & 1 deletion docs/api-inference/tasks/question-answering.md
@@ -26,6 +26,7 @@ For more details about the `question-answering` task, check out its [dedicated p

- [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2): A robust baseline model for most question answering domains.
- [distilbert/distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad): Small yet robust model that can answer questions.
- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A special model that can answer questions from tables.

This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=question-answering&sort=trending).

@@ -41,7 +42,6 @@ curl https://api-inference.huggingface.co/models/deepset/roberta-base-squad2 \
-d '{"inputs": { "question": "What is my name?", "context": "My name is Clara and I live in Berkeley." }}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

1 change: 0 additions & 1 deletion docs/api-inference/tasks/summarization.md
@@ -40,7 +40,6 @@ curl https://api-inference.huggingface.co/models/facebook/bart-large-cnn \
-d '{"inputs": "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>
