[Bot] Update API inference documentation #1513

Closed
1 change: 0 additions & 1 deletion docs/api-inference/tasks/question-answering.md
@@ -26,7 +26,6 @@ For more details about the `question-answering` task, check out its [dedicated p

- [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2): A robust baseline model for most question answering domains.
- [distilbert/distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad): Small yet robust model that can answer questions.
-- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A special model that can answer questions from tables.

Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=question-answering&sort=trending).

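For context, a minimal sketch of how one of the recommended models listed above can be called through the Inference API; the model choice (`deepset/roberta-base-squad2`), the `hf_***` token, and the example inputs are illustrative, and the `{"question", "context"}` payload follows the standard schema for the `question-answering` task rather than anything introduced by this PR:

```py
import requests

# Illustrative sketch: query a recommended question-answering model.
# The repo id and token are placeholders; the payload uses the task's
# standard {"question", "context"} input format.
API_URL = "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "question": "What is my name?",
        "context": "My name is Clara and I live in Berkeley.",
    }
})
print(output)  # e.g. {"answer": "Clara", "score": 0.97, "start": 11, "end": 16}
```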
7 changes: 3 additions & 4 deletions docs/api-inference/tasks/table-question-answering.md
@@ -24,7 +24,6 @@ For more details about the `table-question-answering` task, check out its [dedic

### Recommended models

-- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A robust table question answering model.

Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=table-question-answering&sort=trending).

@@ -35,7 +34,7 @@ Explore all available models and find the one that suits you best [here](https:/

<curl>
```bash
-curl https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq \
+curl https://api-inference.huggingface.co/models/<REPO_ID> \
-X POST \
-d '{"inputs": { "query": "How many stars does the transformers repository have?", "table": { "Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"], "Contributors": ["651", "77", "34"], "Programming language": [ "Python", "Python", "Rust, Python and NodeJS" ] } }}' \
-H 'Content-Type: application/json' \
@@ -47,7 +46,7 @@ curl https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq
```py
import requests

-API_URL = "https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq"
+API_URL = "https://api-inference.huggingface.co/models/<REPO_ID>"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
@@ -78,7 +77,7 @@ To use the Python client, see `huggingface_hub`'s [package reference](https://hu
```js
async function query(data) {
const response = await fetch(
"https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq",
"https://api-inference.huggingface.co/models/<REPO_ID>",
{
headers: {
Authorization: "Bearer hf_***",
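For reference, a minimal sketch of the full request that the truncated Python snippet above points at, assuming the usual `requests.post` pattern; `<REPO_ID>` is the placeholder this PR introduces (for example, the previously hard-coded `google/tapas-base-finetuned-wtq`), and the body of `query` and the printed output shape are assumptions rather than part of the diff:

```py
import requests

# Assumed completion of the truncated snippet above: <REPO_ID> stands for
# any table-question-answering model (e.g. google/tapas-base-finetuned-wtq)
# and hf_*** for a real access token.
API_URL = "https://api-inference.huggingface.co/models/<REPO_ID>"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "query": "How many stars does the transformers repository have?",
        "table": {
            "Repository": ["Transformers", "Datasets", "Tokenizers"],
            "Stars": ["36542", "4512", "3934"],
        },
    }
})
print(output)  # e.g. {"answer": "36542", "coordinates": [[0, 1]], "cells": ["36542"]}
```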