Standardize references to file (#1136)
* Standardize references to file

* chore

* chore
mishig25 authored Nov 30, 2023
1 parent fe4166c commit 8c89a5d
Showing 13 changed files with 22 additions and 22 deletions.
2 changes: 1 addition & 1 deletion datasetcard.md
@@ -110,4 +110,4 @@ train-eval-index:

Valid license identifiers can be found in [our docs](https://huggingface.co/docs/hub/repositories-licenses).

For the full dataset card template, see: [https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md).
For the full dataset card template, see: [datasetcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md).
2 changes: 1 addition & 1 deletion docs/hub/datasets-cards.md
@@ -37,7 +37,7 @@ When creating a README.md file in a dataset repository on the Hub, use Metadata
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-metadata-ui-dark.png"/>
</div>
To see metadata fields, see the detailed dataset card metadata specification [here](https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1).
For a list of the metadata fields, see the detailed [Dataset Card specifications](https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1).
### Dataset card creation guide
2 changes: 1 addition & 1 deletion docs/hub/model-card-annotated.md
@@ -3,7 +3,7 @@

## Template

[https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)
[modelcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)


## Directions
2 changes: 1 addition & 1 deletion docs/hub/model-card-guidebook.md
@@ -8,7 +8,7 @@ Our work presents a view of where we think model cards stand right now and where

With the launch of this Guidebook, we introduce several new resources and connect together previous work on Model Cards:

1) An updated Model Card template, released in [the `huggingface_hub` library](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md), drawing together Model Card work in academia and throughout the industry.
1) An updated Model Card template, released as the [modelcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md) in the `huggingface_hub` library, drawing together Model Card work in academia and throughout the industry.

2) An [Annotated Model Card Template](./model-card-annotated), which details how to fill the card out.

2 changes: 1 addition & 1 deletion docs/hub/model-cards.md
@@ -152,7 +152,7 @@ If the license is not available via a URL you can link to a LICENSE stored in th

### Evaluation Results

You can even specify your **model's eval results** in a structured way, which will allow the Hub to parse, display, and even link them to Papers With Code leaderboards. See how to format this data [in the metadata spec](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1).
You can even specify your **model's eval results** in a structured way, which will allow the Hub to parse, display, and even link them to Papers With Code leaderboards. See how to format this data in the [Model Card specifications](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1).

Here is a partial example (omitting the eval results part):
```yaml
4 changes: 2 additions & 2 deletions docs/hub/models-adding-libraries.md
@@ -88,7 +88,7 @@ We recommend adding a code snippet to explain how to use a model in your downstr
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/code_snippet-dark.png"/>
</div>

Add a code snippet by updating the [Libraries Typescript file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts) with instructions for your model. For example, the [Asteroid](https://huggingface.co/asteroid-team) integration includes a brief code snippet for how to load and use an Asteroid model:
Add a code snippet by updating the [model-libraries.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts) with instructions for your model. For example, the [Asteroid](https://huggingface.co/asteroid-team) integration includes a brief code snippet for how to load and use an Asteroid model:

```typescript
const asteroid = (model: ModelData) =>
@@ -184,7 +184,7 @@ All third-party libraries are Dockerized, so you can install the dependencies yo

### Register your library's supported tasks on the Hub

To register the tasks supported by your library on the hub you'll need to add a mapping from your library name to its supported tasks in this [file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts). This will ensure the inference API is registered for tasks supported by your model. This file is automatically generated as part of a [GitHub Action](https://github.com/huggingface/api-inference-community/actions/workflows/python-api-export-tasks.yaml) in the [
To register the tasks supported by your library on the Hub, you'll need to add a mapping from your library name to its supported tasks in the [library-to-tasks.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts). This will ensure the Inference API is registered for tasks supported by your model. This file is automatically generated as part of a [GitHub Action](https://github.com/huggingface/api-inference-community/actions/workflows/python-api-export-tasks.yaml) in the [
api-inference-community repository](https://github.com/huggingface/api-inference-community). You can see an example of this [here](https://github.com/huggingface/api-inference-community/actions/runs/5126874210/jobs/9221890853#step:5:8).

With these simple but powerful methods, you brought the full functionality of the Hub into your library. Users can download files stored on the Hub from your library with `hf_hub_download`, create repositories with `create_repo`, and upload files with `upload_file`. You also set up Inference API with your library, allowing users to interact with your models on the Hub from inside a browser.
2 changes: 1 addition & 1 deletion docs/hub/models-inference.md
@@ -21,7 +21,7 @@ Specify `inference: false` in your model card's metadata.
## Why don't I see an inference widget or why can't I use the inference API?

For some tasks, there might not be support in the Inference API, and hence there is no widget.
For all libraries (except πŸ€— Transformers), there is a [mapping](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts) of library to supported tasks in the API. When a model repository has a task that is not supported by the repository library, the repository has `inference: false` by default.
For all libraries (except πŸ€— Transformers), the [library-to-tasks.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts) maps each library to its tasks supported in the API. When a model repository has a task that is not supported by the repository's library, the repository has `inference: false` by default.


## Can I send large volumes of requests? Can I get accelerated APIs?
2 changes: 1 addition & 1 deletion docs/hub/models-libraries.md
@@ -2,7 +2,7 @@

The Hub has support for dozens of libraries in the Open Source ecosystem. Thanks to the `huggingface_hub` Python library, it's easy to enable sharing your models on the Hub. The Hub supports many libraries, and we're working on expanding this support! We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward.

The table below summarizes the supported libraries and their level of integration. Find all our supported libraries [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts)!
The table below summarizes the supported libraries and their level of integration. Find all our supported libraries in the [model-libraries.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts)!

| Library | Description | Inference API | Widgets | Download from Hub | Push to Hub |
|-----------------------------------------------------------------------------|--------------------------------------------------------------------------------------|---|---:|---|---|
4 changes: 2 additions & 2 deletions docs/hub/models-widgets.md
@@ -86,7 +86,7 @@ widget:
- src: nested/directory/sample1.flac
```

We provide example inputs for some languages and most widget types in [the default-widget-inputs.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/default-widget-inputs.ts). If some examples are missing, we welcome PRs from the community to add them!
We provide example inputs for some languages and most widget types in the [default-widget-inputs.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/default-widget-inputs.ts). If some examples are missing, we welcome PRs from the community to add them!

## Example outputs

@@ -152,7 +152,7 @@ We can also surface the example outputs in the Hugging Face UI, for instance, fo

## What are all the possible task/widget types?

You can find all the supported tasks [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts).
You can find all the supported tasks in the [pipelines.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts).

Here are some links to examples:

4 changes: 2 additions & 2 deletions docs/sagemaker/getting-started.md
@@ -6,7 +6,7 @@ The get started guide will show you how to quickly use Hugging Face on Amazon Sa

<iframe width="560" height="315" src="https://www.youtube.com/embed/pYqjCzoyWyo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/sagemaker-notebook.ipynb) to follow along!
πŸ““ Open the [sagemaker-notebook.ipynb file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/sagemaker-notebook.ipynb) to follow along!

## Installation and setup

@@ -90,7 +90,7 @@ test_dataset.save_to_disk(test_input_path)

Create a Hugging Face Estimator to handle end-to-end SageMaker training and deployment. The most important parameters to pay attention to are:

* `entry_point` refers to the fine-tuning script which you can find [here](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py).
* `entry_point` refers to the fine-tuning script, which you can find in the [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py).
* `instance_type` refers to the SageMaker instance that will be launched. Take a look [here](https://aws.amazon.com/sagemaker/pricing/) for a complete list of instance types.
* `hyperparameters` refers to the training hyperparameters the model will be fine-tuned with.
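As a minimal sketch of how these parameters fit together (the script name, source directory, instance type, and hyperparameter values below are illustrative assumptions, not values from this guide):

```python
# Illustrative Estimator arguments; in a real run you would unpack these into
# sagemaker.huggingface.HuggingFace(**estimator_args, role=..., ...) and call .fit().
estimator_args = {
    "entry_point": "train.py",          # fine-tuning script to execute (assumed name)
    "source_dir": "./scripts",          # directory containing the script (assumed layout)
    "instance_type": "ml.p3.2xlarge",   # SageMaker instance to launch (example type)
    "instance_count": 1,
    "hyperparameters": {                # forwarded to train.py as command-line arguments
        "epochs": 1,
        "per_device_train_batch_size": 32,
        "model_name_or_path": "distilbert-base-uncased",
    },
}
```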

6 changes: 3 additions & 3 deletions docs/sagemaker/inference.md
@@ -69,7 +69,7 @@ There are two ways to deploy your Hugging Face model trained in SageMaker:
- Deploy it after your training has finished.
- Deploy your saved model at a later time from S3 with the `model_data`.

πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb) for an example of how to deploy a model from S3 to SageMaker for inference.
πŸ““ Open the [deploy_transformer_model_from_s3.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb) for an example of how to deploy a model from S3 to SageMaker for inference.

### Deploy after training

@@ -243,7 +243,7 @@ After you run your request, you can delete the endpoint again with:
predictor.delete_endpoint()
```

πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb) for an example of how to deploy a model from the πŸ€— Hub to SageMaker for inference.
πŸ““ Open the [deploy_transformer_model_from_hf_hub.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb) for an example of how to deploy a model from the πŸ€— Hub to SageMaker for inference.

## Run batch transform with πŸ€— Transformers and SageMaker

@@ -316,7 +316,7 @@ The `input.jsonl` looks like this:
{"inputs":"this movie is amazing"}
```
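As a hedged sketch, such a JSON Lines file can be produced with the standard library (the sample texts and file name here are placeholders):

```python
import json

# Write one JSON object per line -- the JSON Lines layout the batch
# transform job reads, one {"inputs": ...} payload per record.
samples = ["this movie is amazing", "this movie is terrible"]
with open("input.jsonl", "w") as f:
    for text in samples:
        f.write(json.dumps({"inputs": text}) + "\n")

# Each line parses back independently:
with open("input.jsonl") as f:
    records = [json.loads(line) for line in f]
```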

πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb) for an example of how to run a batch transform job for inference.
πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb) for an example of how to run a batch transform job for inference.

## User defined code and modules

10 changes: 5 additions & 5 deletions docs/sagemaker/train.md
@@ -93,7 +93,7 @@ if __name__ == "__main__":

_Note that SageMaker doesn’t support argparse actions. For example, if you want to use a boolean hyperparameter, specify `type` as `bool` in your script and provide an explicit `True` or `False` value._
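A small sketch of that pattern (the `--do_eval` flag is a hypothetical example; note that a bare `type=bool` would turn the string `"False"` into `True`, so an explicit converter is safer):

```python
import argparse

def str_to_bool(value: str) -> bool:
    # SageMaker passes hyperparameters to the script as command-line strings,
    # so "False" must be converted explicitly -- bool("False") would be True.
    return value.lower() in ("true", "1")

parser = argparse.ArgumentParser()
parser.add_argument("--do_eval", type=str_to_bool, default=False)

args = parser.parse_args(["--do_eval", "False"])
print(args.do_eval)  # False
```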

Look [here](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py) for a complete example of a πŸ€— Transformers training script.
Look at the [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py) for a complete example of a πŸ€— Transformers training script.

## Training Output Management

@@ -109,7 +109,7 @@ Run πŸ€— Transformers training scripts on SageMaker by creating a [Hugging Face

1. `entry_point` specifies which fine-tuning script to use.
2. `instance_type` specifies an Amazon instance to launch. Refer [here](https://aws.amazon.com/sagemaker/pricing/) for a complete list of instance types.
3. `hyperparameters` specifies training hyperparameters. View additional available hyperparameters [here](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py).
3. `hyperparameters` specifies training hyperparameters. View additional available hyperparameters in the [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py).

The following code sample shows how to train with a custom script `train.py` with three hyperparameters (`epochs`, `per_device_train_batch_size`, and `model_name_or_path`):

@@ -202,7 +202,7 @@ huggingface_estimator = HuggingFace(
)
```

πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb) for an example of how to run the data parallelism library with TensorFlow.
πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb) for an example of how to run the data parallelism library with TensorFlow.

### Model parallelism

@@ -247,7 +247,7 @@ huggingface_estimator = HuggingFace(
)
```

πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) for an example of how to run the model parallelism library.
πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) for an example of how to run the model parallelism library.

## Spot instances

@@ -288,7 +288,7 @@ huggingface_estimator = HuggingFace(
# Managed Spot Training savings: 70.0%
```

πŸ““ Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/05_spot_instances/sagemaker-notebook.ipynb) for an example of how to use spot instances.
πŸ““ Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/05_spot_instances/sagemaker-notebook.ipynb) for an example of how to use spot instances.

## Git repository

2 changes: 1 addition & 1 deletion modelcard.md
@@ -47,4 +47,4 @@ model-index:
This markdown file contains the spec for the modelcard metadata regarding evaluation parameters. When present, and only then, 'model-index', 'datasets' and 'license' contents will be verified when git pushing changes to your README.md file.
Valid license identifiers can be found in [our docs](https://huggingface.co/docs/hub/repositories-licenses).

For the full model card template, see: [https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md).
For the full model card template, see: [modelcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md).
