diff --git a/datasetcard.md b/datasetcard.md
index f30fd25fe..8220fc521 100644
--- a/datasetcard.md
+++ b/datasetcard.md
@@ -26,7 +26,7 @@ size_categories:
 source_datasets:
 - {source_dataset_0} # Example: wikipedia
 - {source_dataset_1} # Example: laion/laion-2b
-task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
+task_categories: # Full list at https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts
 - {task_0} # Example: question-answering
 - {task_1} # Example: image-classification
 task_ids:
diff --git a/docs/hub/model-cards-user-studies.md b/docs/hub/model-cards-user-studies.md
index 422602009..0239ef091 100644
--- a/docs/hub/model-cards-user-studies.md
+++ b/docs/hub/model-cards-user-studies.md
@@ -11,7 +11,7 @@ We conducted a user study, with the aim of validating a literature informed mode
 During our examination of the state of the art of model cards, we noted recurring sections from the top ~100 downloaded models on the Hub that had model cards. From this analysis we catalogued the top recurring model card sections and recurring information; this, coupled with the structure of the Bloom model card, led us to the initial version of a standard model card structure.
 
-As we began to structure our user studies, two variations of model cards - that made use of the [initial model card structure](http://github.com/huggingface/hub-docs/docs/hub/model-card-annotated.md) - were used as interactive demonstrations. The aim of these demo’s was to understand not only the different user perspectives on the visual elements of the model card’s but also the content presented to users. The {desired} outcome would enable us to further understand what makes a model card both easier to read, still providing some level of interactivity within the model cards, all while presenting the information in an easily understandable [approachable] manner.
+As we began to structure our user studies, two variations of model cards - that made use of the [initial model card structure](./model-card-annotated) - were used as interactive demonstrations. The aim of these demos was to understand not only the different user perspectives on the visual elements of the model cards but also on the content presented to users. The desired outcome would enable us to further understand what makes a model card easier to read while still providing some level of interactivity, all while presenting the information in an approachable manner.
 
 * **Stakeholder Perspectives**
 
diff --git a/docs/hub/models-adding-libraries.md b/docs/hub/models-adding-libraries.md
index 3de921c65..e4cc6b377 100644
--- a/docs/hub/models-adding-libraries.md
+++ b/docs/hub/models-adding-libraries.md
@@ -88,7 +88,7 @@ We recommend adding a code snippet to explain how to use a model in your downstr
 
 
 
-Add a code snippet by updating the [Libraries Typescript file](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts) with instructions for your model. For example, the [Asteroid](https://huggingface.co/asteroid-team) integration includes a brief code snippet for how to load and use an Asteroid model:
+Add a code snippet by updating the [Libraries Typescript file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts) with instructions for your model. For example, the [Asteroid](https://huggingface.co/asteroid-team) integration includes a brief code snippet for how to load and use an Asteroid model:
 
 ```typescript
 const asteroid = (model: ModelData) =>
@@ -168,7 +168,7 @@ All third-party libraries are Dockerized, so you can install the dependencies yo
 }
 ```
 
- * For each task your library supports, modify the `app/pipelines/task_name.py` files accordingly. We have also added an `IMPLEMENT_THIS` flag in the pipeline files to guide you. If there isn't a pipeline that supports your task, feel free to add one. Open an [issue](https://github.com/huggingface/hub-docs/issues/new) here, and we will be happy to help you.
+ * For each task your library supports, modify the `app/pipelines/task_name.py` files accordingly. We have also added an `IMPLEMENT_THIS` flag in the pipeline files to guide you. If there isn't a pipeline that supports your task, feel free to add one. Open an [issue](https://github.com/huggingface/huggingface.js/issues/new), and we will be happy to help you.
 * Add your model and task to the `tests/test_api.py` file. For example, if you have a text generation model:
 ```python
@@ -184,7 +184,7 @@ All third-party libraries are Dockerized, so you can install the dependencies yo
 }
 ```
 
 ### Register your library's supported tasks on the Hub
 
-To register the tasks supported by your library on the hub you'll need to add a mapping from your library name to its supported tasks in this [file](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/LibrariesToTasks.ts). This will ensure the inference API is registered for tasks supported by your model. This file is automatically generated as part of a [GitHub Action](https://github.com/huggingface/api-inference-community/actions/workflows/python-api-export-tasks.yaml) in the [
+To register the tasks supported by your library on the Hub, you'll need to add a mapping from your library name to its supported tasks in this [file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts). This will ensure the inference API is registered for tasks supported by your model. This file is automatically generated as part of a [GitHub Action](https://github.com/huggingface/api-inference-community/actions/workflows/python-api-export-tasks.yaml) in the [
 api-inference-community repository](https://github.com/huggingface/api-inference-community). You can see an example of this [here](https://github.com/huggingface/api-inference-community/actions/runs/5126874210/jobs/9221890853#step:5:8).
 
 With these simple but powerful methods, you've brought the full functionality of the Hub into your library. Users can download files stored on the Hub from your library with `hf_hub_download`, create repositories with `create_repo`, and upload files with `upload_file`. You also set up Inference API with your library, allowing users to interact with your models on the Hub from inside a browser.
\ No newline at end of file
diff --git a/docs/hub/models-inference.md b/docs/hub/models-inference.md
index 0c52b29af..0f9b26b45 100644
--- a/docs/hub/models-inference.md
+++ b/docs/hub/models-inference.md
@@ -21,7 +21,7 @@ Specify `inference: false` in your model card's metadata.
 ## Why don't I see an inference widget or why can't I use the inference API?
 
 For some tasks, there might not be support in the inference API, and hence there is no widget.
-For all libraries (except 🤗 Transformers), there is a [mapping](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/LibrariesToTasks.ts) of library to supported tasks in the API. When a model repository has a task that is not supported by the repository library, the repository has `inference: false` by default.
+For all libraries (except 🤗 Transformers), there is a [mapping](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts) of library to supported tasks in the API. When a model repository has a task that is not supported by its library, the repository has `inference: false` by default.
 
 ## Can I send large volumes of requests? Can I get accelerated APIs?
 
diff --git a/docs/hub/models-libraries.md b/docs/hub/models-libraries.md
index 1adb5651e..c266bb013 100644
--- a/docs/hub/models-libraries.md
+++ b/docs/hub/models-libraries.md
@@ -2,7 +2,7 @@
 The Hub has support for dozens of libraries in the Open Source ecosystem. Thanks to the `huggingface_hub` Python library, it's easy to enable sharing your models on the Hub. The Hub supports many libraries, and we're working on expanding this support! We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward.
 
-The table below summarizes the supported libraries and their level of integration. Find all our supported libraries [here](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts)!
+The table below summarizes the supported libraries and their level of integration. Find all our supported libraries [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts)!
 
 | Library | Description | Inference API | Widgets | Download from Hub | Push to Hub |
 |-----------------------------------------------------------------------------|--------------------------------------------------------------------------------------|---|---:|---|---|
diff --git a/docs/hub/models-tasks.md b/docs/hub/models-tasks.md
index f38c617cc..7037d067b 100644
--- a/docs/hub/models-tasks.md
+++ b/docs/hub/models-tasks.md
@@ -69,20 +69,18 @@ The Hub allows users to filter models by a given task. To do this, you need to a
 
 1. Add the task type to `Types.ts`
 
-In [interfaces/Types.ts](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts), you need to do a couple of things
+In [huggingface.js/packages/tasks/src/pipelines.ts](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts), you need to do a couple of things:
 
 * Add the type to `PIPELINE_DATA`. Note that pipeline types are sorted into different categories (NLP, Audio, Computer Vision, and others).
-* You will also need to fill minor changes in the following files:
-    1. [tasks/src/const.ts](https://github.com/huggingface/hub-docs/blob/main/tasks/src/const.ts)
-    2. [tasks/src/tasksData.ts](https://github.com/huggingface/hub-docs/blob/main/tasks/src/tasksData.ts)
+* You will also need to make minor changes in [huggingface.js/packages/tasks/src/tasks/index.ts](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/tasks/index.ts)
 
 2. Choose an icon
 
-You can add an icon in the [lib/Icons](https://github.com/huggingface/hub-docs/tree/main/js/src/lib/components/Icons) directory. We usually choose carbon icons from https://icones.js.org/collection/carbon. Also add the icon to [PipelineIcon](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/components/PipelineIcon/PipelineIcon.svelte).
+You can add an icon in the [lib/Icons](https://github.com/huggingface/huggingface.js/tree/main/packages/widgets/src/lib/components/Icons) directory. We usually choose carbon icons from https://icones.js.org/collection/carbon. Also add the icon to [PipelineIcon](https://github.com/huggingface/huggingface.js/blob/main/packages/widgets/src/lib/components/PipelineIcon/PipelineIcon.svelte).
 
 ### Widget
 
-Once the task is in production, what could be more exciting than implementing some way for users to play directly with the models in their browser? 🤩 You can find all the widgets [here](https://huggingface-widgets.netlify.app/).
+Once the task is in production, what could be more exciting than implementing some way for users to play directly with the models in their browser? 🤩 You can find all the widgets [here](https://huggingface.co/spaces/huggingfacejs/inference-widgets).
 
-If you would be interested in contributing with a widget, you can look at the [implementation](https://github.com/huggingface/hub-docs/tree/main/js/src/lib/components/InferenceWidget/widgets) of all the widgets. You can also find WIP documentation on implementing a widget in https://github.com/huggingface/hub-docs/tree/main/js.
\ No newline at end of file
+If you are interested in contributing a widget, you can look at the [implementation](https://github.com/huggingface/huggingface.js/tree/main/packages/widgets/src/lib/components/InferenceWidget/widgets) of all the widgets.
diff --git a/docs/hub/models-widgets.md b/docs/hub/models-widgets.md
index 250abfbf4..0aca86684 100644
--- a/docs/hub/models-widgets.md
+++ b/docs/hub/models-widgets.md
@@ -86,7 +86,7 @@ widget:
 - src: nested/directory/sample1.flac
 ```
 
-We provide example inputs for some languages and most widget types in [the DefaultWidget.ts file](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/DefaultWidget.ts). If some examples are missing, we welcome PRs from the community to add them!
+We provide example inputs for some languages and most widget types in [the default-widget-inputs.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/default-widget-inputs.ts). If some examples are missing, we welcome PRs from the community to add them!
 
 ## Example outputs
 
@@ -152,7 +152,7 @@ We can also surface the example outputs in the Hugging Face UI, for instance, fo
 
 ## What are all the possible task/widget types?
 
-You can find all the supported tasks [here](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts).
+You can find all the supported tasks [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts).
 
 Here are some links to examples:
 
diff --git a/modelcard.md b/modelcard.md
index 2a5ea16ed..1ad2cea4f 100644
--- a/modelcard.md
+++ b/modelcard.md
@@ -7,7 +7,7 @@ language:
 license: {license} # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
 license_name: {license_name} # If license = other (license not in https://hf.co/docs/hub/repositories-licenses), specify an id for it here, like `my-license-1.0`.
 license_link: {license_link} # If license = other, specify "README" to link to that file inside the repo, or a URL to a remote file.
-library_name: {library_name} # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
+library_name: {library_name} # Optional. Example: keras or any library from https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts
 tags:
 - {tag_0} # Example: audio
 - {tag_1} # Example: automatic-speech-recognition
diff --git a/tasks-contribution-guide.md b/tasks-contribution-guide.md
index 4a26500e1..7d8d6023b 100644
--- a/tasks-contribution-guide.md
+++ b/tasks-contribution-guide.md
@@ -1,20 +1,3 @@
 ## Contributing to Tasks
 
-Welcome to the contribution guide to [Hugging Face Tasks](https://huggingface.co/tasks) and thank you for considering contributing to the community!
-
-### Philosophy behind Tasks
-
-The Task pages are made to lower the barrier of entry to understand a task that can be solved with machine learning and use or train a model to accomplish it. It's a collaborative documentation effort made to help out software developers, social scientists, or anyone with no background in machine learning that is interested in understanding how machine learning models can be used to solve a problem.
-
-The task pages avoid jargon to let everyone understand the documentation, and if specific terminology is needed, it is explained on the most basic level possible. This is important to understand before contributing to Tasks: at the end of every task page, the user is expected to be able to find and pull a model from the Hub and use it on their data and see if it works for their use case to come up with a proof of concept.
-
-### How to Contribute
-You can open a pull request to [hub-docs repository](https://github.com/huggingface/hub-docs) to contribute a new documentation about a new task. Under `tasks/src` we have a folder for every task that contains two files, `about.md` and `data.ts`. `about.md` contains the markdown part of the page, use cases, resources and minimal code block to infer a model that belongs to the task. `data.ts` contains redirections to canonical models and datasets, metrics, the schema of the task and the information the inference widget needs.
-
-![Anatomy of a Task Page](tasks/assets/contribution-guide/anatomy.png)
-
-We have `tasks/assets` that contains data used in the inference widget and images used in the markdown file. The last file is `const.ts`, which has the task to library mapping (e.g. spacy to token-classification) where you can add a library. They will look in the top right corner like below.
-
-![Libraries of a Task](tasks/assets/contribution-guide/libraries.png)
-
-This might seem overwhelming, but you don't necessarily need to add all of these in one pull request or on your own, you can simply contribute one section. Feel free to ask for help whenever you need.
\ No newline at end of file
+Please refer to [this page](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/README.md).
\ No newline at end of file
diff --git a/tasks/src/text-to-image/about.md b/tasks/src/text-to-image/about.md
index 1b898bec8..6d442bbb6 100644
--- a/tasks/src/text-to-image/about.md
+++ b/tasks/src/text-to-image/about.md
@@ -17,7 +17,7 @@ Architects can utilise the models to construct an environment based out on the r
 
 ## Task Variants
 
-You can contribute variants of this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/text-to-image/about.md).
+You can contribute variants of this task [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/tasks/text-to-image/about.md).
 
 ## Inference
 
diff --git a/tasks/src/zero-shot-image-classification/about.md b/tasks/src/zero-shot-image-classification/about.md
index 7631c4f5f..5ebcd05d6 100644
--- a/tasks/src/zero-shot-image-classification/about.md
+++ b/tasks/src/zero-shot-image-classification/about.md
@@ -22,7 +22,7 @@ Action recognition is the task of identifying when a person in an image/video is
 
 ## Task Variants
 
-You can contribute variants of this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/zero-shot-image-classification/about.md).
+You can contribute variants of this task [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/tasks/zero-shot-image-classification/about.md).
 
 ## Inference
 
@@ -59,7 +59,7 @@ The highest probability is 0.995 for the label cat and dog
 
 ## Useful Resources
 
-You can contribute useful resources about this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/zero-shot-image-classification/about.md).
+You can contribute useful resources about this task [here](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/tasks/zero-shot-image-classification/about.md).
 
 Check out the [Zero-shot image classification task guide](https://huggingface.co/docs/transformers/tasks/zero_shot_image_classification).
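
The Asteroid snippet in the `models-adding-libraries.md` hunk above is cut off at the hunk boundary. For reference, such snippet functions map a model's metadata to the Python usage code shown on the model page. Below is a minimal sketch of what the completed entry might look like; the `ModelData` stand-in and the repo id are illustrative placeholders, not the verbatim contents of `model-libraries.ts`.

```typescript
// Illustrative stand-in for the ModelData type exported by
// huggingface.js/packages/tasks; only the field used below is declared.
interface ModelData {
  id: string;
}

// A library snippet maps a model's metadata to the Python usage code the
// Hub displays; the first line matches the truncated context in the diff.
const asteroid = (model: ModelData): string =>
  `from asteroid.models import BaseModel

model = BaseModel.from_pretrained("${model.id}")`;

// Example rendering for a placeholder repo id:
console.log(asteroid({ id: "asteroid-team/some-model" }));
```

Similarly, the library-to-tasks mapping referenced in `models-adding-libraries.md` and `models-inference.md` is, in essence, a record from library name to the pipeline tasks the Inference API should serve for it. A minimal sketch, assuming a plain record type (the exported name and exact types in `library-to-tasks.ts` may differ):

```typescript
// Sketch of a library → supported-tasks mapping; entries are illustrative.
const libraryToTasks: Record<string, string[]> = {
  asteroid: ["audio-to-audio"],
  speechbrain: ["audio-classification", "automatic-speech-recognition"],
};
```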