Add documentation to be hosted in hf.co/docs (#67)
* Add `docs/source` initial structure (WIP)

* Add `docs/source/index.mdx`

* Add `.github/workflows/doc-*.yml`

* Apply suggestions from code review

Co-authored-by: Jeff Boudier <[email protected]>
Co-authored-by: pagezyhf <[email protected]>

* Update `docs/source/index.mdx`

* Update `docs/source/_toctree.yml` to match title

* Add more links to `docs/source/index.mdx`

* Update `thumbnail.png`

* Fix URL to updated `thumbnail.png`

* Apply suggestions from code review

Co-authored-by: Mishig <[email protected]>

* Update `docs/source/index.mdx`

---------

Co-authored-by: Jeff Boudier <[email protected]>
Co-authored-by: pagezyhf <[email protected]>
Co-authored-by: Mishig <[email protected]>
4 people authored Sep 5, 2024
1 parent 783ab33 commit 045c1a7
Showing 5 changed files with 174 additions and 0 deletions.
22 changes: 22 additions & 0 deletions .github/workflows/doc-build.yml
@@ -0,0 +1,22 @@
name: Build Documentation

on:
  push:
    branches:
      - main
      - doc-builder*
    paths:
      - docs/source/**
      - .github/workflows/doc-build.yml

jobs:
  build:
    uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
    with:
      commit_sha: ${{ github.sha }}
      package: Google-Cloud-Containers
      package_name: google-cloud
      additional_args: --not_python_module
    secrets:
      token: ${{ secrets.HUGGINGFACE_PUSH }}
      hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
21 changes: 21 additions & 0 deletions .github/workflows/doc-pr-build.yml
@@ -0,0 +1,21 @@
name: Build PR Documentation

on:
  pull_request:
    paths:
      - docs/source/**
      - .github/workflows/doc-pr-build.yml

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  build:
    uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
    with:
      commit_sha: ${{ github.event.pull_request.head.sha }}
      pr_number: ${{ github.event.number }}
      package: Google-Cloud-Containers
      package_name: google-cloud
      additional_args: --not_python_module
16 changes: 16 additions & 0 deletions .github/workflows/doc-pr-upload.yml
@@ -0,0 +1,16 @@
name: Upload PR Documentation

on:
  workflow_run:
    workflows: ["Build PR Documentation"]
    types:
      - completed

jobs:
  build:
    uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
    with:
      package_name: google-cloud
    secrets:
      hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
      comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
4 changes: 4 additions & 0 deletions docs/source/_toctree.yml
@@ -0,0 +1,4 @@
- sections:
  - local: index
    title: Hugging Face on Google Cloud
  title: Getting Started
111 changes: 111 additions & 0 deletions docs/source/index.mdx
@@ -0,0 +1,111 @@
# Hugging Face on Google Cloud

![Hugging Face x Google Cloud](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/Google-Cloud-Containers/thumbnail.png)

Hugging Face collaborates with Google across open science, open source, cloud, and hardware to enable companies to build their own AI with the latest open models from Hugging Face and the latest cloud and hardware features from Google Cloud.

Hugging Face enables new experiences for Google Cloud customers. They can easily train and deploy Hugging Face models on Google Kubernetes Engine (GKE) and Vertex AI, on any hardware available in Google Cloud using Hugging Face Deep Learning Containers (DLCs).

## Train and Deploy Models on Google Cloud with Hugging Face Deep Learning Containers

Hugging Face built Deep Learning Containers (DLCs) for Google Cloud customers to run any of their machine learning workloads in an optimized environment, with no configuration or maintenance on their part. These are Docker images pre-installed with deep learning frameworks and libraries such as 🤗 Transformers, 🤗 Datasets, and 🤗 Tokenizers. The DLCs allow you to directly serve and train any model, skipping the complicated process of building and optimizing your serving and training environments from scratch.

For training, our DLCs are available for PyTorch via 🤗 Transformers. They include support for training on both GPUs and TPUs with libraries such as 🤗 TRL, Sentence Transformers, or 🧨 Diffusers.

For inference, we have a general-purpose PyTorch inference DLC for serving models trained with any of the frameworks mentioned above, on both CPU and GPU. There is also the Text Generation Inference (TGI) DLC for high-performance text generation with LLMs on both GPU and TPU, as well as a Text Embeddings Inference (TEI) DLC for high-performance serving of embedding models on both CPU and GPU.

The DLCs are hosted in [Google Cloud Artifact Registry](https://console.cloud.google.com/artifacts/docker/deeplearning-platform-release/us/gcr.io) and can be used from any Google Cloud service such as Google Kubernetes Engine (GKE), Vertex AI, or Cloud Run (in preview).

Hugging Face DLCs are open source and licensed under Apache 2.0 within the [Google-Cloud-Containers](https://github.com/huggingface/Google-Cloud-Containers) repository. For premium support, our [Expert Support Program](https://huggingface.co/support) gives you direct dedicated support from our team.

You have two options to take advantage of these DLCs as a Google Cloud customer:

1. To [get started](https://huggingface.co/blog/google-cloud-model-garden), you can use our no-code integrations within Vertex AI or GKE.
2. For more advanced scenarios, you can pull the containers from the Google Cloud Artifact Registry directly in your environment. [Here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples) is a list of notebook examples.

## Features & benefits 🔥

The Hugging Face DLCs provide ready-to-use, tested environments to train and deploy Hugging Face models. They can be used in combination with Google Cloud offerings including Google Kubernetes Engine (GKE) and Vertex AI. GKE is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using Google Cloud's infrastructure. Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize Large Language Models (LLMs).

### One command is all you need

With the new Hugging Face DLCs, train cutting-edge Transformers-based NLP models in a single line of code. The Hugging Face PyTorch DLCs for training come with all the libraries installed to run a single command, e.g. via the TRL CLI, to fine-tune LLMs in any setting, whether single-GPU, single-node multi-GPU, or beyond.

### Accelerate machine learning from science to production

In addition to the Hugging Face DLCs, we created a first-class Hugging Face library for inference, [`huggingface-inference-toolkit`](https://github.com/huggingface/huggingface-inference-toolkit), that comes with the Hugging Face PyTorch DLCs for inference, with full support for serving any PyTorch model on Google Cloud.

Deploy your trained models for inference with just one more line of code, or select [any of the 170,000+ publicly available models from the Hugging Face Hub](https://huggingface.co/models?library=pytorch,transformers&sort=trending) and deploy them on either Vertex AI or GKE.

### High-performance text generation and embedding

Besides the PyTorch-oriented DLCs, Hugging Face also provides high-performance inference for both text generation and embedding models via the Hugging Face DLCs for both [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) and [Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference), respectively.

The Hugging Face DLC for TGI enables you to deploy [any of the 140,000+ text generation models supported by TGI on the Hugging Face Hub](https://huggingface.co/models?other=text-generation-inference&sort=trending), or any custom model as long as [its architecture is supported within TGI](https://huggingface.co/docs/text-generation-inference/supported_models).
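Once deployed, the TGI DLC serves TGI's standard REST API. The sketch below shows a minimal client for its `/generate` endpoint using only the Python standard library; the endpoint URL is a placeholder for wherever your service is exposed, not something the DLC prescribes.

```python
import json
import urllib.request

# Placeholder: replace with the URL of your deployed TGI service.
TGI_URL = "http://localhost:8080/generate"


def build_generate_request(prompt: str, max_new_tokens: int = 64) -> bytes:
    # TGI's /generate endpoint expects a JSON body with "inputs" and an
    # optional "parameters" object controlling generation.
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }
    return json.dumps(payload).encode("utf-8")


def generate(prompt: str) -> str:
    req = urllib.request.Request(
        TGI_URL,
        data=build_generate_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # TGI responds with a JSON object containing "generated_text".
        return json.loads(resp.read())["generated_text"]
```

The same request shape works whether the container runs on GKE, Vertex AI, or locally, since it is part of TGI's API rather than the deployment platform.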

The Hugging Face DLC for TEI enables you to deploy [any of the 10,000+ embedding, re-ranking, or sequence classification models supported by TEI on the Hugging Face Hub](https://huggingface.co/models?other=text-embeddings-inference&sort=trending), or any custom model as long as [its architecture is supported within TEI](https://huggingface.co/docs/text-embeddings-inference/en/supported_models).
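A deployed TEI DLC is called the same way, via TEI's REST API. A minimal sketch of its `/embed` endpoint (again, the URL is a placeholder for your own service):

```python
import json
import urllib.request

# Placeholder: replace with the URL of your deployed TEI service.
TEI_URL = "http://localhost:8080/embed"


def build_embed_request(texts: list[str]) -> bytes:
    # TEI's /embed endpoint accepts {"inputs": [...]} and returns one
    # embedding vector (a list of floats) per input text.
    return json.dumps({"inputs": texts}).encode("utf-8")


def embed(texts: list[str]) -> list[list[float]]:
    req = urllib.request.Request(
        TEI_URL,
        data=build_embed_request(texts),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```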

Additionally, these DLCs come with full support for Google Cloud, meaning that deploying models from Google Cloud Storage (GCS) is also straightforward and requires no extra configuration.

### Built-in performance

Hugging Face DLCs feature built-in performance optimizations for PyTorch to train models faster. The DLCs also give you the flexibility to choose a training infrastructure that best aligns with the price/performance ratio for your workload.

The Hugging Face Training DLCs are fully integrated with Google Cloud, enabling the use of [the latest generation of instances available on Google Cloud Compute Engine](https://cloud.google.com/products/compute?hl=en).

Hugging Face Inference DLCs provide you with production-ready endpoints that scale quickly with your Google Cloud environment, built-in monitoring, and a ton of enterprise features.

---

Read more in the official documentation for [Vertex AI](https://cloud.google.com/vertex-ai/docs) and [GKE](https://cloud.google.com/kubernetes-engine/docs).

## Resources, Documentation & Examples 📄

Learn how to use Hugging Face in Google Cloud by reading our blog posts, documentation and examples below.

### Blog posts

- [Hugging Face and Google partner for open AI collaboration](https://huggingface.co/blog/gcp-partnership)
- [Google Cloud TPUs made available to Hugging Face users](https://huggingface.co/blog/tpu-inference-endpoints-spaces)
- [Making thousands of open LLMs bloom in the Vertex AI Model Garden](https://huggingface.co/blog/google-cloud-model-garden)
- [Deploy Meta Llama 3.1 405B on Google Cloud Vertex AI](https://huggingface.co/blog/llama31-on-vertex-ai)

### Documentation

- [Google Cloud Hugging Face Deep Learning Containers](https://cloud.google.com/deep-learning-containers/docs/choosing-container#hugging-face)
- [Google Cloud public Artifact Registry for DLCs](https://console.cloud.google.com/artifacts/docker/deeplearning-platform-release/us/gcr.io)
- [Serve Gemma open models using GPUs on GKE with Hugging Face TGI](https://cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-tgi)
- [Generative AI on Vertex - Use Hugging Face text generation models](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-hugging-face-models)

### Examples

- [All examples](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples)

#### GKE

- Training

- [Full SFT fine-tuning of Gemma 2B in a multi-GPU instance with TRL on GKE](https://github.com/huggingface/Google-Cloud-Containers/blob/main/examples/gke/trl-full-fine-tuning)
- [LoRA SFT fine-tuning of Mistral 7B v0.3 in a single GPU instance with TRL on GKE](https://github.com/huggingface/Google-Cloud-Containers/blob/main/examples/gke/trl-lora-fine-tuning)

- Inference

- [Deploying Llama3 8B with Text Generation Inference (TGI) on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-deployment)
- [Deploying Qwen2 7B Instruct with Text Generation Inference (TGI) from a GCS Bucket on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment)
- [Deploying Snowflake's Arctic Embed (M) with Text Embeddings Inference (TEI) on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment)
- [Deploying BGE Base v1.5 (English) with Text Embeddings Inference (TEI) from a GCS Bucket on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment)

#### Vertex AI

- Training

- [Full SFT fine-tuning of Mistral 7B v0.3 in a multi-GPU instance with TRL on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/blob/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai)
- [LoRA SFT fine-tuning of Mistral 7B v0.3 in a single GPU instance with TRL on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/blob/main/examples/vertex-ai/notebooks/trl-lora-sft-fine-tuning-on-vertex-ai)

- Inference

- [Deploying a BERT model for a text classification task using huggingface-inference-toolkit for a Custom Prediction Routine (CPR) on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai)
- [Deploying an embedding model with Text Embeddings Inference (TEI) on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai)
- [Deploying Gemma 7B Instruct with Text Generation Inference (TGI) on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai)
- [Deploying Gemma 7B Instruct with Text Generation Inference (TGI) from a GCS Bucket on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai)
- [Deploying FLUX with Hugging Face PyTorch DLCs for Inference on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai)
