From 339854a9a4673c7d9cd718caa0847b812f0e9def Mon Sep 17 00:00:00 2001
From: Xueyao Zhang
Date: Fri, 8 Dec 2023 00:28:41 +0800
Subject: [PATCH] Update the 'Frameworks using Accelerate' section to include
 Amphion (#2225)

* Extend the frameworks using accelerate to include Amphion

* Update integration examples to include Amphion

* fix some typos
---
 README.md                                | 1 +
 docs/source/usage_guides/training_zoo.md | 7 ++++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 11f9178b576..46583675dd9 100644
--- a/README.md
+++ b/README.md
@@ -220,6 +220,7 @@ You shouldn't use 🤗 Accelerate if you don't want to write a training loop you
 
 If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of 🤗 Accelerate are listed below:
 
+* [Amphion](https://github.com/open-mmlab/Amphion) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development.
 * [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76).
 * [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic.
 * [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms.
diff --git a/docs/source/usage_guides/training_zoo.md b/docs/source/usage_guides/training_zoo.md
index 2a7f51d2873..ab7cc072d12 100644
--- a/docs/source/usage_guides/training_zoo.md
+++ b/docs/source/usage_guides/training_zoo.md
@@ -72,6 +72,11 @@ These are tutorials from libraries that integrate with 🤗 Accelerate:
 
 > Don't find your integration here? Make a PR to include it!
 
+### Amphion
+- [Training Text-to-Speech Models with Amphion](https://github.com/open-mmlab/Amphion/blob/main/egs/tts/README.md)
+- [Training Singing Voice Conversion Models with Amphion](https://github.com/open-mmlab/Amphion/blob/main/egs/svc/README.md)
+- [Training Vocoders with Amphion](https://github.com/open-mmlab/Amphion/blob/main/egs/vocoder/README.md)
+
 ### Catalyst
 
 - [Distributed training tutorial with Catalyst](https://catalyst-team.github.io/catalyst/tutorials/ddp.html)
@@ -172,4 +177,4 @@ Below contains a non-exhaustive list of papers utilizing 🤗 Accelerate.
 * Zhiruo Wang, Shuyan Zhou, Daniel Fried, Graham Neubig: “Execution-Based Evaluation for Open-Domain Code Generation”, 2022; [arXiv:2212.10481](http://arxiv.org/abs/2212.10481).
 * Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang: “Expeditious Saliency-guided Mix-up through Random Gradient Thresholding”, 2022; [arXiv:2212.04875](http://arxiv.org/abs/2212.04875).
 * Jun Hao Liew, Hanshu Yan, Daquan Zhou, Jiashi Feng: “MagicMix: Semantic Mixing with Diffusion Models”, 2022; [arXiv:2210.16056](http://arxiv.org/abs/2210.16056).
-* Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao: “LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners”, 2021; [arXiv:2110.06274](http://arxiv.org/abs/2110.06274).
\ No newline at end of file
+* Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao: “LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners”, 2021; [arXiv:2110.06274](http://arxiv.org/abs/2110.06274).