
WeNet

中文版 (Chinese version)


Discussions | Docs | Papers | Runtime (x86) | Runtime (Android) | Pretrained Models

We share neural Net together.

The main motivation of WeNet is to close the gap between research and production end-to-end (E2E) speech recognition models, to reduce the effort of productionizing E2E models, and to explore better E2E models for production.

Highlights

  • Production first and production ready: the core design principle of WeNet. WeNet provides full-stack solutions for speech recognition.

    • Unified solution for streaming and non-streaming ASR: the U2 framework lets you develop, train, and deploy only once.
    • Runtime solution: built-in server (x86) and on-device (Android) runtime solutions.
    • Model exporting solution: built-in solution to export models to LibTorch/ONNX for inference (see the sketch after this list).
    • LM solution: built-in production-level language model (LM) solution.
    • Other production solutions: built-in contextual biasing, timestamp, endpoint, and n-best solutions.
  • Accurate: WeNet achieves SOTA results on many public speech datasets.

  • Lightweight: WeNet is easy to install, easy to use, well designed, and well documented.
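
To make the model exporting bullet above concrete, here is a minimal, self-contained sketch of the general idea using plain PyTorch APIs (torch.jit.script and torch.onnx.export). The toy model, file names, and tensor shapes are placeholders for illustration only, not WeNet's actual export tooling; the repository ships its own export scripts, so consult the docs for the real entry points.

# Illustrative sketch only: a toy module stands in for a trained WeNet checkpoint.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Placeholder for a trained E2E model (feat_dim and vocab_size are made up)."""
    def __init__(self, feat_dim: int = 80, vocab_size: int = 5000):
        super().__init__()
        self.proj = nn.Linear(feat_dim, vocab_size)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, feat_dim) -> per-frame token logits
        return self.proj(feats)

model = ToyEncoder().eval()

# TorchScript export, the format consumed by LibTorch-based runtimes.
torch.jit.script(model).save("final.zip")

# ONNX export for ONNX-based inference backends.
dummy = torch.randn(1, 100, 80)  # (batch, frames, feat_dim)
torch.onnx.export(model, dummy, "final.onnx",
                  input_names=["feats"], output_names=["logits"],
                  opset_version=13)

The runtimes described below consume an exported model of this kind at inference time.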

Performance Benchmark

Please see examples/$dataset/s0/README.md for benchmarks on different speech datasets.

Installation

  • Clone the repo and set up the Python environment (a quick sanity check of the setup is sketched after these installation steps):
git clone https://github.com/wenet-e2e/wenet.git
conda create -n wenet python=3.8
conda activate wenet
pip install -r requirements.txt
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
  • Optionally, if you want to use the x86 runtime or the language model (LM), build the runtime as follows; otherwise, you can skip this step.
# runtime build requires cmake 3.14 or above
cd runtime/server/x86
mkdir build && cd build && cmake .. && cmake --build .
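
Once the steps above finish, a quick sanity check of the Python environment can catch installation problems early. The snippet below uses only standard PyTorch and torchaudio calls; it is an illustrative check, not a WeNet script, and the feature settings are just common ASR defaults.

# Environment sanity check (illustrative; not part of WeNet itself).
import torch
import torchaudio

print("torch:", torch.__version__)
print("torchaudio:", torchaudio.__version__)
print("CUDA available:", torch.cuda.is_available())

# Compute 80-dim log-Mel filterbank features from one second of random audio,
# the kind of front-end features E2E ASR models typically consume.
waveform = torch.randn(1, 16000)  # 1 second of 16 kHz audio, single channel
fbank = torchaudio.compliance.kaldi.fbank(waveform, num_mel_bins=80,
                                          sample_frequency=16000.0)
print("fbank shape:", fbank.shape)  # approximately (num_frames, 80)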

Discussion & Communication

Please visit Discussions for further discussion.

For Chinese users, you can also scan the QR code on the left to follow the official WeNet account. We created a WeChat group for better discussion and quicker responses. Please scan the personal QR code on the right, and that person will invite you to the chat group.

If you cannot access the QR images, please view them on Gitee.

Or you can discuss directly on GitHub Issues.

Contributors

Acknowledgments

  1. We borrowed a lot of code from ESPnet for transformer-based modeling.
  2. We borrowed a lot of code from Kaldi for WFST-based decoding for LM integration.
  3. We referred to EESEN for building the TLG-based graph for LM integration.
  4. We referred to OpenTransformer for Python batch inference of E2E models.

Citations

@inproceedings{yao2021wenet,
  title={WeNet: Production oriented Streaming and Non-streaming End-to-End Speech Recognition Toolkit},
  author={Yao, Zhuoyuan and Wu, Di and Wang, Xiong and Zhang, Binbin and Yu, Fan and Yang, Chao and Peng, Zhendong and Chen, Xiaoyu and Xie, Lei and Lei, Xin},
  booktitle={Proc. Interspeech},
  year={2021},
  address={Brno, Czech Republic},
  organization={IEEE}
}

@article{zhang2020unified,
  title={Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recognition},
  author={Zhang, Binbin and Wu, Di and Yao, Zhuoyuan and Wang, Xiong and Yu, Fan and Yang, Chao and Guo, Liyong and Hu, Yaguang and Xie, Lei and Lei, Xin},
  journal={arXiv preprint arXiv:2012.05481},
  year={2020}
}

@article{wu2021u2++,
  title={U2++: Unified Two-pass Bidirectional End-to-end Model for Speech Recognition},
  author={Wu, Di and Zhang, Binbin and Yang, Chao and Peng, Zhendong and Xia, Wenjing and Chen, Xiaoyu and Lei, Xin},
  journal={arXiv preprint arXiv:2106.05642},
  year={2021}
}
