zentorch is compatible with PyTorch v2.0 and later base versions. This release provides zentorch for PyTorch v2.4.0.
This release of the plug-in supports:
- The FP32, BF16, and INT4 (Weight Only Quantization, WOQ) datatypes
- A new zentorch.llm.optimize() method for Hugging Face generative LLM models
- A new zentorch.load_woq_model() method for loading Weight Only Quantized models generated with the AMD Quark tool. This method only supports models quantized and exported with per-channel quantization using the AWQ algorithm.
- Improved graph optimizations, an enhanced SDPA (Scaled Dot Product Attention) operator, and more
- Automatic Mixed Precision (AMP) between FP32 and BF16, improving performance with minimal impact on accuracy
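
The features above can be combined in a typical inference flow. The following is a minimal sketch, assuming zentorch and PyTorch v2.4.0 are installed; the model and input shapes are illustrative placeholders, not part of this release's documentation.

```python
# Minimal usage sketch (assumes `pip install zentorch` with PyTorch v2.4.0).
import torch
import zentorch  # importing zentorch registers the "zentorch" torch.compile backend

# Illustrative model; any eager-mode PyTorch module works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval()

# Compile with the zentorch backend to apply its graph optimizations.
compiled = torch.compile(model, backend="zentorch")

# Automatic Mixed Precision between FP32 and BF16 on CPU.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = compiled(torch.randn(8, 64))
```

For Hugging Face generative LLM models, the model would additionally be passed through zentorch.llm.optimize() before compilation, and WOQ checkpoints produced by AMD Quark would be loaded via zentorch.load_woq_model().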