
Releases: intel/intel-extension-for-pytorch

v1.2.0

25 Feb 14:25

Intel Extension for PyTorch 1.2.0 Release Notes

What's New

PyTorch 1.7.0 is now supported by Intel Extension for PyTorch.

  • We rebased Intel Extension for PyTorch from PyTorch 1.5.0-rc3 to the official PyTorch 1.7.0 release, which brings performance improvements with the new PyTorch 1.7 support.
  • Device name was changed from DPCPP to XPU.
    We changed the device name from DPCPP to XPU to align with future Intel GPU products for heterogeneous computation. A minimal usage sketch follows this list.
  • Enabled the launcher for end users.
    We enabled the launch script, which helps users launch programs for training and inference and automatically sets up the strategy for multi-threading, multi-instance execution, and the memory allocator. Please refer to the launch script comments for more details.
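
As a minimal illustration of the XPU device rename, the sketch below moves a model and its input to the extension device for inference. The import name intel_pytorch_extension and the ipex.DEVICE constant are assumptions of this sketch, not taken from these notes; the launcher itself is a command-line script, so consult its built-in help for the exact invocation.

```python
import torch
import intel_pytorch_extension as ipex  # import name is an assumption, not stated in these notes

# The extension device is now named "XPU" (previously "DPCPP").
device = ipex.DEVICE  # assumed to resolve to the extension ("xpu") device string

model = torch.nn.Linear(128, 64).to(device)
data = torch.rand(32, 128).to(device)

with torch.no_grad():
    output = model(data)
```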

Performance Improvement

  • This upgrade provides better INT8 optimization with a refined auto mixed-precision API.
  • More operators are optimized for INT8 inference and BF16 training of key workloads such as MaskRCNN, SSD-ResNet34, DLRM, and RNNT.

Others

  • Bug fixes
    • This upgrade fixes the issue that saving a model trained with Intel Extension for PyTorch caused errors.
    • This upgrade fixes the issue that Intel Extension for PyTorch was slower than PyTorch proper for Tacotron2.
  • New custom operators
    This upgrade adds several custom operators: ROIAlign, RNN, FrozenBatchNorm, nms.
  • Optimized operators/fusion
    This upgrade optimizes several operators (tanh, log_softmax, upsample, embedding_bag) and enables INT8 linear fusion.
  • Performance
    The release has daily automated testing for the supported models: ResNet50, ResNext101, Huggingface Bert, DLRM, Resnext3d, MaskRCNN, SSD-ResNet34. With the extension imported, it can bring up to 2x INT8 over FP32 inference performance improvement on 3rd Gen Intel Xeon Scalable processors (formerly codenamed Cooper Lake).

Known issues

Multi-node training still encounters hang issues after several iterations. The fix will be included in the next official release.

v1.1.0

12 Nov 07:24

What's New

  • Added optimizations for training with the FP32 and BF16 data types (a usage sketch follows this list). The optimized FP32/BF16 backward operators include:

    • Conv2d
    • Relu
    • Gelu
    • Linear
    • Pooling
    • BatchNorm
    • LayerNorm
    • Cat
    • Softmax
    • Sigmoid
    • Split
    • Embedding_bag
    • Interaction
    • MLP
  • More fusion patterns are supported and validated in this release; see the table below:

    Fusion Pattern      Release
    Conv + Sum          v1.0
    Conv + BN           v1.0
    Conv + Relu         v1.0
    Linear + Relu       v1.0
    Conv + Eltwise      v1.1
    Linear + Gelu       v1.1
  • Added Docker support

  • [Alpha] Multi-node training with oneCCL support.

  • [Alpha] INT8 inference optimization.
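
As a rough sketch of how the optimized training operators and the fusion patterns above are exercised, the example below runs one FP32 training step on the extension device and then traces the model for inference. The import name intel_pytorch_extension, the ipex.DEVICE constant, and fusion being applied under TorchScript tracing are assumptions of this sketch; in this release the device is still named DPCPP (renamed to XPU in v1.2.0).

```python
import torch
import torch.nn as nn
import intel_pytorch_extension as ipex  # import name is an assumption, not stated in these notes

device = ipex.DEVICE  # assumed extension device constant (named "DPCPP" in this release)

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# One FP32 training step; the backward pass exercises optimized operators
# listed above (Conv2d, BatchNorm, Relu, Linear, ...).
x = torch.rand(8, 3, 8, 8).to(device)
y = torch.randint(0, 10, (8,)).to(device)

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

# For inference, tracing with TorchScript lets graph-level fusions such as
# Conv + Relu apply (fusion under JIT is an assumption of this sketch).
model.eval()
with torch.no_grad():
    traced = torch.jit.trace(model, x)
    _ = traced(x)
```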

Performance

  • The release has daily automated testing for the supported models: ResNet50, ResNext101, Huggingface Bert, DLRM, Resnext3d, Transformer. With the extension imported, it can bring 1.2x to 1.7x BF16 over FP32 training performance improvements on 3rd Gen Intel Xeon Scalable processors (formerly codenamed Cooper Lake).

Known issue

  • Some workloads may crash after several iterations when running with the extension and jemalloc enabled.

v1.0.2

10 Aug 03:41
7d595e5
  • Rebase torch CCL patch to PyTorch 1.5.0-rc3

v1.0.1-alpha Release

27 Jul 05:04
  • Statically link the oneDNN library
  • Check the AVX512 build option
  • Fix the issue that enable_auto_optimization could not be invoked normally

v1.0.0-alpha Release

03 Jul 12:27
540c9c5

What's New

  • Auto Operator Optimization

    Intel Extension for PyTorch automatically optimizes PyTorch operators when its Python package is imported. This can significantly improve computation performance when the input tensors and the model are converted to the extension device.
  • Auto Mixed Precision

    Currently, the extension supports bfloat16 and streamlines the work needed to enable a bfloat16 model. The feature is controlled by enable_auto_mix_precision. When enabled, the extension runs supported operators in bfloat16 automatically to accelerate computation.
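
As a minimal sketch of the two features above: importing the extension package applies the operator optimizations, and enable_auto_mix_precision (the control named above) opts in to automatic bfloat16 execution. The import name intel_pytorch_extension, the ipex.DEVICE constant, and the exact signature of enable_auto_mix_precision are assumptions here.

```python
import torch
import intel_pytorch_extension as ipex  # import name is an assumption; importing applies the operator optimizations

# Opt in to automatic bfloat16 execution; the exact signature is an assumption.
ipex.enable_auto_mix_precision(mixed_dtype=torch.bfloat16)

device = ipex.DEVICE  # assumed extension device constant

model = torch.nn.Conv2d(3, 64, kernel_size=3).to(device)
x = torch.rand(1, 3, 224, 224).to(device)

with torch.no_grad():
    y = model(x)  # supported operators run in bfloat16 automatically
```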

Performance Result

We collected performance data for several models on an Intel Cooper Lake platform with 1 socket and 28 cores. Cooper Lake introduced the AVX512 BF16 instructions, which can improve bfloat16 computation significantly. The details are as follows (the numbers are speedup ratios; the baseline is upstream PyTorch).

Model        Imperative - Operator Injection    Imperative - Mixed Precision    JIT - Operator Injection    JIT - Mixed Precision
RN50         2.68                               5.01                            5.14                        9.66
ResNet3D     3.00                               4.67                            5.19                        8.39
BERT-LARGE   0.99                               1.40                            N/A                         N/A

We also measured the performance of ResNeXt101, Transformer-FB, DLRM, and YOLOv3 with the extension. We observed that the performance could be significantly improved by the extension as expected.

Known Issues

  • #10 All data types have not been registered for DPCPP
  • #37 MaxPool can't get nan result when input's value is nan

NOTE
The extension supports PyTorch v1.5.0-rc3. Support for other PyTorch versions is work in progress.