Releases: intel/intel-extension-for-pytorch
Intel® Extension for PyTorch* v1.10.200+gpu Release Notes
Intel® Extension for PyTorch* v1.10.200+gpu extends PyTorch* 1.10 with up-to-date features and optimizations on XPU for an extra performance boost on Intel graphics cards. XPU is a user-visible device that is a counterpart of the well-known CPU and CUDA devices in the PyTorch* community. XPU represents Intel-specific kernel and graph optimizations for various "concrete" devices. The XPU runtime chooses the actual device when executing AI workloads on the XPU device; the default selected device is the Intel GPU. XPU kernels from Intel® Extension for PyTorch* are written in DPC++, which supports the SYCL language as well as a number of DPC++ extensions.
Highlights
This release introduces specific XPU solution optimizations on Intel® Data Center GPU Flex Series 170. Optimized operators and kernels are implemented and registered through the PyTorch* dispatching mechanism for the XPU device. These operators and kernels are accelerated by the native vectorization and matrix-calculation features of Intel GPU hardware. In graph mode, additional operator fusions are supported to reduce operator/kernel invocation overhead and thus increase performance.
This release provides the following features:
- Auto Mixed Precision (AMP)
  - support of AMP with BFloat16 and Float16 optimization of GPU operators (see the usage sketch after this list)
- Channels Last
  - support of channels_last (NHWC) memory format for most key GPU operators
- DPC++ Extension
  - mechanism to create PyTorch* operators with custom DPC++ kernels running on the XPU device
- Optimized Fusion
  - support of SGD/AdamW fusion for both FP32 and BF16 precision
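As an illustration of the AMP feature above, a minimal BFloat16 inference sketch on the XPU device is shown below. The toy model, the input shape, and the use of `torch.xpu.amp.autocast` as exposed by the XPU build are illustrative assumptions, not taken from these notes.

```python
import torch
import intel_extension_for_pytorch as ipex  # the XPU build registers the "xpu" device

# Toy FP32 model and input; the model and shapes are illustrative only.
model = torch.nn.Conv2d(3, 64, kernel_size=3).eval().to("xpu")
model = ipex.optimize(model, dtype=torch.bfloat16)

data = torch.rand(1, 3, 224, 224).to("xpu")
with torch.no_grad(), torch.xpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
    output = model(data)
```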
This release supports the following fusion patterns in PyTorch* JIT mode (a usage sketch follows the list):
- Conv2D + ReLU
- Conv2D + Sum
- Conv2D + Sum + ReLU
- Pad + Conv2d
- Conv2D + SiLu
- Permute + Contiguous
- Conv3D + ReLU
- Conv3D + Sum
- Conv3D + Sum + ReLU
- Linear + ReLU
- Linear + Sigmoid
- Linear + Div(scalar)
- Linear + GeLu
- Linear + GeLu_
- T + Addmm
- T + Addmm + ReLu
- T + Addmm + Sigmoid
- T + Addmm + Dropout
- T + Matmul
- T + Matmul + Add
- T + Matmul + Add + GeLu
- T + Matmul + Add + Dropout
- Transpose + Matmul
- Transpose + Matmul + Div
- Transpose + Matmul + Div + Add
- MatMul + Add
- MatMul + Div
- Dequantize + PixelShuffle
- Dequantize + PixelShuffle + Quantize
- Mul + Add
- Add + ReLU
- Conv2D + Leaky_relu
- Conv2D + Leaky_relu_
- Conv2D + Sigmoid
- Conv2D + Dequantize
- Softplus + Tanh
- Softplus + Tanh + Mul
- Conv2D + Dequantize + Softplus + Tanh + Mul
- Conv2D + Dequantize + Softplus + Tanh + Mul + Quantize
- Conv2D + Dequantize + Softplus + Tanh + Mul + Quantize + Add
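These fusion patterns take effect when a model is executed through TorchScript. A minimal sketch of enabling JIT mode on the XPU device is shown below; the toy Conv2D + ReLU model and shapes are illustrative assumptions, not taken from these notes.

```python
import torch
import intel_extension_for_pytorch as ipex  # the XPU build registers the "xpu" device

# Toy model containing a fusible Conv2D + ReLU pattern; illustrative only.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, kernel_size=3),
    torch.nn.ReLU(),
).eval().to("xpu")

data = torch.rand(1, 3, 224, 224).to("xpu")
with torch.no_grad():
    traced = torch.jit.trace(model, data)
    traced = torch.jit.freeze(traced)  # fusion passes run on the frozen TorchScript graph
    out = traced(data)
```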
Known Issues
- `[CRITICAL ERROR] Kernel 'XXX' removed due to usage of FP64 instructions unsupported by the targeted hardware`

  FP64 is not natively supported by the Intel® Data Center GPU Flex Series platform. If you run any AI workload on that platform and receive this error message, it means a kernel requiring FP64 instructions is removed and not executed, hence the accuracy of the whole workload is wrong.
- `ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev` caused by `_GLIBCXX_USE_CXX11_ABI`

  DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`; Intel® Extension for PyTorch* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This undefined-symbol issue appears when PyTorch* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Update the PyTorch* CMake file to set `_GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch* with a compiler that supports `_GLIBCXX_USE_CXX11_ABI`. We recommend using GCC 9.4.0 on Ubuntu 20.04. A quick way to check the ABI setting of an installed PyTorch* build is sketched after this list.
- Can't find the oneMKL library when building Intel® Extension for PyTorch* without oneMKL

  ```
  /usr/bin/ld: cannot find -lmkl_sycl
  /usr/bin/ld: cannot find -lmkl_intel_ilp64
  /usr/bin/ld: cannot find -lmkl_core
  /usr/bin/ld: cannot find -lmkl_tbb_thread
  dpcpp: error: linker command failed with exit code 1 (use -v to see invocation)
  ```

  When PyTorch* is built with the oneMKL library and Intel® Extension for PyTorch* is built without it, this linker issue may occur. Resolve it by setting:

  ```
  export USE_ONEMKL=OFF
  export MKL_DPCPP_ROOT=${PATH_To_Your_oneMKL}/__release_lnx/mkl
  ```

  Then clean build Intel® Extension for PyTorch*.
- `undefined symbol: mkl_lapack_dspevd. Intel MKL FATAL ERROR: cannot load libmkl_vml_avx512.so.2 or libmkl_vml_def.so.2`

  This issue may occur when Intel® Extension for PyTorch* is built with the oneMKL library and PyTorch* is not built with any MKL library. The oneMKL kernel may incorrectly dispatch to the CPU backend and trigger this issue. Resolve it by installing the MKL library from conda:

  ```
  conda install mkl
  conda install mkl-include
  ```

  then clean build PyTorch*.
- `OSError: libmkl_intel_lp64.so.1: cannot open shared object file: No such file or directory`

  The wrong MKL library is used when multiple MKL libraries exist on the system. Preload oneMKL by:

  ```
  export LD_PRELOAD=${MKL_DPCPP_ROOT}/lib/intel64/libmkl_intel_lp64.so.1:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_intel_ilp64.so.1:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_sequential.so.1:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_core.so.1:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_sycl.so.1
  ```

  If you continue seeing similar issues for other shared object files, add the corresponding files under `${MKL_DPCPP_ROOT}/lib/intel64/` to `LD_PRELOAD`. Note that the suffix of the libraries may change (e.g. from .1 to .2) if more than one oneMKL library is installed on the system.
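As a quick aid for the `_GLIBCXX_USE_CXX11_ABI` issue above, the ABI setting of an installed PyTorch* build can be checked from Python. This check is a suggestion, not part of the official release notes.

```python
import torch

# If this prints False, the installed PyTorch* was built with _GLIBCXX_USE_CXX11_ABI=0
# and will hit the undefined-symbol ImportError described above.
print(torch.compiled_with_cxx11_abi())
```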
Intel® Extension for PyTorch* v1.12.300-cpu Release Notes
Highlights
- Optimize BF16 MHA fusion to avoid transpose overhead to boost BERT-* BF16 performance #992
- Remove 64bytes alignment constraint for FP32 and BF16 AddLayerNorm fusion #992
- Fix INT8 RetinaNet accuracy issue #1032
- Fix the `Cat.out` issue that does not update the `out` tensor (#1053) #1074
Full Changelog: v1.12.100...v1.12.300
Intel® Extension for PyTorch* v1.12.100-cpu Release Notes
This is a patch release to fix the AVX2 issue that blocks running on non-AVX512 platforms.
Intel® Extension for PyTorch* v1.12.0-cpu Release Notes
We are excited to bring you the release of Intel® Extension for PyTorch* 1.12.0-cpu, tightly following the PyTorch 1.12 release. In this release, we matured automatic INT8 quantization and made it a stable feature. We stabilized the runtime extension and brought a MultiStreamModule feature to further boost throughput in offline inference scenarios. We also brought various enhancements in operators and graph that benefit the performance of a broad set of workloads.
- Automatic INT8 quantization became a stable feature, backed by a well-tuned default quantization recipe and supporting both static and dynamic quantization and a wide range of calibration algorithms.
- Runtime Extension, featuring MultiStreamModule, became a stable feature and can further enhance throughput in offline inference scenarios.
- More optimizations in graph and operations to improve the performance of a broad set of models, including but not limited to wav2vec, T5, and ALBERT.
- A pre-built experimental binary with the oneDNN Graph Compiler turned on delivers additional performance gain for BERT, ALBERT, and RoBERTa in INT8 inference.
Highlights
- Matured the automatic INT8 quantization feature, backed by a well-tuned default quantization recipe. We simplified the user experience and provided a wide range of calibration algorithms such as Histogram, MinMax, MovingAverageMinMax, etc. Meanwhile, we polished static quantization with better flexibility and enabled dynamic quantization as well. Compared to the previous version, the brief changes are as follows. Refer to the tutorial page for more details.
v1.11.0-cpu:

```python
import intel_extension_for_pytorch as ipex
# Calibrate the model
qconfig = ipex.quantization.QuantConf(qscheme=torch.per_tensor_affine)
for data in calibration_data_set:
    with ipex.quantization.calibrate(qconfig):
        model_to_be_calibrated(x)
qconfig.save('qconfig.json')
# Convert the model to jit model
conf = ipex.quantization.QuantConf('qconfig.json')
with torch.no_grad():
    traced_model = ipex.quantization.convert(model, conf, example_input)
# Do inference
y = traced_model(x)
```

v1.12.0-cpu:

```python
import intel_extension_for_pytorch as ipex
# Calibrate the model
qconfig = ipex.quantization.default_static_qconfig # Histogram calibration algorithm and
calibrated_model = ipex.quantization.prepare(model_to_be_calibrated, qconfig, example_inputs=example_inputs)
for data in calibration_data_set:
    calibrated_model(data)
# Convert the model to jit model
quantized_model = ipex.quantization.convert(calibrated_model)
with torch.no_grad():
    traced_model = torch.jit.trace(quantized_model, example_input)
    traced_model = torch.jit.freeze(traced_model)
# Do inference
y = traced_model(x)
```
- Runtime Extension, featuring MultiStreamModule, became a stable feature. In this release, we enhanced the heuristic rule to further improve throughput in offline inference scenarios. Meanwhile, we also provide `ipex.cpu.runtime.MultiStreamModuleHint` to customize how to split the input into streams and concatenate the output for each stream.
v1.11.0-cpu:

```python
import intel_extension_for_pytorch as ipex
# Create CPU pool
cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)
# Create multi-stream model
multi_Stream_model = ipex.cpu.runtime.MultiStreamModule(model, num_streams=2, cpu_pool=cpu_pool)
```

v1.12.0-cpu:

```python
import intel_extension_for_pytorch as ipex
# Create CPU pool
cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)
# Optional
multi_stream_input_hint = ipex.cpu.runtime.MultiStreamModuleHint(0)
multi_stream_output_hint = ipex.cpu.runtime.MultiStreamModuleHint(0)
# Create multi-stream model
multi_Stream_model = ipex.cpu.runtime.MultiStreamModule(model, num_streams=2, cpu_pool=cpu_pool,
                                                        multi_stream_input_hint,   # optional
                                                        multi_stream_output_hint)  # optional
```
- Polished the `ipex.optimize` API to accept input shape information, from which it deduces the optimal memory layout for better kernel efficiency.
v1.11.0-cpu:

```python
import intel_extension_for_pytorch as ipex
model = ...
model.load_state_dict(torch.load(PATH))
model.eval()
optimized_model = ipex.optimize(model, dtype=torch.bfloat16)
```

v1.12.0-cpu:

```python
import intel_extension_for_pytorch as ipex
model = ...
model.load_state_dict(torch.load(PATH))
model.eval()
optimized_model = ipex.optimize(model, dtype=torch.bfloat16, sample_input=input)
```
- Provided a pre-built experimental binary with the oneDNN Graph Compiler turned on, which delivers additional performance gain for BERT, ALBERT, and RoBERTa in INT8 inference.
- Provided more optimizations in graph and operations
- Fuse Adam to improve training performance #822
- Enable Normalization operators to support channels-last 3D #642
- Support Deconv3D to serve most models and implement most fusions like Conv
- Enable LSTM to support static and dynamic quantization #692
- Enable Linear to support dynamic quantization #787
- Fusions (see the TorchScript sketch after this list)
  - Fuse `Add` + `Swish` to accelerate the FSI Riskful model #551
  - Fuse `Conv` + `LeakyReLU` #589
  - Fuse `BMM` + `Add` #407
  - Fuse `Concat` + `BN` + `ReLU` #647
  - Optimize `Convolution1D` to support channels last memory layout and fuse `GeLU` as its post operation. #657
  - Fuse `Einsum` + `Add` to boost Alphafold2 #674
  - Fuse `Linear` + `Tanh` #711
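The operator fusions listed above take effect on the TorchScript graph. A minimal sketch of exercising them through trace + freeze is shown below; the toy `Linear` + `Tanh` model is an illustrative assumption matching fusion #711, not taken from these notes.

```python
import torch
import intel_extension_for_pytorch as ipex

# Toy Linear + Tanh model; illustrative only.
class LinearTanh(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(64, 64)

    def forward(self, x):
        return torch.tanh(self.linear(x))

model = LinearTanh().eval()
model = ipex.optimize(model, dtype=torch.float32)

x = torch.rand(8, 64)
with torch.no_grad():
    traced = torch.jit.trace(model, x)
    traced = torch.jit.freeze(traced)  # graph fusions are applied on the frozen graph
    y = traced(x)
```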
Known Issues
- `RuntimeError: Overflow when unpacking long` when a tensor's min/max value exceeds the int range while performing INT8 calibration. Please customize the QConfig to use the min-max calibration method (see the sketch after this list).
- Calibrating with quantize_per_tensor, when benchmarking with 1 OpenMP* thread, results might be incorrect with large tensors (find more detailed info here). Editing your code following the pseudocode below can work around this issue, if you do need to explicitly set OMP_NUM_THREADS=1 for benchmarking. However, there could be a performance regression if the oneDNN graph compiler prototype feature is utilized.
Workaround pseudocode:
```python
# perform convert/trace/freeze with omp_num_threads > 1 (N)
torch.set_num_threads(N)
prepared_model = prepare(model, input)
converted_model = convert(prepared_model)
traced_model = torch.jit.trace(converted_model, input)
freezed_model = torch.jit.freeze(traced_model)
# run freezed model to apply optimization pass
freezed_model(input)

# benchmarking with omp_num_threads = 1
torch.set_num_threads(1)
run_benchmark(freezed_model, input)
```
- Low performance with INT8 support for dynamic shapes
The support for dynamic shapes in Intel® Extension for PyTorch* INT8 integration is still work in progress. When the input shapes are dynamic, for example inputs of variable image sizes in an object detection task or of variable sequence lengths in NLP tasks, the Intel® Extension for PyTorch* INT8 path may slow down the model inference. In this case, use stock PyTorch INT8 functionality.
Note: When using the Runtime Extension feature, if the batch size cannot be evenly divided by the number of streams, the mini-batch sizes on the streams are not equal and scripts may run into this issue.
- BF16 AMP (auto-mixed-precision) runs abnormally with the extension on AVX2-only machines if the topology contains `Conv`, `Matmul`, `Linear`, and `BatchNormalization`
- Runtime extension of MultiStreamModule doesn't support DLRM inference, since the input of DLRM (EmbeddingBag specifically) can't simply be batch split.
- Runtime extension of MultiStreamModule has poor performance for RNNT inference compared with native throughput mode. Only part of the RNNT model (joint_net specifically) can be JIT traced into a graph. However, in one batch inference, `joint_net` is invoked multiple times, which increases the overhead of MultiStreamModule from input batch splitting, thread synchronization, and output concatenation.
- Incorrect Conv and Linear result if the number of OMP threads is changed at runtime
The oneDNN memory layout depends on the number of OMP threads, which requires the caller to detect changes in the number of OMP threads, while this release has not implemented it yet.
- Low throughput with DLRM FP32 Train
A 'Sparse Add' PR is pending on review. The issue will be fixed when the PR is merged.
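For the first known issue above (histogram calibration overflow), a min-max based QConfig can be passed to `ipex.quantization.prepare`. The observer arguments and the toy model below are illustrative assumptions, not taken from these notes.

```python
import torch
from torch.ao.quantization import MinMaxObserver, PerChannelMinMaxObserver, QConfig
import intel_extension_for_pytorch as ipex

# Hypothetical min-max QConfig; observer settings are assumptions for illustration.
qconfig = QConfig(
    activation=MinMaxObserver.with_args(qscheme=torch.per_tensor_affine, dtype=torch.quint8),
    weight=PerChannelMinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_channel_symmetric),
)

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
example_input = torch.rand(1, 3, 32, 32)

prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=example_input)
prepared_model(example_input)  # calibration pass with user data
quantized_model = ipex.quantization.convert(prepared_model)
with torch.no_grad():
    traced_model = torch.jit.trace(quantized_model, example_input)
    traced_model = torch.jit.freeze(traced_model)
```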
Intel® Extension for PyTorch* v1.11.200-cpu Release Notes
Highlights
- Enable more fused operators to accelerate particular models.
  - Fuse `Convolution` and `LeakyReLU` (#648)
  - Support `torch.einsum` and fuse it with `add` (#684)
  - Fuse `Linear` and `Tanh` (#685)
- In addition to the existing installation methods, this release provides Docker installation from DockerHub.
- Provide evaluation wheel packages that could boost performance for selected topologies on top of the oneDNN graph compiler prototype feature.
NOTE: This is still at the early development stage and not fully mature yet, but feel free to reach out through GitHub tickets if you have any suggestions.
Full Changelog: v1.11.0...v1.11.200
Intel® Extension for PyTorch* v1.11.0-cpu Release Notes
We are excited to announce the Intel® Extension for PyTorch* 1.11.0-cpu release, tightly following the PyTorch 1.11 release. In extension 1.11, we focused on continually improving the out-of-box user experience and performance. Highlights include:
- Support a single binary with runtime dynamic dispatch based on AVX2/AVX512 hardware ISA detection
- Support installing the binary from `pip` with the package name only (without the need of specifying the URL)
- Provide the C++ SDK installation to facilitate ease of C++ app development and deployment
- Add more optimizations, including graph fusions for speeding up Transformer-based models and CNN, etc
- Reduce the binary size for both the PIP wheel and C++ SDK (2X to 5X reduction from the previous version)
Highlights
- Combine the AVX2 and AVX512 binaries into a single binary and automatically dispatch to different implementations based on hardware ISA detection at runtime. A typical case is serving a data center that mixes AVX2-only and AVX512 platforms. Compared to the previous version, there is no need to deploy different binaries for different ISAs.
NOTE: The extension uses the oneDNN library as the backend. However, the BF16 and INT8 operator sets and features are different between AVX2 and AVX512. Please refer to oneDNN document for more details.
When one input is of type u8, and the other one is of type s8, oneDNN assumes that it is the user’s responsibility to choose the quantization parameters so that no overflow/saturation occurs. For instance, a user can use u7 [0, 127] instead of u8 for the unsigned input, or s7 [-64, 63] instead of the s8 one. It is worth mentioning that this is required only when the Intel AVX2 or Intel AVX512 Instruction Set is used.
- The extension wheel packages have been uploaded to pypi.org. The user can directly install the extension with `pip`/`pip3` without explicitly specifying the binary location URL.
| v1.10.100-cpu | v1.11.0-cpu |
| --- | --- |
| `python -m pip install intel_extension_for_pytorch==1.10.100 -f https://software.intel.com/ipex-whl-stable` | `pip install intel_extension_for_pytorch` |
- Compared to the previous version, this release provides a dedicated installation file for the C++ SDK. The installation file automatically detects the PyTorch C++ SDK location and installs the extension C++ SDK files to the PyTorch C++ SDK. The user does not need to manually add the extension C++ SDK source files and CMake to the PyTorch SDK. In addition to that, the installation file reduces the C++ SDK binary size from ~220MB to ~13.5MB.
| v1.10.100-cpu | v1.11.0-cpu |
| --- | --- |
| intel-ext-pt-cpu-libtorch-shared-with-deps-1.10.0+cpu.zip (220M) | libintel-ext-pt-1.11.0+cpu.run (13.7M) |
| intel-ext-pt-cpu-libtorch-cxx11-abi-shared-with-deps-1.10.0+cpu.zip (224M) | libintel-ext-pt-cxx11-abi-1.11.0+cpu.run (13.5M) |
- Add more optimizations, including more custom operators and fusions.
- Fuse the QKV linear operators as a single Linear to accelerate the Transformer*(BERT-*) encoder part - #278.
- Remove Multi-Head-Attention fusion limitations to support the 64bytes unaligned tensor shape. #531
- Fold the binary operator to Convolution and Linear operator to reduce computation. #432 #438 #602
- Replace outplace operators with their corresponding in-place versions to reduce the memory footprint. The extension currently supports the operators including `silu`, `sigmoid`, `tanh`, `hardsigmoid`, `hardswish`, `relu6`, `relu`, `selu`, and `softmax`. #524
- Fuse Concat + BN + ReLU as a single operator. #452
- Optimize Conv3D for both imperative and JIT by enabling NHWC and pre-packing the weight. #425
- Reduce the binary size. The C++ SDK is reduced from ~220MB to ~13.5MB while the wheel package is reduced from ~100MB to ~40MB.
- Update oneDNN and oneDNN graph to 2.5.2 and 0.4.2 respectively.
Known Issues
- BF16 AMP (auto-mixed-precision) runs abnormally with the extension on AVX2-only machines if the topology contains `Conv`, `Matmul`, `Linear`, and `BatchNormalization`
- Runtime extension does not support the scenario where the batch size is not divisible by the stream number
- Incorrect Conv and Linear result if the number of OMP threads is changed at runtime (see the sketch after this list)
The oneDNN memory layout depends on the number of OMP threads, which requires the caller to detect changes in the number of OMP threads, while this release has not implemented it yet.
- INT8 performance of EfficientNet and DenseNet with the extension is slower than that of FP32
- Low performance with INT8 support for dynamic shapes
The support for dynamic shapes in Intel® Extension for PyTorch* INT8 integration is still a work in progress. For use cases where the input shapes are dynamic, for example, inputs of variable image sizes in an object detection task or of variable sequence lengths in NLP tasks, the Intel® Extension for PyTorch* INT8 path may slow down the model inference. In this case, please utilize stock PyTorch INT8 functionality.
- Low throughput with DLRM FP32 Train
A ‘Sparse Add’ PR is pending on review. The issue will be fixed when the PR is merged.
- If the inference is done with a custom function, the conv+bn folding feature of the `ipex.optimize()` function doesn't work.

```python
import torch
import intel_extension_for_pytorch as ipex

class Module(torch.nn.Module):
    def __init__(self):
        super(Module, self).__init__()
        self.conv = torch.nn.Conv2d(1, 10, 5, 1)
        self.bn = torch.nn.BatchNorm2d(10)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

    def inference(self, x):
        return self.forward(x)

if __name__ == '__main__':
    m = Module()
    m.eval()
    m = ipex.optimize(m, dtype=torch.float32, level="O0")
    d = torch.rand(1, 1, 112, 112)
    with torch.no_grad():
        m.inference(d)
```

This is a PyTorch FX limitation. You can avoid this error by calling `m = ipex.optimize(m, level="O0")`, which doesn't apply the extension optimization, or disable `conv+bn` folding by calling `m = ipex.optimize(m, level="O1", conv_bn_folding=False)`.
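For the OMP-thread known issue above, a common mitigation is to pin the OpenMP thread count once, before the model is optimized and executed, and keep it unchanged afterwards. The sketch below is illustrative only; the thread count and toy model are assumptions, not taken from these notes.

```python
import torch
import intel_extension_for_pytorch as ipex

# Pin the thread count once, up front (the value 4 is an illustrative assumption).
torch.set_num_threads(4)

model = torch.nn.Linear(64, 64).eval()
model = ipex.optimize(model, dtype=torch.float32)

x = torch.rand(8, 64)
with torch.no_grad():
    y = model(x)  # do not change the OMP thread count after this point
```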
What's Changed
Full Changelog: v1.10.100...v1.11.0
Intel® Extension for PyTorch* v1.10.100-cpu Release Notes
This release is meant to fix the following issues:
- Resolve the issue that the PyTorch Tensor Expression (TE) did not work after importing the extension.
- Wrap BatchNorm (BN) as another operator to break TE's BN-related fusions, because the BatchNorm performance of PyTorch Tensor Expression cannot achieve the same performance as PyTorch ATen BN.
- Update the documentation
- Fix the INT8 quantization example issue #205
- Polish the installation guide
Full Changelog: v1.10.0...v1.10.100
v1.10.0
Intel® Extension for PyTorch* v1.10.0-cpu Release Notes
The Intel® Extension for PyTorch* 1.10 is built on top of PyTorch 1.10. In this release, we polished the front-end APIs. The APIs are simpler, more stable, and more straightforward now. Following the PyTorch community recommendation, we changed the underlying device from `XPU` to `CPU`. With this change, the model and tensors do not need to be converted to the extension device to get a performance improvement, which simplifies model changes.
Besides that, we continuously optimize Transformer* and CNN models by fusing more operators and applying NHWC. We measured the 1.10 performance on TorchVision and HuggingFace models. As expected, 1.10 can speed up the two model zoos. In addition, 1.10 releases the C++ SDK to facilitate PyTorch deployment with the extension.
Highlights
- Change the package name to `intel_extension_for_pytorch` from the original package name `intel_pytorch_extension`. This change targets avoiding any potential legal issues.
| v1.9.0-cpu | v1.10.0-cpu |
| --- | --- |
| `import intel_pytorch_extension as ipex` | `import intel_extension_for_pytorch as ipex` |
- The underlying device is changed from the extension-specific device (`XPU`) to the standard CPU device, which aligns with the PyTorch CPU device design regardless of the dispatch mechanism and operator registration mechanism. The model does not need to be converted to the extension device explicitly.
v1.9.0-cpu:

```python
import torch
import torchvision.models as models
# Import the extension
import intel_pytorch_extension as ipex
resnet18 = models.resnet18(pretrained = True)
# Explicitly convert the model to the extension device
resnet18_xpu = resnet18.to(ipex.DEVICE)
```

v1.10.0-cpu:

```python
import torch
import torchvision.models as models
# Import the extension
import intel_extension_for_pytorch as ipex
resnet18 = models.resnet18(pretrained = True)
```
- Compared to 1.9.0, 1.10.0 follows the PyTorch AMP API (`torch.cpu.amp`) to support auto-mixed-precision. `torch.cpu.amp` provides convenience for automatic data type conversion at runtime. `torch.cpu.amp` now supports `torch.bfloat16` to boost performance on Intel CPUs that have BFloat16 instructions.
```python
import torch

class SimpleNet(torch.nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.conv = torch.nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=(1, 1), bias=False)

    def forward(self, x):
        return self.conv(x)
```
v1.9.0-cpu:

```python
# Import the extension
import intel_pytorch_extension as ipex
# Automatically mix precision
ipex.enable_auto_mixed_precision(mixed_dtype = torch.bfloat16)
model = SimpleNet().eval()
x = torch.rand(64, 64, 224, 224)
with torch.no_grad():
    model = torch.jit.trace(model, x)
    model = torch.jit.freeze(model)
    y = model(x)
```

v1.10.0-cpu:

```python
# Import the extension
import intel_extension_for_pytorch as ipex
model = SimpleNet().eval()
x = torch.rand(64, 64, 224, 224)
with torch.cpu.amp.autocast(), torch.no_grad():
    model = torch.jit.trace(model, x)
    model = torch.jit.freeze(model)
    y = model(x)
```
- The 1.10 release provides INT8 calibration as an experimental feature, while it only supports post-training static quantization now. Compared to 1.9.0, the front-end APIs for quantization are more straightforward and easier to use.
```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv = nn.Conv2d(10, 10, 3)

    def forward(self, x):
        x = self.conv(x)
        return x

model = MyModel().eval()
# user dataset for calibration.
xx_c = [torch.randn(1, 10, 28, 28) for i in range(2)]
# user dataset for validation.
xx_v = [torch.randn(1, 10, 28, 28) for i in range(20)]
```
- Calibration
v1.9.0-cpu:

```python
# Import the extension
import intel_pytorch_extension as ipex
# Convert the model to the Extension device
model = Model().to(ipex.DEVICE)
# Create a configuration file to save quantization parameters.
conf = ipex.AmpConf(torch.int8)
with torch.no_grad():
    for x in xx_c:
        # Run the model under calibration mode to collect quantization parameters
        with ipex.AutoMixPrecision(conf, running_mode='calibration'):
            y = model(x.to(ipex.DEVICE))
# Save the configuration file
conf.save('configure.json')
```

v1.10.0-cpu:

```python
# Import the extension
import intel_extension_for_pytorch as ipex
conf = ipex.quantization.QuantConf(qscheme=torch.per_tensor_affine)
with torch.no_grad():
    for x in xx_c:
        with ipex.quantization.calibrate(conf):
            y = model(x)
conf.save('configure.json')
```
- Inference
v1.9.0-cpu:

```python
# Import the extension
import intel_pytorch_extension as ipex
# Convert the model to the Extension device
model = Model().to(ipex.DEVICE)
conf = ipex.AmpConf(torch.int8, 'configure.json')
with torch.no_grad():
    for x in cali_dataset:
        with ipex.AutoMixPrecision(conf, running_mode='inference'):
            y = model(x.to(ipex.DEVICE))
```

v1.10.0-cpu:

```python
# Import the extension
import intel_extension_for_pytorch as ipex
conf = ipex.quantization.QuantConf('configure.json')
with torch.no_grad():
    trace_model = ipex.quantization.convert(model, conf, example_input)
    for x in xx_v:
        y = trace_model(x)
```
- This release introduces the `optimize` API at the Python front end to optimize the model. The new API supports FP32 and BF16, inference, and training (see the sketch after this list).
- Runtime Extension (Experimental) provides a runtime CPU pool API to bind threads to cores. It also features async tasks. Please note: the Intel® Extension for PyTorch* Runtime extension is still in the POC stage. The API is subject to change. More detailed descriptions are available in the extension documentation.
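A minimal sketch of the `optimize` API for FP32 inference follows; the torchvision model and input shape are illustrative assumptions, not taken from these notes.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Optimize a pre-trained model for FP32 inference (model choice is illustrative).
model = models.resnet18(pretrained=True).eval()
optimized_model = ipex.optimize(model, dtype=torch.float32)

x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    y = optimized_model(x)
```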
Known Issues
- The `omp_set_num_threads` function failed to change the number of OpenMP threads of oneDNN operators if it was set before.
The `omp_set_num_threads` function is provided in Intel® Extension for PyTorch* to change the number of threads used with OpenMP. However, it failed to change the number of OpenMP threads if it was set before.

Pseudo-code:

```python
omp_set_num_threads(6)
model_execution()
omp_set_num_threads(4)
same_model_execution_again()
```
Reason: oneDNN primitive descriptor stores the OMP number of threads. Current oneDNN integration caches the primitive descriptor in the extension. So if we use runtime extension with oneDNN based on top of PyTorch or the extension, the runtime extension fails to change the used OMP number of threads.
- Low performance with INT8 support for dynamic shapes
The support for dynamic shapes in Intel® Extension for PyTorch* INT8 integration is still a work in progress. For use cases where the input shapes are dynamic, for example, inputs of variable image sizes in an object detection task or of variable sequence lengths in NLP tasks, the Intel® Extension for PyTorch* INT8 path may slow down the model inference. In this case, please utilize stock PyTorch INT8 functionality.
- Low throughput with DLRM FP32 Train
A 'Sparse Add' PR is pending review. The issue will be fixed when the PR is merged.
What's Changed
Full Changelog: v1.9.0...v1.10.0
v1.9.0
Intel Extension For PyTorch 1.9.0 Release Notes
What's New
PyTorch 1.9.0 is newly supported by Intel Extension for PyTorch 1.9.0.
- Rebased the Intel Extension for PyTorch from PyTorch 1.8.0 to the official PyTorch 1.9.0 release.
- Support binary installation. Wheel files are available for the Python versions listed below:

```
python -m pip install torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stable
```
| IPEX Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 |
| --- | --- | --- | --- | --- |
| 1.9.0 | ✔️ | ✔️ | ✔️ | ✔️ |
| 1.8.0 |  | ✔️ |  |  |

- Support the C++ library. Third-party apps can link the Intel-Extension-for-PyTorch C++ library to enable the particular optimizations.
v1.8.0
Intel Extension For PyTorch 1.8.0 Release Notes
What's New
PyTorch 1.8.0 is newly supported by Intel Extension for PyTorch 1.8.0.
- Rebased the Intel Extension for PyTorch from PyTorch 1.7.0 to the official PyTorch 1.8.0 release. The new XPU device type has been added into PyTorch 1.8.0 (#49786), so there is no need to patch PyTorch to enable Intel Extension for PyTorch anymore
- Upgraded the oneDNN from v1.5-rc to v1.8.1
- Updated the README file to add sections introducing supported customized operators, supported fusion patterns, tutorials, and joint blogs with stakeholders