Intel® Extension for TensorFlow* is a heterogeneous, high-performance deep learning extension plugin based on the TensorFlow PluggableDevice interface. It brings Intel XPU devices (GPU, CPU, etc.) into the TensorFlow open source community for AI workload acceleration, allowing an XPU to be plugged into TensorFlow on demand and exposing the computing power of Intel hardware.
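For example, once the package is installed, the plugin's devices appear through the standard TensorFlow device APIs. Below is a minimal sketch, assuming the plugin registers its devices under the "XPU" device type; the exact type string may vary by release.

```python
import tensorflow as tf
import intel_extension_for_tensorflow as itex  # explicit import is usually optional;
                                               # TensorFlow discovers installed plugins

# List every physical device TensorFlow can see, then only the plugin's devices.
# "XPU" is the device type this plugin is expected to register; adjust if your
# installed release reports a different name.
print(tf.config.list_physical_devices())
print(tf.config.list_physical_devices("XPU"))
```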
The TensorFlow* PyPI package ecosystem is summarized below:

- TensorFlow PyPI packages: estimator, keras, tensorboard, tensorflow-base
- Intel® Extension for TensorFlow* package: intel_extension_for_tensorflow, which contains:
  - XPU specific implementation
    - kernels & operators
    - graph optimizer
    - device runtime
  - XPU configuration management
    - XPU backend selection
    - Options for turning advanced features on/off (see the sketch after this list)
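As an illustration of the configuration-management side, the snippet below sketches how an advanced feature can be toggled from Python. It assumes the itex configuration API names documented for recent releases (ConfigProto, GraphOptions, AutoMixedPrecisionOptions, set_config, and the ON/BFLOAT16 constants); exact names and defaults may differ in your installed version, so treat this as illustrative rather than definitive.

```python
import intel_extension_for_tensorflow as itex

# Sketch: enable automatic mixed precision through the plugin's config API.
# Class and option names follow the itex Python API as documented; verify them
# against the version you have installed.
amp_options = itex.AutoMixedPrecisionOptions()
amp_options.data_type = itex.BFLOAT16           # low-precision type to use

graph_options = itex.GraphOptions(auto_mixed_precision_options=amp_options)
graph_options.auto_mixed_precision = itex.ON    # switch the feature on

config = itex.ConfigProto(graph_options=graph_options)
itex.set_config(config)
```

Many of the same switches are also exposed as environment variables; see the itex documentation for the authoritative list.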
Intel® Extension for TensorFlow* provides Intel GPU support and experimental Intel CPU support.
| Package | CPU | GPU | Installation |
| --- | --- | --- | --- |
| Intel GPU driver | | Y | Install Intel GPU driver |
| Intel® oneAPI Base Toolkit | | Y | Install Intel® oneAPI Base Toolkit |
| TensorFlow | Y | Y | Install TensorFlow 2.10.0 |
Intel® Extension for TensorFlow* can be installed from the following channels:
| PyPI | DockerHub | Source |
| --- | --- | --- |
| GPU \ CPU | GPU Container \ CPU Container | Build from source |
Install for GPU:

```bash
pip install tensorflow==2.10.0
pip install --upgrade intel-extension-for-tensorflow[gpu]
```

Please refer to GPU installation for details.
Install for CPU (experimental):

```bash
pip install tensorflow==2.10.0
pip install --upgrade intel-extension-for-tensorflow[cpu]
```
Verify the installation with a sanity check:

```bash
python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"
```
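Beyond the version check, a short smoke test can confirm that operators actually execute through the plugin. A minimal sketch, assuming a GPU install that registers an "XPU" device (with the CPU package the op simply stays on the stock CPU device):

```python
import tensorflow as tf
import intel_extension_for_tensorflow as itex  # explicit import is usually optional

# Run a small matmul and print where it was placed. With the GPU package and a
# working driver/oneAPI setup, the device string should point at an XPU device.
a = tf.random.uniform([1024, 1024])
b = tf.random.uniform([1024, 1024])
c = tf.matmul(a, b)
print("placed on:", c.device)
print("result shape:", c.shape)
```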
Please submit your questions, feature requests, and bug reports on the GitHub issues page.