From 275feac8b5e750881685d45c8d2f2620f7281571 Mon Sep 17 00:00:00 2001
From: Jing Xu
Date: Tue, 30 Aug 2022 19:20:11 +0900
Subject: [PATCH] typo correction, add meta, add isa requirements

---
 docs/index.rst                  | 4 ++++
 docs/tutorials/features/int8.md | 2 +-
 docs/tutorials/installation.md  | 2 ++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/index.rst b/docs/index.rst
index 257fed167..ec47d20bf 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -3,6 +3,10 @@
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.
 
+.. meta::
+   :description: This website introduces Intel® Extension for PyTorch*
+   :keywords: Intel optimization, PyTorch, Intel® Extension for PyTorch*
+
 Welcome to Intel® Extension for PyTorch* Documentation
 ######################################################
 
diff --git a/docs/tutorials/features/int8.md b/docs/tutorials/features/int8.md
index 0ca36ed56..3c4b3df22 100644
--- a/docs/tutorials/features/int8.md
+++ b/docs/tutorials/features/int8.md
@@ -3,7 +3,7 @@ Intel® Extension for PyTorch\* optimizations for quantization
 
 The quantization functionality in Intel® Extension for PyTorch\* currently only supports post-training quantization. This tutorial introduces how the quantization works in the Intel® Extension for PyTorch\* side.
 
-We fully utilize Pytorch quantization components as much as possible, such as PyTorch [Observer method](https://pytorch.org/docs/1.11/quantization-support.html#torch-quantization-observer). To make a PyTorch user be able to easily use the quantization API, API for quantization in Intel® Extension for PyTorch\* is very similar to those in PyTorch. Intel® Extension for PyTorch\* quantization supports a default recipe to automatically decide which operators should be quanized or not. This brings a satisfying performance and accuracy tradeoff. 
+We make full use of PyTorch quantization components where possible, such as the PyTorch [Observer method](https://pytorch.org/docs/1.11/quantization-support.html#torch-quantization-observer). To make the quantization API easy for PyTorch users, the quantization API in Intel® Extension for PyTorch\* is very similar to that in PyTorch. Intel® Extension for PyTorch\* quantization supports a default recipe that automatically decides which operators should be quantized, providing a satisfying trade-off between performance and accuracy.
 
 ## Static Quantization
 
diff --git a/docs/tutorials/installation.md b/docs/tutorials/installation.md
index 0e7b40349..57f8db32b 100644
--- a/docs/tutorials/installation.md
+++ b/docs/tutorials/installation.md
@@ -9,6 +9,8 @@ Installation Guide
 |Operating System|CentOS 7, RHEL 8, Rocky Linux 8.5, Ubuntu newer than 18.04|
 |Python|See prebuilt wheel files availability matrix below|
 
+* Intel® Extension for PyTorch\* is functional on systems with AVX2 instruction set support (such as the Intel® Core™ Processor Family and Intel® Xeon® Processors formerly Broadwell). However, it is highly recommended to run on systems with AVX-512 (and newer) instruction set support (such as Intel® Xeon® Scalable Processors) for optimal performance.
+
 ## Install PyTorch
 
 Make sure PyTorch is installed so that the extension will work properly. For each PyTorch release, we have a corresponding release of the extension. Here are the PyTorch versions that we support and the mapping relationship:
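The ISA requirement added to `installation.md` above can be verified on Linux by reading the `flags` line of `/proc/cpuinfo`. Below is a minimal sketch; the helper name `supported_isas` is illustrative and not part of the extension, and `avx512f` is used as a representative AVX-512 flag:

```python
# Sketch: check whether the CPU advertises the AVX2 / AVX-512 flags that
# the installation note refers to. Linux-only, since it parses the
# "flags" line of /proc/cpuinfo. Helper name is hypothetical.

def supported_isas(cpuinfo_text):
    """Return {isa_flag: bool} for the first CPU entry in cpuinfo_text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu sse avx2 ..." -> set of individual flag names
            flags.update(line.split(":", 1)[1].split())
            break
    return {isa: isa in flags for isa in ("avx2", "avx512f")}

# Example with a synthetic /proc/cpuinfo excerpt (real use would read the file):
sample = "processor\t: 0\nflags\t\t: fpu sse avx avx2\n"
print(supported_isas(sample))  # avx2 present, avx512f absent
```

On a real system, pass `open("/proc/cpuinfo").read()` instead of the sample string; an AVX2-only result means the extension is functional but will not reach the recommended AVX-512 performance level.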