typo correction, add meta, add isa requirements
jingxu10 authored and EikanWang committed Aug 30, 2022
1 parent 4381f91 commit 275feac
Showing 3 changed files with 7 additions and 1 deletion.
4 changes: 4 additions & 0 deletions docs/index.rst
@@ -3,6 +3,10 @@
 You can adapt this file completely to your liking, but it should at least
 contain the root `toctree` directive.
+.. meta::
+   :description: This website introduces Intel® Extension for PyTorch*
+   :keywords: Intel optimization, PyTorch, Intel® Extension for PyTorch*
+
 Welcome to Intel® Extension for PyTorch* Documentation
 ######################################################
 
2 changes: 1 addition & 1 deletion docs/tutorials/features/int8.md
@@ -3,7 +3,7 @@ Intel® Extension for PyTorch\* optimizations for quantization
 
 The quantization functionality in Intel® Extension for PyTorch\* currently only supports post-training quantization. This tutorial introduces how the quantization works in the Intel® Extension for PyTorch\* side.
 
-We fully utilize Pytorch quantization components as much as possible, such as PyTorch [Observer method](https://pytorch.org/docs/1.11/quantization-support.html#torch-quantization-observer). To make a PyTorch user be able to easily use the quantization API, API for quantization in Intel® Extension for PyTorch\* is very similar to those in PyTorch. Intel® Extension for PyTorch\* quantization supports a default recipe to automatically decide which operators should be quanized or not. This brings a satisfying performance and accuracy tradeoff.
+We fully utilize Pytorch quantization components as much as possible, such as PyTorch [Observer method](https://pytorch.org/docs/1.11/quantization-support.html#torch-quantization-observer). To make a PyTorch user be able to easily use the quantization API, API for quantization in Intel® Extension for PyTorch\* is very similar to those in PyTorch. Intel® Extension for PyTorch\* quantization supports a default recipe to automatically decide which operators should be quantized or not. This brings a satisfying performance and accuracy tradeoff.
 
 ## Static Quantization
 
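The observer-based calibration that the changed paragraph describes can be illustrated with a minimal, framework-free sketch of affine uint8 quantization, similar in spirit to what a PyTorch `MinMaxObserver` computes. The helper names below are hypothetical, not part of the PyTorch or Intel® Extension for PyTorch\* APIs:

```python
def minmax_observer(values):
    """Track the min/max of observed values and derive uint8 quantization
    parameters (scale, zero_point), as an observer does during calibration."""
    lo = min(min(values), 0.0)  # widen the range so 0.0 stays representable
    hi = max(max(values), 0.0)
    scale = (hi - lo) / 255.0 or 1.0  # avoid a zero scale for constant input
    zero_point = round(-lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point):
    """Affine-quantize floats to uint8 using the observed parameters."""
    return [max(0, min(255, round(v / scale) + zero_point)) for v in values]

# Calibration data "seen" by the observer, then quantized with its parameters.
activations = [-1.0, 0.0, 2.0]
scale, zp = minmax_observer(activations)
print(quantize(activations, scale, zp))  # -> [0, 85, 255]
```

The default recipe mentioned above automates the decision of which operators get such observers inserted, so users do not have to place them manually.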
2 changes: 2 additions & 0 deletions docs/tutorials/installation.md
@@ -9,6 +9,8 @@ Installation Guide
 |Operating System|CentOS 7, RHEL 8, Rocky Linux 8.5, Ubuntu newer than 18.04|
 |Python|See prebuilt wheel files availability matrix below|
 
+* Intel® Extension for PyTorch\* is functional on systems with AVX2 instruction set support (such as Intel® Core™ Processor Family and Intel® Xeon® Processor formerly Broadwell). However, it is highly recommended to run on systems with AVX-512 and above instructions support for optimal performance (such as Intel® Xeon® Scalable Processors).
+
 ## Install PyTorch
 
 Make sure PyTorch is installed so that the extension will work properly. For each PyTorch release, we have a corresponding release of the extension. Here are the PyTorch versions that we support and the mapping relationship:
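The AVX2/AVX-512 requirement added in this commit can be checked before installing. A minimal sketch for Linux, reading `/proc/cpuinfo` (the helper name is illustrative; on platforms without that file it simply reports no flags):

```python
def cpu_flags():
    """Return the CPU feature-flag set from /proc/cpuinfo on Linux,
    or an empty set on platforms where that file does not exist."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    # "flags : fpu vme ... avx2 ..." -> {"fpu", "vme", ...}
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_flags()
print("AVX2 (minimum, extension functional):", "avx2" in flags)
print("AVX-512F (recommended for performance):", "avx512f" in flags)
```

A machine reporting `avx2` but not `avx512f` (e.g. Broadwell-era Xeon® or Core™ parts) will run the extension, but the AVX-512-capable Xeon® Scalable Processors named above are the recommended target.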
