
Commit bd0851c

Update doc from commit d0dd1a7
torchxlabot2 committed Feb 6, 2024
1 parent 1ee0199 commit bd0851c
Showing 14 changed files with 46 additions and 17 deletions.
2 changes: 1 addition & 1 deletion master/_modules/index.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


2 changes: 1 addition & 1 deletion master/_modules/torch_xla/core/functions.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


2 changes: 1 addition & 1 deletion master/_modules/torch_xla/core/xla_model.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


2 changes: 1 addition & 1 deletion master/_modules/torch_xla/distributed/parallel_loader.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


2 changes: 1 addition & 1 deletion master/_modules/torch_xla/utils/serialization.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


2 changes: 1 addition & 1 deletion master/_modules/torch_xla/utils/utils.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


2 changes: 1 addition & 1 deletion master/genindex.html
@@ -226,7 +226,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


39 changes: 34 additions & 5 deletions master/index.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


@@ -303,7 +303,8 @@
</ul>
</li>
<li><a class="reference internal" href="#performance-debugging">Performance Debugging</a></li>
<li><a class="reference internal" href="#pytorch-xla-debugging-tool">PyTorch/XLA Debugging Tool</a><ul>
<li><a class="reference internal" href="#pytorch-xla-debugging-tool">PyTorch/XLA Debugging Tool</a></li>
<li><a class="reference internal" href="#pytorch-xla-dynamo-debugging-tool">PyTorch/XLA + Dynamo Debugging Tool</a><ul>
<li><a class="reference internal" href="#perform-a-auto-metrics-analysis">Perform A Auto-Metrics Analysis</a></li>
<li><a class="reference internal" href="#compilation-execution-analysis">Compilation &amp; Execution Analysis</a></li>
</ul>
@@ -348,6 +349,7 @@
</ul>
</li>
<li><a class="reference internal" href="#torchdynamo-torch-compile-integration-in-pytorch-xla">TorchDynamo(torch.compile) integration in PyTorch XLA</a><ul>
<li><a class="reference internal" href="#integration">Integration</a></li>
<li><a class="reference internal" href="#inference">Inference</a></li>
<li><a class="reference internal" href="#training">Training</a></li>
<li><a class="reference internal" href="#feature-gaps">Feature gaps</a></li>
@@ -1905,6 +1907,10 @@ <h2>Performance Debugging<a class="headerlink" href="#performance-debugging" tit
<div class="section" id="pytorch-xla-debugging-tool">
<h2>PyTorch/XLA Debugging Tool<a class="headerlink" href="#pytorch-xla-debugging-tool" title="Permalink to this headline"></a></h2>
<p>You can enable the PyTorch/XLA debugging tool by setting <code class="docutils literal notranslate"><span class="pre">PT_XLA_DEBUG=1</span></code>, which provides a couple useful debugging features.</p>
+ </div>
+ <div class="section" id="pytorch-xla-dynamo-debugging-tool">
+ <h2>PyTorch/XLA + Dynamo Debugging Tool<a class="headerlink" href="#pytorch-xla-dynamo-debugging-tool" title="Permalink to this headline"></a></h2>
+ <p>You can enable the PyTorch/XLA + Dynamo debugging tool by setting <code class="docutils literal notranslate"><span class="pre">XLA_DYNAMO_DEBUG=1</span></code>.</p>
<div class="section" id="perform-a-auto-metrics-analysis">
<h3>Perform A Auto-Metrics Analysis<a class="headerlink" href="#perform-a-auto-metrics-analysis" title="Permalink to this headline"></a></h3>
<p>The debugging tool will analyze the metrics report and provide a summary. Some example output would be</p>
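For context on the hunk above: both tools are enabled through environment variables. A minimal sketch of turning them on from inside a script (setting the flags in the shell before launching works equally well; the toy computation is hypothetical):

import os

# Set the debug flags before importing torch_xla so they are seen
# when the runtime initializes.
os.environ["PT_XLA_DEBUG"] = "1"       # PyTorch/XLA debugging tool
os.environ["XLA_DYNAMO_DEBUG"] = "1"   # PyTorch/XLA + Dynamo debugging tool

import torch
import torch_xla.core.xla_model as xm

# Any workload will do; debug summaries are emitted as the program runs.
t = torch.randn(10, device=xm.xla_device())
print(t + t)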
@@ -2637,8 +2643,29 @@ <h3>New TPU runtime<a class="headerlink" href="#new-tpu-runtime" title="Permalin
</div>
<div class="section" id="torchdynamo-torch-compile-integration-in-pytorch-xla">
<h2>TorchDynamo(torch.compile) integration in PyTorch XLA<a class="headerlink" href="#torchdynamo-torch-compile-integration-in-pytorch-xla" title="Permalink to this headline"></a></h2>
<p><a class="reference external" href="https://pytorch.org/docs/stable/dynamo/index.html">TorchDynamo</a> is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It provides a clean API for compiler backends to hook in and its biggest feature is to dynamically modify Python bytecode right before it is executed. In the pytorch/xla 2.0 release, PyTorch/XLA provided an experimental backend for the TorchDynamo for both inference and training.</p>
<p><a class="reference external" href="https://pytorch.org/docs/stable/torch.compiler.html">TorchDynamo</a> is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It provides a clean API for compiler backends to hook in and its biggest feature is to dynamically modify Python bytecode right before it is executed. In the pytorch/xla 2.0 release, PyTorch/XLA provided an experimental backend for the TorchDynamo for both inference and training.</p>
<p>The way that XLA bridge works is that Dynamo will provide a TorchFX graph when it recognizes a model pattern and PyTorch/XLA will use existing Lazy Tensor technology to compile the FX graph and return the compiled function.</p>
<div class="section" id="integration">
<h3>Integration<a class="headerlink" href="#integration" title="Permalink to this headline"></a></h3>
<p>Support for PyTorch/XLA and Dynamo currently exists by adding the <code class="docutils literal notranslate"><span class="pre">backend='openxla'</span></code> argument to <code class="docutils literal notranslate"><span class="pre">torch.compile</span></code>. For example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">import</span> <span class="nn">torch_xla.core.xla_model</span> <span class="k">as</span> <span class="nn">xm</span>

<span class="k">def</span> <span class="nf">add</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">):</span>
<span class="n">a_xla</span> <span class="o">=</span> <span class="n">a</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xm</span><span class="o">.</span><span class="n">xla_device</span><span class="p">())</span>
<span class="n">b_xla</span> <span class="o">=</span> <span class="n">b</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">xm</span><span class="o">.</span><span class="n">xla_device</span><span class="p">())</span>
<span class="k">return</span> <span class="n">a_xla</span> <span class="o">+</span> <span class="n">b_xla</span>

<span class="n">compiled_code</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">add</span><span class="p">,</span> <span class="n">backend</span><span class="o">=</span><span class="s1">&#39;openxla&#39;</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">compiled_code</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">10</span><span class="p">),</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">10</span><span class="p">)))</span>
</pre></div>
</div>
+ <p>Currently there are two different backends, which will eventually be merged into a single ‘openxla’ backend:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">backend='openxla'</span></code> - Useful for training.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">backend='openxla_eval'</span></code> - Useful for inference.</p></li>
</ul>
+ </div>
<div class="section" id="inference">
<h3>Inference<a class="headerlink" href="#inference" title="Permalink to this headline"></a></h3>
<p>Here is a small code example of running resnet18 with <code class="docutils literal notranslate"><span class="pre">torch.compile</span></code></p>
@@ -2671,7 +2698,7 @@ <h3>Inference<a class="headerlink" href="#inference" title="Permalink to this he
geomean | 3.04</p>
<p>Note</p>
<ol class="arabic simple">
- <li><p>User will likely see better inference perfomrance by putting the inference execution in a <code class="docutils literal notranslate"><span class="pre">torch.no_grad</span></code> context. <code class="docutils literal notranslate"><span class="pre">openxla</span></code> is a <code class="docutils literal notranslate"><span class="pre">aot-autograd</span></code> backend of <code class="docutils literal notranslate"><span class="pre">torch.compile</span></code>. <code class="docutils literal notranslate"><span class="pre">Aot-autograd</span></code> will attempt to save some states for potential backward. <code class="docutils literal notranslate"><span class="pre">torch.no_grad</span></code> will help <code class="docutils literal notranslate"><span class="pre">aot-autograd</span></code> understand that it is being executed in a inference context.</p></li>
+ <li><p>User will likely see better inference performance by putting the inference execution in a <code class="docutils literal notranslate"><span class="pre">torch.no_grad</span></code> context. <code class="docutils literal notranslate"><span class="pre">openxla</span></code> is an <code class="docutils literal notranslate"><span class="pre">aot-autograd</span></code> backend of <code class="docutils literal notranslate"><span class="pre">torch.compile</span></code>; <code class="docutils literal notranslate"><span class="pre">aot-autograd</span></code> attempts to save some state for a potential backward pass. Setting <code class="docutils literal notranslate"><span class="pre">torch.no_grad</span></code> helps <code class="docutils literal notranslate"><span class="pre">aot-autograd</span></code> understand that it is being executed in an inference context.</p></li>
<li><p>User can also use the <code class="docutils literal notranslate"><span class="pre">openxla_eval</span></code> backend directly without <code class="docutils literal notranslate"><span class="pre">torch.no_grad</span></code>, since <code class="docutils literal notranslate"><span class="pre">openxla_eval</span></code> is not an <code class="docutils literal notranslate"><span class="pre">aot-autograd</span></code> backend and only works for inference.</p></li>
</ol>
</div>
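Putting the two notes above together, a minimal inference sketch in the spirit of the resnet18 example referenced earlier (using torchvision's resnet18 is an assumption here; any model works):

import torch
import torchvision
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torchvision.models.resnet18().to(device).eval()
data = torch.randn(4, 3, 224, 224, device=device)

# Option 1: the general-purpose 'openxla' backend, wrapped in no_grad
# so aot-autograd knows no backward state needs to be saved.
compiled_model = torch.compile(model, backend='openxla')
with torch.no_grad():
  output = compiled_model(data)

# Option 2: the inference-only 'openxla_eval' backend; since it is not
# an aot-autograd backend, no no_grad context is needed.
eval_model = torch.compile(model, backend='openxla_eval')
output = eval_model(data)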
@@ -3792,7 +3819,8 @@ <h2>HuggingFace Llama 2 Example<a class="headerlink" href="#huggingface-llama-2-
</ul>
</li>
<li><a class="reference internal" href="#performance-debugging">Performance Debugging</a></li>
<li><a class="reference internal" href="#pytorch-xla-debugging-tool">PyTorch/XLA Debugging Tool</a><ul>
<li><a class="reference internal" href="#pytorch-xla-debugging-tool">PyTorch/XLA Debugging Tool</a></li>
<li><a class="reference internal" href="#pytorch-xla-dynamo-debugging-tool">PyTorch/XLA + Dynamo Debugging Tool</a><ul>
<li><a class="reference internal" href="#perform-a-auto-metrics-analysis">Perform A Auto-Metrics Analysis</a></li>
<li><a class="reference internal" href="#compilation-execution-analysis">Compilation &amp; Execution Analysis</a></li>
</ul>
@@ -3837,6 +3865,7 @@ <h2>HuggingFace Llama 2 Example<a class="headerlink" href="#huggingface-llama-2-
</ul>
</li>
<li><a class="reference internal" href="#torchdynamo-torch-compile-integration-in-pytorch-xla">TorchDynamo(torch.compile) integration in PyTorch XLA</a><ul>
<li><a class="reference internal" href="#integration">Integration</a></li>
<li><a class="reference internal" href="#inference">Inference</a></li>
<li><a class="reference internal" href="#training">Training</a></li>
<li><a class="reference internal" href="#feature-gaps">Feature gaps</a></li>
2 changes: 1 addition & 1 deletion master/notes/source_of_recompilation.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


Binary file modified master/objects.inv
2 changes: 1 addition & 1 deletion master/py-modindex.html
@@ -228,7 +228,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


2 changes: 1 addition & 1 deletion master/search.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+git14b1ee7 )
+ master (2.2.0+gitd0dd1a7 )
</div>


2 changes: 1 addition & 1 deletion master/searchindex.js

