
Commit

Update doc from commit 050a240
torchxlabot2 committed Jan 10, 2024
1 parent b567823 commit 97979c3
Showing 14 changed files with 43 additions and 44 deletions.
2 changes: 1 addition & 1 deletion master/_modules/index.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


2 changes: 1 addition & 1 deletion master/_modules/torch_xla/core/functions.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


52 changes: 26 additions & 26 deletions master/_modules/torch_xla/core/xla_model.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


@@ -402,7 +402,7 @@ <h1>Source code for torch_xla.core.xla_model</h1>


<span class="k">def</span> <span class="nf">parse_xla_device</span><span class="p">(</span><span class="n">device</span><span class="p">):</span>
<span class="n">m</span> <span class="o">=</span> <span class="n">re</span><span class="o">.</span><span class="n">match</span><span class="p">(</span><span class="sa">r</span><span class="s1">&#39;(CPU|TPU|GPU|ROCM|CUDA|XPU|NEURON):(\d+)$&#39;</span><span class="p">,</span> <span class="n">device</span><span class="p">)</span>
<span class="n">m</span> <span class="o">=</span> <span class="n">re</span><span class="o">.</span><span class="n">match</span><span class="p">(</span><span class="sa">r</span><span class="s1">&#39;([A-Z]+):(\d+)$&#39;</span><span class="p">,</span> <span class="n">device</span><span class="p">)</span>
<span class="k">if</span> <span class="n">m</span><span class="p">:</span>
<span class="k">return</span> <span class="p">(</span><span class="n">m</span><span class="o">.</span><span class="n">group</span><span class="p">(</span><span class="mi">1</span><span class="p">),</span> <span class="nb">int</span><span class="p">(</span><span class="n">m</span><span class="o">.</span><span class="n">group</span><span class="p">(</span><span class="mi">2</span><span class="p">)))</span>

@@ -411,32 +411,33 @@ <h1>Source code for torch_xla.core.xla_model</h1>
  """Returns a list of supported devices of a given kind.

  Args:
-    devkind (string..., optional): If specified, one of `TPU`, `GPU`, `XPU`,
-      `NEURON` or `CPU` (the 'GPU' XLA device is currently not implemented).
+    devkind (string..., optional): If specified, a device type such as `TPU`,
+      `CUDA`, `CPU`, or name of custom PJRT device.
    max_devices (int, optional): The maximum number of devices to be returned of
      that kind.

  Returns:
    The list of device strings.
  """
-  # TODO(xiowei replace gpu with cuda): Remove the below if statement after r2.2 release.
-  if devkind and devkind.casefold() == 'gpu':
-    warnings.warn(
-        "GPU as a device name is being deprecate. Please replace it with CUDA such as get_xla_supported_devices(devkind='CUDA'). Similarly, please replace PJRT_DEVICE=GPU with PJRT_DEVICE=CUDA."
-    )
-    devkind = 'CUDA'
+  # TODO(wcromar): Remove `devkind` after 2.3 release cut. We no longer support
+  # multiple device types.
+  if not devkind:
+    devices = torch_xla._XLAC._xla_get_devices()
+    return [
+        f'xla:{i}'
+        for i, _ in enumerate(devices[:max_devices] if max_devices else devices)
+    ]
+  else:
+    warnings.warn("`devkind` argument is deprecated and will be removed in a "
+                  "future release.")

  xla_devices = _DEVICES.value
-  devkind = [devkind] if devkind else [
-      'TPU', 'GPU', 'XPU', 'NEURON', 'CPU', 'CUDA', 'ROCM'
-  ]
-  for kind in devkind:
-    kind_devices = []
-    for i, device in enumerate(xla_devices):
-      if re.match(kind + r':\d+$', device):
-        kind_devices.append('xla:{}'.format(i))
-    if kind_devices:
-      return kind_devices[:max_devices] if max_devices else kind_devices
+  kind_devices = []
+  for i, device in enumerate(xla_devices):
+    if re.match(devkind + r':\d+$', device):
+      kind_devices.append('xla:{}'.format(i))
+  if kind_devices:
+    return kind_devices[:max_devices] if max_devices else kind_devices


<div class="viewcode-block" id="xrt_world_size"><a class="viewcode-back" href="../../../index.html#torch_xla.core.xla_model.xrt_world_size">[docs]</a><span class="k">def</span> <span class="nf">xrt_world_size</span><span class="p">(</span><span class="n">defval</span><span class="o">=</span><span class="mi">1</span><span class="p">):</span>
@@ -521,8 +522,8 @@ <h1>Source code for torch_xla.core.xla_model</h1>
    n (int, optional): The specific instance (ordinal) to be returned. If
      specified, the specific XLA device instance will be returned. Otherwise
      the first device of `devkind` will be returned.
-    devkind (string..., optional): If specified, one of `TPU`, `CUDA`, `XPU`
-      `NEURON`, `ROCM` or `CPU`.
+    devkind (string..., optional): If specified, device type such as `TPU`,
+      `CUDA`, `CPU`, or custom PJRT device. Deprecated.

  Returns:
    A `torch.device` with the requested instance.
@@ -560,8 +561,7 @@ <h1>Source code for torch_xla.core.xla_model</h1>
      real device.

  Returns:
-    A string representation of the hardware type (`CPU`, `TPU`, `XPU`, `NEURON`, `GPU`, `CUDA`, `ROCM`)
-    of the given device.
+    A string representation of the hardware type of the given device.
  """
  real_device = _xla_real_device(device)
  return real_device.split(':')[0]
@@ -888,8 +888,8 @@ <h1>Source code for torch_xla.core.xla_model</h1>
  """
  # _all_gather_using_all_reduce does not support list of tensors as input
  if pin_layout and output == None and isinstance(value, torch.Tensor):
-    # There is not an easy way to pin the all_gather layout on TPU, GPU and NEURON,
-    # use all_reduce based all_gather for this purpose.
+    # There is not an easy way to pin the all_gather layout, so use all_reduce
+    # based all_gather for this purpose.
    return _all_gather_using_all_reduce(
        value, dim=dim, groups=groups, pin_layout=True)

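For reference, a minimal sketch of the all_gather path discussed above, assuming it is launched with xmp.spawn so that several replicas participate; with the default pin_layout=True and a single input tensor, the all_reduce-based fallback shown in the hunk is taken.

import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
  # Each replica contributes one value; the gathered result has one entry per replica.
  value = torch.tensor([float(index)], device=xm.xla_device())
  gathered = xm.all_gather(value, dim=0, pin_layout=True)
  print(index, gathered.cpu())

if __name__ == '__main__':
  xmp.spawn(_mp_fn, args=())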
2 changes: 1 addition & 1 deletion master/_modules/torch_xla/distributed/parallel_loader.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


2 changes: 1 addition & 1 deletion master/_modules/torch_xla/utils/serialization.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


2 changes: 1 addition & 1 deletion master/_modules/torch_xla/utils/utils.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


2 changes: 1 addition & 1 deletion master/genindex.html
@@ -226,7 +226,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


13 changes: 6 additions & 7 deletions master/index.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


@@ -826,8 +826,8 @@ <h1>PyTorch/XLA API</h1>
<li><p><strong>n</strong> (<em>python:int</em><em>, </em><em>optional</em>) – The specific instance (ordinal) to be returned. If
specified, the specific XLA device instance will be returned. Otherwise
the first device of <cite>devkind</cite> will be returned.</p></li>
- <li><p><strong>devkind</strong> (<em>string...</em><em>, </em><em>optional</em>) – If specified, one of <cite>TPU</cite>, <cite>CUDA</cite>, <cite>XPU</cite>
- <cite>NEURON</cite>, <cite>ROCM</cite> or <cite>CPU</cite>.</p></li>
+ <li><p><strong>devkind</strong> (<em>string...</em><em>, </em><em>optional</em>) – If specified, device type such as <cite>TPU</cite>,
+ <cite>CUDA</cite>, <cite>CPU</cite>, or custom PJRT device. Deprecated.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
@@ -843,8 +843,8 @@ <h1>PyTorch/XLA API</h1>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
- <li><p><strong>devkind</strong> (<em>string...</em><em>, </em><em>optional</em>) – If specified, one of <cite>TPU</cite>, <cite>GPU</cite>, <cite>XPU</cite>,
- <cite>NEURON</cite> or <cite>CPU</cite> (the ‘GPU’ XLA device is currently not implemented).</p></li>
+ <li><p><strong>devkind</strong> (<em>string...</em><em>, </em><em>optional</em>) – If specified, a device type such as <cite>TPU</cite>,
+ <cite>CUDA</cite>, <cite>CPU</cite>, or name of custom PJRT device.</p></li>
<li><p><strong>max_devices</strong> (<em>python:int</em><em>, </em><em>optional</em>) – The maximum number of devices to be returned of
that kind.</p></li>
</ul>
@@ -865,8 +865,7 @@ <h1>PyTorch/XLA API</h1>
real device.</p>
</dd>
<dt class="field-even">Returns</dt>
- <dd class="field-even"><p>A string representation of the hardware type (<cite>CPU</cite>, <cite>TPU</cite>, <cite>XPU</cite>, <cite>NEURON</cite>, <cite>GPU</cite>, <cite>CUDA</cite>, <cite>ROCM</cite>)
- of the given device.</p>
+ <dd class="field-even"><p>A string representation of the hardware type of the given device.</p>
</dd>
</dl>
</dd></dl>
2 changes: 1 addition & 1 deletion master/notes/source_of_recompilation.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


Binary file modified master/objects.inv
Binary file not shown.
2 changes: 1 addition & 1 deletion master/py-modindex.html
@@ -228,7 +228,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


2 changes: 1 addition & 1 deletion master/search.html
@@ -225,7 +225,7 @@


<div class="version">
- master (2.2.0+gitebb200b )
+ master (2.2.0+git050a240 )
</div>


2 changes: 1 addition & 1 deletion master/searchindex.js

Large diffs are not rendered by default.
