
Commit

Update doc from commit a1bc3c6
torchxlabot2 committed Oct 12, 2023
1 parent f7c3024 commit 0c649eb
Showing 3 changed files with 9 additions and 11 deletions.
12 changes: 5 additions & 7 deletions master/_modules/torch_xla/core/xla_model.html
@@ -389,7 +389,7 @@ Source code for torch_xla.core.xla_model


def parse_xla_device(device):
-  m = re.match(r'(CPU|TPU|GPU|ROCM|CUDA|XPU|NEURON):(\d+)$', device)
+  m = re.match(r'(CPU|TPU|GPU|XPU|NEURON):(\d+)$', device)
  if m:
    return (m.group(1), int(m.group(2)))

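For reference, a minimal, self-contained sketch of the parsing behavior implied by the updated pattern (the function is re-implemented here only for illustration; the device strings are examples):

import re

def parse_xla_device(device):
  # Same pattern as the updated line above: CUDA/ROCM prefixes no longer match.
  m = re.match(r'(CPU|TPU|GPU|XPU|NEURON):(\d+)$', device)
  if m:
    return (m.group(1), int(m.group(2)))

print(parse_xla_device('TPU:0'))   # ('TPU', 0)
print(parse_xla_device('GPU:1'))   # ('GPU', 1)
print(parse_xla_device('CUDA:0'))  # None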
@@ -407,9 +407,7 @@ Source code for torch_xla.core.xla_model
    The list of device strings.
  """
  xla_devices = _DEVICES.value
-  devkind = [devkind] if devkind else [
-      'TPU', 'GPU', 'XPU', 'NEURON', 'CPU', 'CUDA', 'ROCM'
-  ]
+  devkind = [devkind] if devkind else ['TPU', 'GPU', 'XPU', 'NEURON', 'CPU']
  for kind in devkind:
    kind_devices = []
    for i, device in enumerate(xla_devices):
@@ -501,8 +499,8 @@ Source code for torch_xla.core.xla_model
    n (int, optional): The specific instance (ordinal) to be returned. If
      specified, the specific XLA device instance will be returned. Otherwise
      the first device of `devkind` will be returned.
-    devkind (string..., optional): If specified, one of `TPU`, `CUDA`, `XPU`
-      `NEURON`, `ROCM` or `CPU`.
+    devkind (string..., optional): If specified, one of `TPU`, `GPU`, `XPU`
+      `NEURON` or `CPU`.

  Returns:
    A `torch.device` with the requested instance.
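A usage sketch for `xla_device` with the documented parameters (whether a given `devkind` is actually available depends on the environment):

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                   # first available XLA device
tpu0 = xm.xla_device(n=0, devkind='TPU')   # explicit ordinal and kind
t = torch.ones(2, 2, device=device)        # tensor placed on the XLA device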
@@ -537,7 +535,7 @@ Source code for torch_xla.core.xla_model
      real device.

  Returns:
-    A string representation of the hardware type (`CPU`, `TPU`, `XPU`, `NEURON`, `GPU`, `CUDA`, `ROCM`)
+    A string representation of the hardware type (`CPU`, `TPU`, `XPU`, `NEURON`, `GPU`)
    of the given device.
  """
  real_device = _xla_real_device(device)
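And a sketch of querying the hardware type, which after this change reports one of the values listed in the docstring:

import torch_xla.core.xla_model as xm

device = xm.xla_device()
print(xm.xla_device_hw(device))  # e.g. 'TPU', 'GPU' or 'CPU', depending on the runtime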
6 changes: 3 additions & 3 deletions master/index.html
@@ -777,8 +777,8 @@ PyTorch/XLA API
<li><p><strong>n</strong> (<em>python:int</em><em>, </em><em>optional</em>) – The specific instance (ordinal) to be returned. If
specified, the specific XLA device instance will be returned. Otherwise
the first device of <cite>devkind</cite> will be returned.</p></li>
-<li><p><strong>devkind</strong> (<em>string...</em><em>, </em><em>optional</em>) – If specified, one of <cite>TPU</cite>, <cite>CUDA</cite>, <cite>XPU</cite>
-<cite>NEURON</cite>, <cite>ROCM</cite> or <cite>CPU</cite>.</p></li>
+<li><p><strong>devkind</strong> (<em>string...</em><em>, </em><em>optional</em>) – If specified, one of <cite>TPU</cite>, <cite>GPU</cite>, <cite>XPU</cite>
+<cite>NEURON</cite> or <cite>CPU</cite>.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
@@ -816,7 +816,7 @@ PyTorch/XLA API
real device.</p>
</dd>
<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>A string representation of the hardware type (<cite>CPU</cite>, <cite>TPU</cite>, <cite>XPU</cite>, <cite>NEURON</cite>, <cite>GPU</cite>, <cite>CUDA</cite>, <cite>ROCM</cite>)
+<dd class="field-even"><p>A string representation of the hardware type (<cite>CPU</cite>, <cite>TPU</cite>, <cite>XPU</cite>, <cite>NEURON</cite>, <cite>GPU</cite>)
of the given device.</p>
</dd>
</dl>

0 comments on commit 0c649eb
