#14990: Address feedback in Programming Mesh of Devices Tech Report (#14991)

### Ticket
[Link to Github Issue](#14990)

### Problem description
Address feedback on typos and suggestions.

### What's changed
Fix some minor typos and add more description of the line all-gather operation.


### Checklist
- [ ] Post commit CI passes
- [ ] Blackhole Post commit (if applicable)
- [ ] Model regression CI testing passes (if applicable)
- [ ] Device performance regression CI testing passes (if applicable)
- [ ] New/Existing tests provide coverage for changes
cfjchu authored Nov 13, 2024
1 parent 16123a1 commit a8ceec9
Showing 1 changed file with 20 additions and 9 deletions.
@@ -185,8 +185,8 @@ ttnn.Tensor([[[[ 2.00000, 2.00000, ..., 2.00000, 2.00000],

We now see the following:

-- 32x32 chunk with elements of 1.0 is residing in Device 11 DRAM
-- 32x32 chunk with elements of 2.0 is residing in Device 10 DRAM
+- 32x32 chunk with elements of 1.0 is residing in Device 0 DRAM
+- 32x32 chunk with elements of 2.0 is residing in Device 1 DRAM

We can also visualize how this tensor is distributed across our MeshDevice. The visualization colors the devices that hold shards of the tensor.

@@ -196,7 +196,7 @@ ttnn.visualize_mesh_device(mesh_device, tensor=mesh_tensor)
>
DeviceMesh(rows=1, cols=2):
┌──────────────────────────────┬──────────────────────────────┐
-│ Dev. ID: 11 │ Dev. ID: 10 │
+│ Dev. ID: 0 │ Dev. ID: 1 │
│ (0, 0) │ (0, 1) │
│ ttnn.Shape([1, 1, 32, 32]) │ ttnn.Shape([1, 1, 32, 32]) │
└──────────────────────────────┴──────────────────────────────┘
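One way to double-check the placement described above is to shard a known tensor and read it back onto the host through a mesh composer. The snippet below is a minimal sketch, assuming `ttnn.ConcatMeshToTensor` is available as the composer counterpart of `ttnn.ShardTensorToMesh`; names and exact values are illustrative.

```py
import torch
import ttnn

# Open a 1x2 mesh, matching the two-device example above.
mesh_device = ttnn.open_mesh_device(ttnn.MeshShape(1, 2))

# A chunk of 1.0s next to a chunk of 2.0s, matching the distribution shown above.
torch_tensor = torch.cat(
    [torch.full((1, 1, 32, 32), 1.0), torch.full((1, 1, 32, 32), 2.0)], dim=3
).to(torch.bfloat16)

# Shard along dim=3 so each device receives one 32x32 chunk.
mesh_tensor = ttnn.from_torch(
    torch_tensor,
    layout=ttnn.TILE_LAYOUT,
    device=mesh_device,
    mesh_mapper=ttnn.ShardTensorToMesh(mesh_device, dim=3),
)

# Read back to torch, concatenating the per-device shards along dim=3.
read_back = ttnn.to_torch(
    mesh_tensor, mesh_composer=ttnn.ConcatMeshToTensor(mesh_device, dim=3)
)
print(read_back.shape)                        # torch.Size([1, 1, 32, 64])
print(torch.all(read_back[..., :32] == 1.0))  # shard from device (0, 0)
print(torch.all(read_back[..., 32:] == 2.0))  # shard from device (0, 1)

ttnn.close_mesh_device(mesh_device)
```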
@@ -299,11 +299,11 @@ import ttnn
mesh_device = ttnn.open_mesh_device(ttnn.MeshShape(2, 4), mesh_type=ttnn.MeshType.Ring)

# Construct test tensor of data; 8 chunks of 32x32
-torch_tensor = torch.rand((1,1,32,128), dtype=torch.bfloat16)
+torch_tensor = torch.rand((1,1,32,256), dtype=torch.bfloat16)

# Convert to ttnn.Tensor, tilize and move onto devices across mesh DRAM
mesh_tensor = ttnn.from_torch(
-    torch_input_tensor,
+    torch_tensor,
    layout=ttnn.TILE_LAYOUT,
    device=mesh_device,
    mesh_mapper=ttnn.ShardTensorToMesh(mesh_device, dim=3),
@@ -316,19 +316,22 @@ output_tensor = ttnn.all_gather(mesh_tensor, dim=3, num_links=1)

#### 5.2.2 Programming Example: All-Gather (Line)

-This time, we'll issue the CCL Line All-Gather operation along the cluster y-axis:
+Here we issue a Line All-Gather operation along cluster axis 0 (the y-dimension, i.e. the height of the cluster).
+This kicks off four parallel CCL Line All-Gather operations, one per column in the cluster. Each "line" consists of two devices.

<img src="images/image5_line_all_gather.png" style="width:500px;"/>

-*Figure 6: Line All-Gather execution on 2x4 MeshDevice *
+*Figure 6: Line All-Gather execution on 2x4 MeshDevice*

+After the operation, each device in a column holds the concatenation along `dim=3` of the shards from every device in that column. The per-device tensor shape is `[1, 1, 32, 32]` before the operation and `[1, 1, 32, 64]` after it.

```py
import ttnn

mesh_device = ttnn.open_mesh_device(ttnn.MeshShape(2, 4), mesh_type=ttnn.MeshType.Ring)

# Construct test tensor of data; 8 chunks of 32x32
-torch_tensor = torch.rand((1,1,32,128), dtype=torch.bfloat16)
+torch_tensor = torch.rand((1,1,32,256), dtype=torch.bfloat16)

# Convert to ttnn.Tensor, tilize and move onto devices across mesh DRAM
mesh_tensor = ttnn.from_torch(
@@ -339,7 +339,15 @@ mesh_tensor = ttnn.from_torch(
)

# Execute Line All-Gather on the tensor
-output_tensor = ttnn.all_gather(mesh_tensor, dim=3, cluster_axis=0, mesh_device=mesh_device, topology=ttnn.Topology.Linear)
+output_tensor = ttnn.all_gather(
+    mesh_tensor,
+    dim=3,
+    cluster_axis=0,
+    mesh_device=mesh_device,
+    topology=ttnn.Topology.Linear,
+)

+ttnn.close_mesh_device(mesh_device)
```
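As a quick sanity check on the shapes quoted above, one possible follow-up is to inspect the per-device shards of the result. This is a minimal sketch, assuming `output_tensor` from the example and that `ttnn.get_device_tensors` returns one shard per device:

```py
# Inspect the per-device shards of the line all-gather result.
# Each of the 8 devices should now hold a [1, 1, 32, 64] tensor: its own
# [1, 1, 32, 32] shard concatenated with the shard from the other device
# in its column.
for device_index, shard in enumerate(ttnn.get_device_tensors(output_tensor)):
    print(device_index, shard.shape)
```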


