# #14840: use DRAM config for large-size tensors (#15204)
### Ticket
Link to Github Issue #14840

### Problem description
Larger input shapes exceed the available L1 memory, causing out-of-bounds allocation failures when the output is placed in L1.
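To make the failure mode concrete, a back-of-the-envelope sketch of the tensor footprints involved. The per-core L1 size and core count below are illustrative assumptions for this sketch, not values taken from the commit; a `(batch, 16, 384, 384)` float32 tensor needs roughly `batch * 9` MiB, and with both input and output resident, the larger batch sizes no longer fit comfortably in L1, while DRAM has ample headroom.

```python
def tensor_bytes(shape, dtype_bytes=4):
    """Footprint of a dense tensor in bytes (float32 by default)."""
    n = 1
    for dim in shape:
        n *= dim
    return n * dtype_bytes

# Assumed figures for illustration only (not from this commit):
ASSUMED_L1_BYTES_PER_CORE = 1464 * 1024  # assumption: ~1.4 MiB L1 per Tensix core
ASSUMED_NUM_CORES = 64                   # assumption: 8x8 compute grid
total_l1 = ASSUMED_L1_BYTES_PER_CORE * ASSUMED_NUM_CORES

for batch in (6, 7, 8):
    size = tensor_bytes((batch, 16, 384, 384))
    # Input and output must both be resident during the op.
    print(
        f"batch={batch}: tensor ~{size / 2**20:.0f} MiB, "
        f"in+out ~{2 * size / 2**20:.0f} MiB, "
        f"assumed total L1 ~{total_l1 / 2**20:.0f} MiB"
    )
```

Under these assumed figures, input plus output at batch sizes 7 and 8 already exceed the aggregate L1 capacity, which is why the test writes the output with a DRAM memory config.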

### What's changed
Added a test case for `ttnn.mul` that exercises large tensor shapes with a DRAM output memory config.

### Checklist
- [ ] Post commit CI passes
https://github.com/tenstorrent/tt-metal/actions/runs/11985713156/job/33419104115
- [ ] Blackhole Post commit (if applicable)
- [ ] Model regression CI testing passes (if applicable)
- [ ] Device performance regression CI testing passes (if applicable)
- [x] New/Existing tests provide coverage for changes
KalaivaniMCW authored Nov 23, 2024
1 parent 29792c0 commit 5ad4e34
Showing 1 changed file with 18 additions and 0 deletions.
`tests/ttnn/unit_tests/operations/eltwise/test_mul.py` (18 additions, 0 deletions):

```diff
@@ -97,3 +97,21 @@ def test_multiply_int32_with_scalar(device, input_a, scalar):
     output = ttnn.to_torch(output)
 
     assert_with_pcc(torch_output_tensor, output, 0.9999)
+
+
+# #14840: use DRAM config
+@pytest.mark.parametrize("output_memory_config", [ttnn.DRAM_MEMORY_CONFIG])
+@pytest.mark.parametrize("scalar", [0.125])
+@pytest.mark.parametrize("batch_size", [6, 7, 8])
+def test_multiply_with_scalar_sharded(device, scalar, batch_size, output_memory_config):
+    torch.manual_seed(0)
+    torch_input_tensor_a = torch.rand((batch_size, 16, 384, 384), dtype=torch.float32)
+    torch_output_tensor = scalar * torch_input_tensor_a
+
+    input_tensor_a = ttnn.from_torch(
+        torch_input_tensor_a, layout=ttnn.TILE_LAYOUT, memory_config=ttnn.L1_MEMORY_CONFIG, device=device
+    )
+    output = ttnn.mul(input_tensor_a, scalar, memory_config=output_memory_config)
+    output = ttnn.to_torch(output)
+
+    assert_with_pcc(torch_output_tensor, output, 0.9999)
```
