Renaming PyBuda/Buda occurrences to Forge in TT-MLIR repo (#595)
sdjordjevicTT authored Sep 4, 2024
1 parent a75fcf3 commit 95b2a90
Showing 5 changed files with 12 additions and 12 deletions.
4 changes: 2 additions & 2 deletions docs/src/build.md
@@ -211,8 +211,8 @@ If you get the following error, it means you need to install clang which you can
### `sfpi`, `trisc`, `ncrisc` build failure

```
-pybuda/third_party/tt-mlir/third_party/tt-metal/src/tt-metal/tt_metal/third_party/sfpi/compiler/bin/riscv32-unknown-elf-g++: 1: version: not found
-pybuda/third_party/tt-mlir/third_party/tt-metal/src/tt-metal/tt_metal/third_party/sfpi/compiler/bin/riscv32-unknown-elf-g++: 2: oid: not found
+tt-forge-fe/third_party/tt-mlir/third_party/tt-metal/src/tt-metal/tt_metal/third_party/sfpi/compiler/bin/riscv32-unknown-elf-g++: 1: version: not found
+tt-forge-fe/third_party/tt-mlir/third_party/tt-metal/src/tt-metal/tt_metal/third_party/sfpi/compiler/bin/riscv32-unknown-elf-g++: 2: oid: not found
size: '1961632': No such file
size: '1961632': No such file
size: '1961632': No such file
2 changes: 1 addition & 1 deletion docs/src/overview.md
@@ -202,7 +202,7 @@ level of complexity downwards for the bottom, we will define a very
aggressive TTNN backend for the MVP.
Desired Optimization List:

-- BUDA (frontend)
+- Forge-FE (frontend)

- Graph Optimizations, Constant Folding, Operation Fusion

14 changes: 7 additions & 7 deletions docs/src/specs/runtime-stitching.md
@@ -13,8 +13,8 @@ between the compiler and the runtime.

### Simple Example
```
-mod_a = pybuda.compile(PyTorch_module_a)
-mod_b = pybuda.compile(PyTorch_module_b)
+mod_a = forge.compile(PyTorch_module_a)
+mod_b = forge.compile(PyTorch_module_b)
for i in range(10):
    outs_a = mod_a(ins_a)
@@ -26,15 +26,15 @@ for i in range(10):
`mod_a` it should be completely unaware that `mod_b` will take place and vice-versa.
In order to achieve this we propose a new runtime concept called stitching:

-- pybuda invokes compile step for `mod_a`, tt-mlir compiler determines where the
+- forge invokes compile step for `mod_a`, tt-mlir compiler determines where the
inputs (`ins_a`) should live, host, device dram, device l1. tt-mlir returns
-metadata to pybuda describing where it wants the tensors to reside before invoking
+metadata to forge describing where it wants the tensors to reside before invoking
flatbuffer submission.
-- pybuda invokes compile step for `mod_b`, same happens as bullet 1
-- `mod_a` is invoked at runtime, pybuda runtime needs to inspect the compiler metadata
+- forge invokes compile step for `mod_b`, same happens as bullet 1
+- `mod_a` is invoked at runtime, forge runtime needs to inspect the compiler metadata
to determine where the tensors should live. Runtime manually invokes a new data
copy command to get the tensors to the correct memory space / correct memory address.
-- pybuda runtime invokes `mod_a` program submit
+- forge runtime invokes `mod_a` program submit
- `mod_b` is invoked at runtime, this time it might be that the compiler left
the tensor outputs in L1, so no data copy is needed to start running `mod_b`
since the inputs are already in the correct location.
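The stitching flow described in the bullets above can be sketched in a few lines. This is purely illustrative: it assumes `forge.compile` attaches placement metadata to the compiled module and that a `runtime` object exposes `copy_to` and `submit` helpers; none of these names are defined by the spec or by current forge/tt-mlir APIs.

```
# Illustrative sketch only -- assumed names: forge.compile returning
# .metadata.input_placements, plus runtime.copy_to and runtime.submit.
mod_a = forge.compile(PyTorch_module_a)   # compiler decides input placements
mod_b = forge.compile(PyTorch_module_b)

def place_inputs(tensors, placements):
    # Move each tensor to the memory space the compiler asked for
    # (host, device DRAM, device L1) if it is not already there.
    placed = []
    for tensor, placement in zip(tensors, placements):
        if tensor.location != placement:
            tensor = runtime.copy_to(tensor, placement)  # explicit data copy
        placed.append(tensor)
    return placed

# Runtime inspects compiler metadata before each program submit.
ins_a = place_inputs(ins_a, mod_a.metadata.input_placements)
outs_a = runtime.submit(mod_a, ins_a)      # mod_a program submit

# If mod_a left its outputs where mod_b expects them (e.g. already in L1),
# place_inputs copies nothing and mod_b starts immediately.
ins_b = place_inputs(outs_a, mod_b.metadata.input_placements)
outs_b = runtime.submit(mod_b, ins_b)
```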
2 changes: 1 addition & 1 deletion test/ttmlir/Dialect/TTNN/multiple_add_with_loc.mlir
@@ -1,7 +1,7 @@
// RUN: ttmlir-opt --ttir-to-ttnn-backend-pipeline %s | FileCheck %s
#any_device = #tt.operand_constraint<dram|l1|scalar|tile|any_device|any_device_tile>
#loc = loc("test_ops.py:17_0_0":0:0)
-module @pybuda_graph attributes {} {
+module attributes {} {
func.func @main(%arg0: tensor<1x32x32xf32> loc("test_ops.py:17_0_0":0:0), %arg1: tensor<1x32x32xf32> loc("test_ops.py:17_0_0":0:0), %arg2: tensor<1x32x32xf32> loc("test_ops.py:17_0_0":0:0)) -> (tensor<1x32x32xf32>, tensor<1x32x32xf32>) {
// CHECK: #[[LAYOUT_1:.*]] = #tt.layout<(d0, d1, d2) -> (d0 * 32 + d1, d2), undef, <8x8>, memref<4x4xf32, #dram>, interleaved>
%0 = tensor.empty() : tensor<1x32x32xf32> loc(#loc5)
@@ -1,7 +1,7 @@
// RUN: ttmlir-opt --ttir-to-ttnn-backend-pipeline="override-grid-sizes=add_1_0=4x4,add_2_0=4x4" %s | FileCheck %s
#any_device = #tt.operand_constraint<dram|l1|scalar|tile|any_device|any_device_tile>
#loc = loc("test_ops.py:17_0_0":0:0)
-module @pybuda_graph attributes {} {
+module attributes {} {
func.func @main(%arg0: tensor<1x32x32xf32> loc("test_ops.py:17_0_0":0:0), %arg1: tensor<1x32x32xf32> loc("test_ops.py:17_0_0":0:0), %arg2: tensor<1x32x32xf32> loc("test_ops.py:17_0_0":0:0)) -> (tensor<1x32x32xf32>, tensor<1x32x32xf32>) {
// CHECK: #[[LAYOUT_0:.*]] = #tt.layout<(d0, d1, d2) -> (d0 * 32 + d1, d2), undef, <8x8>, memref<4x4xf32, #system>, none_layout>
// CHECK: #[[LAYOUT_1:.*]] = #tt.layout<(d0, d1, d2) -> (d0 * 32 + d1, d2), undef, <4x4>, memref<8x8xf32, #dram>, interleaved>
