Releases: tenstorrent/tt-metal

v0.53.0-rc11

09 Oct 02:18
9e9dc00
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11246610563

📦 Uncategorized

  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax (see the sketch after this list)
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
  • Vanilla Unet conv unit_test
  • #11740: Extend post commit coverage and add sweep test
  • #13269: Revise moreh_norm, moreh_norm_backward operations
  • #13140: Cleanup Binary Backward ops
  • #13315: Revise moreh_bmm, moreh_bmm_backward operations
  • #0: TG Llama3-70b - fix frequent tests
  • Revert "#11962: remove uint8 unpack reconfig code"
  • Llama3.1-8B continuous batching + Paged Attention Support
  • #0: Remove demo output files from Llama3.1-8B
  • #11592: use the semaphore indices returned by CreateSemaphore
  • #9370: removed ndpcc workaround and debug code in sdpa decode and re-enabled CI
  • #0: Bump trace region size to 20MB for T3K LLAMA2
  • Not holding state for freshening profiler logs
  • #13136: Consolidate all_gather and line_all_gather to common api
  • #11005: Added CreateKernelFromString()
  • #11622: sweep concat traces
  • #0: Bump ttnn bert perf threshold to account for recent refactoring
  • #0: fix CCL nightly and frequent test regression suites
  • #13142: Add documentation for device ops, memory config
  • #13128: Add cmake options to control what tests get built
  • [skip ci] Update CODEOWNERS for CMakeLists.txt
  • Update matrix_engine.md
  • #13258: build_metal.sh enhancements
  • Flash decode improvements r3
  • #0: shortened flash decode tests to avoid potential timeout in fast dispatch
  • #12632: Migrate moreh_layer_norm operation from tt_eager to ttnn
  • #11844: Add dispatch_s for asynchronously sending go signals
  • #12805: Migrate moreh_sum_backward operation from tt_eager to ttnn
  • #13187: revise moreh_mean and moreh_mean_backward
  • #12687: port moreh_group_norm and moreh_group_norm_backward from tt_dnn to ttnn
  • #12694 Refactor moreh_linear and moreh_linear_backward
  • #13246: Remove unary_backward_op.hpp
  • #0: integrate distributed sharded layernorm with llama-tg
  • Add support for matmul 1D having L1 sharded weights
  • #11791: linker script cleanups
  • #0: Add copy sweep
  • #12214: refactor moreh_sgd from deprecated to ttnn
  • [Nightly fast dispatch CI] Fix Llama3.1-8B tests running out of memory
  • Update perf target for one falcon7b config due to CI variation
  • Add bitwise ops sweeps, add gen_rand_bitwise_left_shift function
  • Multiple watcher-related updates
  • #11621: add filler sweeps for expand, fill, split_with_sizes, index_select and .t
  • #13363: Surface job errors where the "Set up runner" step does not complete successfully
  • #13127: Remove shape_without_padding() pybinding and usage
  • #11208: Refactor ProgramCache to remove nested type erasure
  • #11208: Slotmap data structure for creating resource pools
  • #13365: added program caching for page tensor for flash decode
  • Update llama ttft in README.md
  • #0: Add tech report for inf/nan handling
  • #11403: SubMesh Support + Porting/Stamping T3K Tests to Galaxy
  • Add new ttnn sweeps
  • Remove profiler core flat id look up
  • #11789: Fix firmware/kernel padding/alignment
  • #8534: Publish tt-metal docs to the central site
  • #0: Sweeps Logger Fixes
  • Mchiou/13011 dump firmware and system logs if ci jobs fail
  • #13419: Handle cases where a GitHub timeout on a job cuts off the data for a test in a JUnit XML, leaving no data to use
  • #12605: Add governor notes and move models steps into separate steps
  • #13254: switch pgm dispatch to use trace, add it to CI
  • #10016: jit_build: link substitutes, tdma_xmov, noc
  • #11208: Slotmap data structure for creating resource pools
  • #0: Dispatch_s + Launch Message Ring Buffer Bugfixes
  • #0: Reduce copy sweep to cover only bf16
  • #13394: Galaxy 2cq support
  • #0: Fix ncrisc code overflow problem
  • Add more pipelines to top-level "Choose your pipeline" workflows
  • #13127: Update ttnn::Shape struct to maintain API parity with existing tt::tt_metal::LegacyShape usages
  • #0: SegFormer on n150 - functional
  • #7091: Add git commit runbook to CONTRIBUTING.md
  • Moving DRAM/L1_UNRESERVED_BASE into HAL
  • #11401: Add supplementary tensor parallel example to regression
  • #13432: fix t3k ethernet tests
  • #0: fix mesh device fixture selection for test_distributed_layernorm
  • #13454: Refactor API for MeshDevice::enable_async
  • deprecate JAWBRIDGE
  • #8488: Update activation list in doc
  ...
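
The numeric-stable softmax entry flagged above refers, in all likelihood, to the standard max-subtraction trick: softmax(x) = exp(x - m) / sum(exp(x - m)) for any constant m, and choosing m = max(x) keeps every exponent at or below zero so exp() cannot overflow. A minimal NumPy illustration of the trick (plain NumPy, not tt-metal code; the ttnn-side flag name is not shown here):

```python
import numpy as np

def softmax_naive(x):
    # Overflows for large inputs: np.exp(1000.0) -> inf, inf/inf -> nan.
    e = np.exp(x)
    return e / e.sum()

def softmax_stable(x):
    # Subtracting the row max leaves the result mathematically unchanged
    # but keeps every exponent <= 0, so exp() cannot overflow.
    e = np.exp(x - x.max())
    return e / e.sum()

x = np.array([1000.0, 1001.0, 1002.0])
print(softmax_naive(x))   # [nan nan nan] -- overflow
print(softmax_stable(x))  # [0.09003057 0.24472847 0.66524096]
```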

v0.53.0-rc10

08 Oct 02:20
58e455b
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11226937372

📦 Uncategorized

  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
  • Vanilla Unet conv unit_test
  • #11740: Extend post commit coverage and add sweep test
  • #13269: Revise moreh_norm, moreh_norm_backward operations
  • #13140: Cleanup Binary Backward ops
  • #13315: Revise moreh_bmm, moreh_bmm_backward operations
  • #0: TG Llama3-70b - fix frequent tests
  • Revert "#11962: remove uint8 unpack reconfig code"
  • Llama3.1-8B continuous batching + Paged Attention Support
  • #0: Remove demo output files from Llama3.1-8B
  • #11592: use the semaphore indices returned by CreateSemaphore
  • #9370: removed ndpcc workaround and debug code in sdpa decode and re-enabled CI
  • #0: Bump trace region size to 20MB for T3K LLAMA2
  • Not holding state for freshening profiler logs
  • #13136: Consolidate all_gather and line_all_gather to common api
  • #11005: Added CreateKernelFromString()
  • #11622: sweep concat traces
  • #0: Bump ttnn bert perf threshold to account for recent refactoring
  • #0: fix CCL nightly and frequent test regression suites
  • #13142: Add documentation for device ops, memory config
  • #13128: Add cmake options to control what tests get built
  • [skip ci] Update CODEOWNERS for CMakeLists.txt
  • Update matrix_engine.md
  • #13258: build_metal.sh enhancements
  • Flash decode improvements r3
  • #0: shortened flash decode tests to avoid potential timeout in fast dispatch
  • #12632: Migrate moreh_layer_norm operation from tt_eager to ttnn
  • #11844: Add dispatch_s for asynchronously sending go signals
  • #12805: Migrate moreh_sum_backward operation from tt_eager to ttnn
  • #13187: revise moreh_mean and moreh_mean_backward
  • #12687: port moreh_group_norm and moreh_group_norm_backward from tt_dnn to ttnn
  • #12694 Refactor moreh_linear and moreh_linear_backward
  • #13246: Remove unary_backward_op.hpp
  • #0: integrate distributed sharded layernorm with llama-tg
  • Add support for matmul 1D having L1 sharded weights
  • #11791: linker script cleanups
  • #0: Add copy sweep
  • #12214: refactor moreh_sgd from deprecated to ttnn
  • [Nightly fast dispatch CI] Fix Llama3.1-8B tests running out of memory
  • Update perf target for one falcon7b config due to CI variation
  • Add bitwise ops sweeps, add gen_rand_bitwise_left_shift function
  • Multiple watcher-related updates
  • #11621: add filler sweeps for expand, fill, split_with_sizes, index_select and .t
  • #13363: Surface job errors where the "Set up runner" step does not complete successfully
  • #13127: Remove shape_without_padding() pybinding and usage
  • #11208: Refactor ProgramCache to remove nested type erasure
  • #11208: Slotmap data structure for creating resource pools (sketched after this list)
  • #13365: added program caching...
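
The "#11208: Slotmap data structure for creating resource pools" entry above names a well-known pattern: slots are reused from a free list, and each key carries a generation counter so handles to a reclaimed slot are detected as stale. A minimal Python sketch of the idea, purely illustrative and unrelated to the actual tt-metal implementation:

```python
class SlotMap:
    """Generational slotmap: keys are (index, generation) pairs."""

    def __init__(self):
        self._slots = []  # (generation, value) per slot
        self._free = []   # slot indices available for reuse

    def insert(self, value):
        if self._free:
            idx = self._free.pop()
            gen = self._slots[idx][0]
            self._slots[idx] = (gen, value)
        else:
            idx, gen = len(self._slots), 0
            self._slots.append((gen, value))
        return (idx, gen)

    def get(self, key):
        idx, gen = key
        cur_gen, value = self._slots[idx]
        return value if cur_gen == gen else None  # stale key -> None

    def remove(self, key):
        idx, gen = key
        if self._slots[idx][0] == gen:
            # Bump the generation so old keys to this slot go stale.
            self._slots[idx] = (gen + 1, None)
            self._free.append(idx)

pool = SlotMap()
k1 = pool.insert("program A")
pool.remove(k1)
k2 = pool.insert("program B")  # reuses slot 0 under a new generation
print(pool.get(k1))  # None -- stale handle rejected
print(pool.get(k2))  # program B
```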

v0.53.0-rc9

07 Oct 02:19
f85ceb7
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11207084989

📦 Uncategorized

  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
  • Vanilla Unet conv unit_test
  • #11740: Extend post commit coverage and add sweep test
  • #13269: Revise moreh_norm, moreh_norm_backward operations
  • #13140: Cleanup Binary Backward ops
  • #13315: Revise moreh_bmm, moreh_bmm_backward operations
  • #0: TG Llama3-70b - fix frequent tests
  • Revert "#11962: remove uint8 unpack reconfig code"
  • Llama3.1-8B continuous batching + Paged Attention Support (concept sketched after this list)
  • #0: Remove demo output files from Llama3.1-8B
  • #11592: use the semaphore indices returned by CreateSemaphore
  • #9370: removed ndpcc workaround and debug code in sdpa decode and re-enabled CI
  • #0: Bump trace region size to 20MB for T3K LLAMA2
  • Not holding state for freshening profiler logs
  • #13136: Consolidate all_gather and line_all_gather to common api
  • #11005: Added CreateKernelFromString()
  • #11622: sweep concat traces
  • #0: Bump ttnn bert perf threshold to account for recent refactoring
  • #0: fix CCL nightly and frequent test regression suites
  • #13142: Add documentation for device ops, memory config
  • #13128: Add cmake options to control what tests get built
  • [skip ci] Update CODEOWNERS for CMakeLists.txt
  • Update matrix_engine.md
  • #13258: build_metal.sh enhancements
  • Flash decode improvements r3
  • #0: shortened flash decode tests to avoid potential timeout in fast dispatch
  • #12632: Migrate moreh_layer_norm operation from tt_eager to ttnn
  • #11844: Add dispatch_s for asynchronously sending go signals
  • #12805: Migrate moreh_sum_backward operation from tt_eager to ttnn
  • #13187: revise moreh_mean and moreh_mean_backward
  • #12687: port moreh_group_norm and moreh_group_norm_backward from tt_dnn to ttnn
  • #12694 Refactor moreh_linear and moreh_linear_backward
  • #13246: Remove unary_backward_op.hpp
  • #0: integrate distributed sharded layernorm with llama-tg
  • Add support for matmul 1D having L1 sharded weights
  • #11791: linker script cleanups
  • #0: Add copy sweep
  • #12214: refactor moreh_sgd from deprecated to ttnn
  • [Nightly fa...
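
The "Llama3.1-8B continuous batching + Paged Attention Support" entry above combines two serving techniques: requests enter and leave a running batch independently, and each sequence's KV cache is stored in fixed-size pages reached through a per-sequence page table rather than one contiguous buffer, so memory is allocated on demand. A toy sketch of the page-table indexing only; the block size, names, and layout here are invented for illustration and are not taken from tt-metal:

```python
import numpy as np

BLOCK = 64      # tokens per KV-cache page (illustrative choice)
HEAD_DIM = 128  # illustrative head dimension

# Physical pool of pages shared by all sequences.
kv_pool = np.zeros((16, BLOCK, HEAD_DIM), dtype=np.float32)

# Per-sequence page table: logical block index -> physical page.
# Pages need not be contiguous, so sequences can grow freely.
page_table = {"seq0": [3, 7], "seq1": [5]}

def read_kv(seq, token_pos):
    """Fetch the cached KV vector for one token of one sequence."""
    page = page_table[seq][token_pos // BLOCK]  # which physical page
    return kv_pool[page, token_pos % BLOCK]     # offset within the page

vec = read_kv("seq0", 70)  # token 70 -> logical block 1 -> physical page 7
print(vec.shape)           # (128,)
```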

v0.53.0-rc8

05 Oct 02:19
3d33e8d
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11189307373

📦 Uncategorized

  • [skip ci] #0: ViT report edits
  • #12879: Use () so that workflow_call actually captures the call when we trigger off completed workflow runs and add them to workflows to properly capture
  • [skip ci] #13019 Create remove-stale-branches.yaml
  • #13019 Update remove-stale-branches.yaml
  • Add tiny tile support for Tensor, matmul
  • [skip ci] #13019 Add default recipient
  • build tt metal in docker in CI
  • Revert "build tt metal in docker in CI"
  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm (reference math after this list)
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
  • Vanilla Unet conv unit_test
  • #11740: Extend post commit coverage and add sweep test
  • #13269: Revise moreh_norm, moreh_norm_backward operations
  • #13140: Cleanup Binary Backward ops
  • #13315: Revise moreh_bmm, moreh_bmm_backward operations
  • #0: TG Llama3-70b - fix frequent tests
  • Revert "#11962: remove uint8 unpack reconfig code"
  • Llama3.1-8B continuous batching + Paged Attention Support
  • #0: Remove demo output files from Llama3.1-8B
  • #11592: use the semaphore indices returned by CreateSemaphore
  • #9370: removed ndpcc workaround and debug code in sdpa decode and re-enabled CI
  • #0: Bump trace region size to 20MB for T3K LLAMA2
  • Not holding state for freshening profiler logs
  • #13136: Consolidate all_gather and line_all_gather to common api
  • #11005: Added CreateKernelFromString()
  • #11622: sweep concat traces
  • #0: Bump ttnn bert perf threshold to account for recent refactoring
  • #0: fix CCL nightly and frequent test regression suites
  • #13142: Add documentation for device ops, memory config
  • #13128: Add cmake options to control what tests get built
  • [skip ci] Update CODEOWNERS for CMakeLists.txt
  • Update matrix_engine.md
  • #13258: build_metal.sh enhancements
  • Flash decode improvements r3
  • #0: shortened flash decode tests to avoid potential timeout in fast dispatch
  • #12632: Migrate moreh_layer_norm operation from tt_eager to ttnn
  • #11844: Add dispatch_s for asynchronously sending go signals
  • #12805: Migrate moreh_sum_backward operation from tt_eager to ttnn
  • #13187: revise moreh_mean and `moreh_me...
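
For "#10033: Add forward support for gcd and lcm" (flagged above), the forward math is the classical identity lcm(a, b) = |a*b| / gcd(a, b), with gcd given by the Euclidean algorithm. A PyTorch reference computation of the kind such elementwise ops are typically validated against; the ttnn-side call is not shown and its exact signature is not assumed here:

```python
import torch

a = torch.tensor([12, 18, 7], dtype=torch.int32)
b = torch.tensor([8, 24, 5], dtype=torch.int32)

# Elementwise golden reference.
print(torch.gcd(a, b))  # tensor([4, 6, 1], dtype=torch.int32)
print(torch.lcm(a, b))  # tensor([24, 72, 35], dtype=torch.int32)

# lcm via the gcd identity, elementwise.
lcm = (a * b).abs() // torch.gcd(a, b)
print(torch.equal(lcm, torch.lcm(a, b)))  # True
```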

v0.53.0-rc7

04 Oct 20:26
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11185971169

📦 Uncategorized

  • Aliu/tech reports
  • #11332: Move ttnn/examples to ttnn/ttnn/examples so users can call them directly; they are not meant to be part of the ttnn API
  • Add sweeps for sign, deg2rad, rad2deg, relu6 (PCC check sketched after this list)
  • Revert "#10016: jit_build: link substitutes, tdma_xmov, noc"
  • #12952: Update test_ccl_on_tg.cpp to work on TGG as well as TG
  • [skip ci] #0: ViT report edits
  • #12879: Use () so that workflow_call actually captures the call when we trigger off completed workflow runs and add them to workflows to properly capture
  • [skip ci] #13019 Create remove-stale-branches.yaml
  • #13019 Update remove-stale-branches.yaml
  • Add tiny tile support for Tensor, matmul
  • [skip ci] #13019 Add default recipient
  • build tt metal in docker in CI
  • Revert "build tt metal in docker in CI"
  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
  • Vanilla Unet conv unit_test
  • #11740: Extend post commit coverage and add sweep test
  • #13269: Revise moreh_norm, moreh_norm_backward operations
  • #13140: Cleanup Binary Backward ops
  • #13315: Revise moreh_bmm, moreh_bmm_backward operations
  • #0: TG Llama3-70b - fix frequent tests
  • Revert "#11962: remove uint8 unpack reconfig code"
  • Llama3.1-8B continuous batching + Paged Attention Support
  • #0: Remove demo output files from Llama3.1-8B
  • #11592: use the semaphore indices returned by CreateSemaphore
  • #9370: removed ndpcc workaround and debug code in sdpa decode and re-enabled CI
  • #0: Bump trace region size to 20MB for T3K LLAMA2
  • Not holding state for freshening profiler logs
  • #13136: Consolidate all_gather and line_all_gather to common api
  • #11005: Added CreateKernelFromString()
  • #11622: sweep concat traces
  • #0: Bump ttnn bert perf threshold to account for recent refactoring
  • #0: fix CCL nightly and frequent test regression suites
  • #13142: Add documentation for device ops, memory config
  • #13128: Add cmake options to control what tests get built
  • [skip ci] Update CODEOWNERS for CMakeLists.txt
  • Update matrix_engine.md
  • #13258: build_metal.sh enhancements
  • Flash decode imp...
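
The sweep entries above ("Add sweeps for sign, deg2rad, rad2deg, relu6" and friends) follow the pattern these notes reference throughout: run an op over many input configurations and compare against a framework golden with a Pearson correlation coefficient (PCC) bound, as in "#0: Fix PCC to correct bound". A small self-contained sketch of that pass criterion; the threshold and helper are illustrative, not the sweep framework's actual code:

```python
import torch

def pcc(golden, actual):
    """Pearson correlation coefficient between two tensors."""
    stacked = torch.stack([golden.flatten().double(),
                           actual.flatten().double()])
    return torch.corrcoef(stacked)[0, 1].item()

# Golden from torch; `actual` stands in for a device result.
x = torch.linspace(-180.0, 180.0, steps=1024)
golden = torch.deg2rad(x)
actual = golden + 1e-4 * torch.randn_like(golden)  # simulated device noise

assert pcc(golden, actual) >= 0.999, "sweep point fails the PCC bound"
print(f"PCC = {pcc(golden, actual):.6f}")
```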

v0.53.0-rc6

03 Oct 02:19
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11154339885

📦 Uncategorized

  • #12883: Add initial unit tests for N300
  • #12499: Migrate moreh_norm, moreh_norm_backward operations from tt_eager to ttnn
  • #12321: Migrate moreh_bmm, moreh_bmm_backward operations from tt_eager to ttnn
  • Add more eltwise sweeps, add new functions in sweep_framework/utils.py
  • #12690: Port moreh_softmax and moreh_softmax_backward to ttnn
  • #0: Bump falcon7b device perf test because we have a real bump
  • Aliu/tech reports
  • #11332: Move ttnn/examples to ttnn/ttnn/examples so users can call them directly; they are not meant to be part of the ttnn API
  • Add sweeps for sign, deg2rad, rad2deg, relu6
  • Revert "#10016: jit_build: link substitutes, tdma_xmov, noc"
  • #12952: Update test_ccl_on_tg.cpp to work on TGG as well as TG
  • [skip ci] #0: ViT report edits
  • #12879: Use () so that workflow_call actually captures the call when we trigger off completed workflow runs and add them to workflows to properly capture
  • [skip ci] #13019 Create remove-stale-branches.yaml
  • #13019 Update remove-stale-branches.yaml
  • Add tiny tile support for Tensor, matmul
  • [skip ci] #13019 Add default recipient
  • build tt metal in docker in CI
  • Revert "build tt metal in docker in CI"
  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
  • Vanilla Unet conv unit_test
  • #11740: Extend post commit coverage and add sweep test
  • #13269: Revise moreh_norm, moreh_norm_backward operations
  • #13140: Cleanup Binary Backward ops
  • #13315: Revise moreh_bmm, moreh_bmm_backward operations
  • #0: TG Llama3-70b - fix frequent tests
  • Revert "#11962: remove uint8 unpack reconfig code"
  • Llama3.1-8B continuous batching + Paged Attention Support
  • #0: Remove demo output files from Llama3.1-8B
  • #11592: use the semaphore indices returned by CreateSemaphore
  • #9370: removed ndpcc workaround and debug code in sdpa decode and re-enabled CI
  • #0: Bump trace region size to 20MB for T3K LLAMA2
  • Not holding state for freshening profiler logs
  • #13136: Consolidate all_gather and line_all_gather to common api
  • #11005: Added CreateKernelFromString()
  • #11622: sweep concat traces
    ...

v0.53.0-rc5

02 Oct 02:17
ef33315
Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11136291724

📦 Uncategorized

  • #12883: Add initial unit tests for N300
  • #12499: Migrate moreh_norm, moreh_norm_backward operations from tt_eager to ttnn
  • #12321: Migrate moreh_bmm, moreh_bmm_backward operations from tt_eager to ttnn
  • Add more eltwise sweeps, add new functions in sweep_framework/utils.py
  • #12690: Port moreh_softmax and moreh_softmax_backward to ttnn
  • #0: Bump falcon7b device perf test because we have a real bump
  • Aliu/tech reports
  • #11332: Move ttnn/examples to ttnn/ttnn/examples so users can call them directly; they are not meant to be part of the ttnn API
  • Add sweeps for sign, deg2rad, rad2deg, relu6
  • Revert "#10016: jit_build: link substitutes, tdma_xmov, noc"
  • #12952: Update test_ccl_on_tg.cpp to work on TGG as well as TG
  • [skip ci] #0: ViT report edits
  • #12879: Use () so that workflow_call actually captures the call when we trigger off completed workflow runs and add them to workflows to properly capture
  • [skip ci] #13019 Create remove-stale-branches.yaml
  • #13019 Update remove-stale-branches.yaml
  • Add tiny tile support for Tensor, matmul
  • [skip ci] #13019 Add default recipient
  • build tt metal in docker in CI
  • Revert "build tt metal in docker in CI"
  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
  • Vanilla Unet conv unit_test
  • #11740: Extend post commit coverage and add sweep test
  • #13269: Revise moreh_norm, moreh_norm_backward operations
  • #13140: Cleanup Binary Backward ops
  • #13315: Revise moreh_bmm, moreh_bmm_backward operations
  • #0: TG Llama3-70b - fix frequent tests
  • Revert "#11962: remove uint8 unpack reconfig code"
  • Llama3.1-8B continuous batching + Paged Attention Support
  • #0: Remove demo output files from Llama3.1-8B
  • #11592: use the semaphore indices returned by CreateSemaphore
  • #9370: removed ndpcc workaround and debug code in sdpa decode and re-enabled CI
  • #0: Bump trace region size to 20MB for T3K LLAMA2
  • Not holding state for freshening profiler logs
  • #13136: Consolidate all_gather and line_all_gather to common api
  • #11005: Added CreateKernelFromString()
  • #11622: sweep concat traces
    ...
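
A minimal sketch of the get_legacy_shape → shape.with_tile_padding() migration flagged in #13127 above. This assumes a TILE_LAYOUT tensor whose logical shape is not tile-aligned; the device id and shapes are illustrative only, and the comments reflect the expected behavior rather than verified output.

```python
import torch
import ttnn

device = ttnn.open_device(device_id=0)

# A 3x3 tensor is padded up to a full 32x32 tile in TILE_LAYOUT.
x = ttnn.from_torch(
    torch.randn(1, 1, 3, 3),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)

print(x.shape)                      # logical shape: [1, 1, 3, 3]
# Old (being phased out): x.get_legacy_shape() returned the padded shape.
print(x.shape.with_tile_padding())  # padded shape:  [1, 1, 32, 32]

ttnn.close_device(device)
```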
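
A minimal sketch of the numerically stable softmax option, also flagged above. The assumption here is that the option is exposed to Python as a `numeric_stable` keyword that subtracts the row max before exponentiating (the classic stable-softmax trick), so large-magnitude inputs no longer overflow.

```python
import torch
import ttnn

device = ttnn.open_device(device_id=0)

# Large-magnitude logits would overflow a naive exp() in bfloat16.
logits = ttnn.from_torch(
    torch.randn(1, 1, 32, 32) * 100.0,
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)

probs = ttnn.softmax(logits, dim=-1, numeric_stable=True)
print(ttnn.to_torch(probs).sum(dim=-1))  # each row should sum to ~1

ttnn.close_device(device)
```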

v0.53.0-rc4

01 Oct 02:18
e1f2f08
v0.53.0-rc4 Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog will now follow, showing the changes from the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11116998205

📦 Uncategorized

  • #12883: Add initial unit tests for N300
  • #12499: Migrate moreh_norm, moreh_norm_backward operations from tt_eager to ttnn
  • #12321: Migrate moreh_bmm, moreh_bmm_backward operations from tt_eager to ttnn
  • Add more eltwise sweeps, add new functions in sweep_framework/utils.py
  • #12690: Port moreh_softmax and moreh_softmax_backward to ttnn
  • #0: Bump falcon7b device perf test because we have a real bump
  • Aliu/tech reports
  • #11332: Move ttnn/examples to ttnn/ttnn/examples so we can enable directly calling them for users, but not meant to be part of ttnn API
  • Add sweeps for sign, deg2rad, rad2deg, relu6
  • Revert "#10016: jit_build: link substitutes, tdma_xmov, noc"
  • #12952: Update test_ccl_on_tg.cpp to work on TGG as well as TG
  • [skip ci] #0: ViT report edits
  • #12879: Use () so that workflow_call actually captures the call when we trigger off completed workflow runs and add them to workflows to properly capture
  • [skip ci] #13019 Create remove-stale-branches.yaml
  • #13019 Update remove-stale-branches.yaml
  • Add tiny tile support for Tensor, matmul
  • [skip ci] #13019 Add default recipient
  • build tt metal in docker in CI
  • Revert "build tt metal in docker in CI"
  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level (a hedged sketch follows this list)
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
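
A minimal sketch of driving the newly exposed CCL topology from Python (#12985, flagged above), assuming the enum surfaces as `ttnn.Topology` (with values like `Ring` and `Linear`) and that `ttnn.all_gather` accepts a `topology` keyword on a multi-device mesh; the 1x2 mesh stands in for an n300 pair and is illustrative only.

```python
import torch
import ttnn

# Open a 1x2 mesh (e.g. the two chips of an n300) and shard a tensor across it.
mesh = ttnn.open_mesh_device(ttnn.MeshShape(1, 2))

x = ttnn.from_torch(
    torch.randn(1, 1, 32, 64),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=mesh,
    mesh_mapper=ttnn.ShardTensorToMesh(mesh, dim=3),
)

# Gather the shards back along dim 3 over a line (rather than ring) topology.
y = ttnn.all_gather(x, dim=3, topology=ttnn.Topology.Linear)

ttnn.close_mesh_device(mesh)
```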

v0.53.0-rc3

30 Sep 02:18
v0.53.0-rc3 Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog will now follow, showing the changes from the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11097804939

📦 Uncategorized

  • #12883: Add initial unit tests for N300
  • #12499: Migrate moreh_norm, moreh_norm_backward operations from tt_eager to ttnn
  • #12321: Migrate moreh_bmm, moreh_bmm_backward operations from tt_eager to ttnn
  • Add more eltwise sweeps, add new functions in sweep_framework/utils.py
  • #12690: Port moreh_softmax and moreh_softmax_backward to ttnn
  • #0: Bump falcon7b device perf test because we have a real bump
  • Aliu/tech reports
  • #11332: Move ttnn/examples to ttnn/ttnn/examples so we can enable directly calling them for users, but not meant to be part of ttnn API
  • Add sweeps for sign, deg2rad, rad2deg, relu6
  • Revert "#10016: jit_build: link substitutes, tdma_xmov, noc"
  • #12952: Update test_ccl_on_tg.cpp to work on TGG as well as TG
  • [skip ci] #0: ViT report edits
  • #12879: Use () so that workflow_call actually captures the call when we trigger off completed workflow runs and add them to workflows to properly capture
  • [skip ci] #13019 Create remove-stale-branches.yaml
  • #13019 Update remove-stale-branches.yaml
  • Add tiny tile support for Tensor, matmul
  • [skip ci] #13019 Add default recipient
  • build tt metal in docker in CI
  • Revert "build tt metal in docker in CI"
  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one

v0.53.0-rc2

28 Sep 02:17
849b3e6
v0.53.0-rc2 Pre-release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not on the main branch. There may be differences between the latest main and the previous release.

The changelog will now follow, showing the changes from the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11079959460

📦 Uncategorized

  • #12883: Add initial unit tests for N300
  • #12499: Migrate moreh_norm, moreh_norm_backward operations from tt_eager to ttnn
  • #12321: Migrate moreh_bmm, moreh_bmm_backward operations from tt_eager to ttnn
  • Add more eltwise sweeps, add new functions in sweep_framework/utils.py
  • #12690: Port moreh_softmax and moreh_softmax_backward to ttnn
  • #0: Bump falcon7b device perf test because we have a real bump
  • Aliu/tech reports
  • #11332: Move ttnn/examples to ttnn/ttnn/examples so we can enable directly calling them for users, but not meant to be part of ttnn API
  • Add sweeps for sign, deg2rad, rad2deg, relu6
  • Revert "#10016: jit_build: link substitutes, tdma_xmov, noc"
  • #12952: Update test_ccl_on_tg.cpp to work on TGG as well as TG
  • [skip ci] #0: ViT report edits
  • #12879: Use () so that workflow_call actually captures the call when we trigger off completed workflow runs and add them to workflows to properly capture
  • [skip ci] #13019 Create remove-stale-branches.yaml
  • #13019 Update remove-stale-branches.yaml
  • Add tiny tile support for Tensor, matmul
  • [skip ci] #13019 Add default recipient
  • build tt metal in docker in CI
  • Revert "build tt metal in docker in CI"
  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow