v0.53.0-rc9

Pre-release
Released by @github-actions on 07 Oct 02:19 · 1455 commits to main since this release · commit f85ceb7

Note

If you are installing from a release, please refer to the README, installation instructions, and any other documentation packaged with the release, not the documentation on the main branch: the latest main may differ from this release.

The changelog below lists the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11207084989

📦 Uncategorized

  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_FATAL for full and ones ops
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time as it's not that important and the model is still performant, need to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one
  • #13242: Cleanup set-5 unary backward ops
  • [skip ci] Update CODEOWNERS for TT-NN
  • #13084: fix return vector optional tensor with launch_op
  • #12757: update math function for ops
  • #11512: Added sweep for ttnn.bcast
  • #0: update all-gather tests to remove all_devices test fixture
  • Llama device perf optimizations
  • Tensor-parallel Llama3.1 8b bringup on n300
  • [skip ci] Add last update date to LLM table in README
  • #13285: Add arch tag for galaxy workflows that didn't have it because a) we should specify and b) we need it for data collection
  • #0: Optimize untilize_with_unpad for W 16
  • Update slack notification owner for t3k-model-perf-falcon7b
  • #12040: add transpose trace sweeps
  • Divanovic/llama tg demo
  • #0: Fix bug in perplexity script for Llama
  • #0: Update cast in ncrisc BH init code
  • #0: Move remote chip event synchronization to dispatch core
  • Vanilla Unet conv unit_test
  • #11740: Extend post commit coverage and add sweep test
  • #13269: Revise moreh_norm, moreh_norm_backward operations
  • #13140: Cleanup Binary Backward ops
  • #13315: Revise moreh_bmm, moreh_bmm_backward operations
  • #0: TG Llama3-70b - fix frequent tests
  • Revert "#11962: remove uint8 unpack reconfig code"
  • Llama3.1-8B continuous batching + Paged Attention Support
  • #0: Remove demo output files from Llama3.1-8B
  • #11592: use the semaphore indices returned by CreateSemaphore
  • #9370: removed ndpcc workaround and debug code in sdpa decode and re-enabled CI
  • #0: Bump trace region size to 20MB for T3K LLAMA2
  • Not holding state for freshening profiler logs
  • #13136: Consolidate all_gather and line_all_gather to common api
  • #11005: Added CreateKernelFromString()
  • #11622: sweep concat traces
  • #0: Bump ttnn bert perf threshold to account for recent refactoring
  • #0: fix CCL nightly and frequent test regression suites
  • #13142: Add documentation for device ops, memory config
  • #13128: Add cmake options to control what tests get built
  • [skip ci] Update CODEOWNERS for CMakeLists.txt
  • Update matrix_engine.md
  • #13258: build_metal.sh enhancements
  • Flash decode improvements r3
  • #0: shortened flash decode tests to avoid potential timeout in fast dispatch
  • #12632: Migrate moreh_layer_norm operation from tt_eager to ttnn
  • #11844: Add dispatch_s for asynchronously sending go signals
  • #12805: Migrate moreh_sum_backward operation from tt_eager to ttnn
  • #13187: revise moreh_mean and moreh_mean_backward
  • #12687: port moreh_group_norm and moreh_group_norm_backward from tt_dnn to ttnn
  • #12694 Refactor moreh_linear and moreh_linear_backward
  • #13246: Remove unary_backward_op.hpp
  • #0: integrate distributed sharded layernorm with llama-tg
  • Add support for matmul 1D having L1 sharded weights
  • #11791: linker script cleanups
  • #0: Add copy sweep
  • #12214: refactor moreh_sgd from deprecated to ttnn
  • [Nightly fast dispatch CI] Fix Llama3.1-8B tests running out of memory
  • Update perf target for one falcon7b config due to CI variation
  • Add bitwise ops sweeps, add gen_rand_bitwise_left_shift function
  • Multiple watcher-related updates
  • #11621: add filler sweeps for expand, fill, split_with_sizes, index_select and .t
  • #13363: Surface job errors where Set up runner does not complete successfully
  • #13127: Remove shape_without_padding() pybinding and usage
  • #11208: Refactor ProgramCache to remove nested type erasure
  • #11208: Slotmap datastructure for creating resource pools
  • #13365: added program caching for page tensor for flash decode
  • Update llama ttft in README.md
  • #0: Add tech report for inf/nan handling
  • #11403: SubMesh Support + Porting/Stamping T3K Tests to Galaxy
  • Add new ttnn sweeps
  • Remove profiler core flat id look up
  • #11789: Fix firmware/kernel padding/alignment
  • #8534: Publish tt-metal docs to the central site
  • #0: Sweeps Logger Fixes
  • Mchiou/13011 dump firmware and system logs if ci jobs fail
  • #13419: Handle cases where GitHub timeout on a job cuts off the data in a test in a Junit XML, leaving no data to use
  • #12605: Add governor notes and move models steps into separate steps
  • #13254: switch pgm dispatch to use trace, add it to CI
  • #10016: jit_build: link substitutes, tdma_xmov, noc
  • #0: Dispatch_s + Launch Message Ring Buffer Bugfixes
  • #0: Reduce copy sweep to cover only bf16
  • #13394: Galaxy 2cq support
  • #0: Fix ncrisc code overflow problem
  • Add more pipelines to top-level "Choose your pipeline" workflows
  • #13127: Update ttnn::Shape struct to maintain API parity with existing tt::tt_metal::LegacyShape usages
  • #0: SegFormer on n150 - functional
  • #7091: Add git commit runbook to CONTRIBUTING.md
  • Moving DRAM/L1_UNRESERVED_BASE into HAL
  • #11401: Add supplementary tensor parallel example to regression
  • #13432: fix t3k ethernet tests
  • #0: fix mesh device fixture selection for test_distributed_layernorm
  • #13454: Refactor API for MeshDevice::enable_async
  • deprecate JAWBRIDGE
  • #8488: Update activation list in doc
  • #13424: Add documentation for opt output tensor and qid
  • #8428: Update sweep config and doc for polyval
  • #7712: Update elu, erf variant sweep config and doc
  • #7961: Update logical or doc and sweep config
  • Llama 3.1 8b DRAM-shard the LM head, 23.1 t/s/u
  • #12559: add ttnn implementation for convnet_mnist model
  • #13143: Add documentation for core, set_printoptions ops
  • #13144: Add documentation for tensor creation ops, matmul ops
  • Jvega/readme changes
  • #0: TG-Llama3-70b - Add compilation step to demo
  • TG Llama3-70b prefill frequent tests enabled
  • #11791: proper bss, stack only on firmware
  • Add more eltwise unary ops
  • #11307: Remove l1_buffer
  • Fix composite ops asserting on perf report generation
  • #11791: Implement Elf reading
  • #13482: Resolve 2CQ Trace Hangs on TG
  • Add DPRINT support for CB rd/wr pointers from BRISC/NCRISC
  • Refactor TT-NN / TT-Metal Mesh/Multi-device related into separate subdirectory
  • #13127: Add get_logical_shape/get_padded_shape to Tensor
  • #0: update CODEOWNERS for distributed subdirectories
  • #13127: Add simple tensor creation gtest
  • Fix compilation of test_create_tensor.cpp
  • #0: add is_ci_env to segformer model
  • New tests and updates of ttnn sweeps
  • #11307: Remove l1_data section