v0.53.0-rc3

Pre-release
github-actions released this 30 Sep 02:18
· 1597 commits to main since this release

Note

If you are installing from a release, please refer to the README, INSTALLATION instructions, and any other documentation packaged with the release, not the documentation on the main branch. There may be differences between the latest main and the previous release.

The changelog follows, showing the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/11097804939

📦 Uncategorized

  • #12883: Add initial unit tests for N300
  • #12499: Migrate moreh_norm, moreh_norm_backward operations from tt_eager to ttnn
  • #12321: Migrate moreh_bmm, moreh_bmm_backward operations from tt_eager to ttnn
  • Add more eltwise sweeps, add new functions in sweep_framework/utils.py
  • #12690: Port moreh_softmax and moreh_softmax_backward to ttnn
  • #0: Bump falcon7b device perf test because we have a real bump
  • Aliu/tech reports
  • #11332: Move ttnn/examples to ttnn/ttnn/examples so users can call them directly, but they are not meant to be part of the ttnn API
  • Add sweeps for sign, deg2rad, rad2deg, relu6
  • Revert "#10016: jit_build: link substitutes, tdma_xmov, noc"
  • #12952: Update test_ccl_on_tg.cpp to work on TGG as well as TG
  • [skip ci] #0: ViT report edits
  • #12879: Use () so that workflow_call actually captures the call when we trigger off completed workflow runs, and add them to workflows to properly capture
  • [skip ci] #13019 Create remove-stale-branches.yaml
  • #13019 Update remove-stale-branches.yaml
  • Add tiny tile support for Tensor, matmul
  • [skip ci] #13019 Add default recipient
  • build tt metal in docker in CI
  • Revert "build tt metal in docker in CI"
  • [skip ci] #0: ViT tech report
  • Mchiou/11762 build tt metal in docker
  • #13013: Added tests to run in TGG unit tests workflow
  • [skip ci] #13019 Update remove-stale-branches.yaml
  • Mchiou/0 fix docker build storage
  • #11531: Autogenerate API rst stub files, add summary table on API page
  • Add --no-advice to perf report, small fixes
  • preserve fp32 precision
  • #0: Remove unnecessary using declarations
  • #12775: Cleanup docker run action
  • #0: Update to gcc-12.x, take 2
  • #12945: update galaxy/n150 eth dispatch cores
  • #13070: fix SD
  • Update Llama codeowners
  • #0: Fix uncaught edge case in page update cache and add it to the test suite
  • #12754: Migrate moreh_nll_loss operations (reduced and unreduced) from tt_eager to ttnn
  • #8633: Add TT_Fatal for full and ones op
  • #12985: Expose ttnn::ccl::Topology at python level
  • #12556: Add queue_id and optional output tensors to assign_bw
  • Support for increasing 1-D row major int32 tensors by one
  • #12828: update ttnn matmul doc string
  • Llama 3.1 8b DRAM-sharded matmuls
  • Update perf and latest features for llm models (Sept 23)
  • Work around CSV reporting 64 cores for DRAM-sharded matmuls
  • #0: Fix PCC to correct bound
  • #0: Simplify llrt/memory API
  • #0: Fix caching race
  • #0: Fix merge error with 80d6e48
  • #11004: moreh: use env var for kernel src search path
  • #12328: Fix Llama3.1-8B MLP tests running out of L1
  • #11769: extend support for transposing/permuting bfloat8 tensors on n…
  • #12141: Fixed matmul shape validation issue
  • #0: move BufferType to device kernel accessible location
  • #12658: update sweep export script and create initial graph script
  • #0: ViT on WH
  • [skip ci] Update README.md (ViT on n150)
  • #0: Bump resnet50 ttnn 2cq compile time because it regressed likely due to gcc risc-v upgrade
  • #0: Update WH Resnet compile time threshold
  • Flash decode improvements r2
  • #0: added support for n_heads > 1 for page cache prefill
  • #0: Bump mamba compile time threshold, as it's not that important and the model is still performant; needed to unblock people…
  • #0: move Layout enum to device accessible location
  • #0: Bump distilbert compile time because it keeps failing on it
  • #13088: Cleanup set-1 unary backward ops
  • #10033: Add forward support for gcd and lcm
  • #13150: Cleanup LCM, GCD Macro
  • Llama3.1 8b demo with tracing
  • #13058: update matmul bias size validation
  • #0: (MINOR) Update to v0.53.0
  • #0: try with python 3.10
  • #13145: Temporarily revert Resnet on Galaxy to use slower config for first conv to avoid hangs
  • #0: Remove unnecessary ProgramDeleter
  • #13127: Switch python get_legacy_shape to shape.with_tile_padding()
  • Add sweeps for remainder, fmod, minimum, maximum, logical_and eltwise ops, rename eltwise sweeps
  • Fix Yolo tests after updating weights shape in conv2d
  • #13172: Use lower python version and cache dependencies
  • #11830: Move l1/dram/pcie alignment into HAL
  • #13014: optimize slice by adding a 4D uint32_t array implementation o…
  • Add llk support for cumsum and transpose_wh_dest with relevant tests
  • Add numeric stable option for softmax
  • #12878: Add links to job and pipeline for CI/CD analytics
  • #0: fix CCL nightly tests
  • #12919: Cleanup set-2 Unary Backward ops
  • #8865: Add sharded tensor support to dispatch profile infra
  • #0: Update CODEOWNERS for ttnn/ttnn/operations/moreh.py
  • #13137: Revise moreh_arange operation
  • #13095: Refactor moreh_nll_loss operations
  • #10439: ttnn implementation of vgg model
  • #13175: Add new category to summary table in sweeps query tool
  • #5174: Disable command buffer FIFOs on BH
  • Update CODEOWNERS
  • Fix demo_trace and add on-device argmax to test_llama_perf
  • #0: fix program caching bug in post_all_gather
  • Do not require test dispatch workflow to run on "in-service" runners
  • Add description to describe typical labels one could use in test dispatch workflow
  • Add an option to split dprint output by risc
  • Add new "choose your own pipeline" workflow
  • #11962: remove uint8 unpack reconfig code
  • Add tg and tgg frequent tests to "Choose your pipeline" workflow
  • Add options to select a subset of pipelines that a user would like to run
  • Update names of perf-models and perf-device-models jobs
  • #13086: Revising moreh_getitem
  • Sweeps: log, log1p, log2, log10
  • #12721: Cleanup set-3 Unary Backward ops
  • #13212: Cleanup set-4 Unary backward ops
  • Add initial (very limited) support for line reduce scatter
  • pack kernel binary memory spans into one