Releases: tenstorrent/tt-metal

v0.54.0-rc6

24 Dec 02:03
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12475462134

📦 Uncategorized

  • Add buffering to DPRINT
  • #13405: TTNN implementation of LENET model
  • Remove built cache of previous git commits.
  • [tt-train] Make tests to open and close device explicitly
  • Update ttcnn.md
  • #0: Add bc to docker container for pgm dispatch math
  • #16012: Revert conv2d changes because of perf regressions, pcc regressions, and increase in runtime
  • Update ttcnn.md
  • Enable noexcept-move-ctor check
  • More updates to ttcnn.md
  • disable workflow telemetry in prepare-metal-run
  • Add support for pretty printing Conv2dConfig
  • [tt-train] TT-train build is broken in main
  • #0: created interleaved to sharded e2e sweep test
  • Add support for padding along width dimension to ttnn.pad
  • Bump umd
  • #0: Prevent slice from padding up a 0 volume tensor
  • #0: support unequal ranked inputs for broadcast in binary_ng
  • #16014: Fix yolo4 e2e perf measurement
  • Update CODEOWNERS - add experimental CCL section
  • #15780: div ops debug
  • Revert "#16012: Revert conv2d changes because of perf regressions, pc…
  • #13127: Make TensorLayout::compute_physical_shard_shape public
  • Link Tensor.reshape to ttnn.reshape
  • #0: Fix merge conflicts originating from #15289
  • Integrate chunked prefill into t3k Llama3-70B
  • Bump MagicEnum to v0.9.7
  • #15944: Fix pybind of create_sub_device_manager_with_fabric to call the correct function.
  • [tt-train] Add option to disable wandb in examples
  • Update perf and latest features for llm models (Dec 16)
  • #16070: Use the same Docker image as built
  • [tt-train] Bump magic_enum from 0.9.6 to 0.9.7
  • Update ttcnn.md
  • #13643: Extend binary-ng math support to match all primitive binary ops.
  • #14530: remove up front padding from generic reduce
  • Revert "#0: Fix merge conflicts originating from #15289"
  • Revert "Link Tensor.reshape to ttnn.reshape"
  • #15061: Implement multi-device tensor distribution APIs in terms of C++ ttnn tensors
  • #0: Allow ttnn.pad to pad Tensor to an odd width in row major
  • #15565 Add unit test to show sharding ttnn.from_torch problems
  • #14977: conv config to use higher cores.
  • Revert "#15565 Add unit test to show sharding ttnn.from_torch problems"
  • [UMD] Removed set_*_params calls and constants
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"
  • #0: Enforce tile layout when using bf4/bf8 data types
  • MeshDevice: Support Quanta Galaxy system file
  • Move Device members from public to private
  • Add unary sharded sweeps
  • #0: Added core_grid offset for sharded layernorm
  • fix abs path bug for sweeps tests code
  • #0: Publish TT-Distributed doc under tech_reports

v0.54.0-rc5

23 Dec 02:03
Commit 326f022
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12459612752

📦 Uncategorized

  • Add buffering to DPRINT
  • #13405: TTNN implementation of LENET model
  • Unvendor nlohmann json
  • #0: Update Llama3 README
  • #0: Minor fix to Llama3 model config for TG
  • #13944: Redesign memory packing API
  • #0: Get rid of run_pre_post_commit_regressions* scripts and split CPP tests as much as we can
  • Create new FD frequent pipeline to isolate unstable pgm benchmark tests
  • Revert "#13405: TTNN implementation of LENET model (#13473)"
  • #0: Dedup code in pytensor using generic lambdas and duck typing
  • #14353: DRAM Read Alignment for Layernorm
  • Afuller/fix clang tidy scan
  • #0: Support arch-specific sfpi releases
  • Enable too-small-loop-variable check
  • Remove built cache of previous git commits.
  • [tt-train] Make tests to open and close device explicitly
  • Update ttcnn.md
  • #0: Add bc to docker container for pgm dispatch math
  • #16012: Revert conv2d changes because of perf regressions, pcc regressions, and increase in runtime
  • Update ttcnn.md
  • Enable noexcept-move-ctor check
  • More updates to ttcnn.md
  • disable workflow telemetry in prepare-metal-run
  • Add support for pretty printing Conv2dConfig
  • [tt-train] TT-train build is broken in main
  • #0: created interleaved to sharded e2e sweep test
  • Add support for padding along width dimension to ttnn.pad
  • Bump umd
  • #0: Prevent slice from padding up a 0 volume tensor
  • #0: support unequal ranked inputs for broadcast in binary_ng
  • #16014: Fix yolo4 e2e perf measurement
  • Update CODEOWNERS - add experimental CCL section
  • #15780: div ops debug
  • Revert "#16012: Revert conv2d changes because of perf regressions, pc…
  • #13127: Make TensorLayout::compute_physical_shard_shape public
  • Link Tensor.reshape to ttnn.reshape
  • #0: Fix merge conflicts originating from #15289
  • Integrate chunked prefill into t3k Llama3-70B
  • Bump MagicEnum to v0.9.7
  • #15944: Fix pybind of create_sub_device_manager_with_fabric to call the correct function.
  • [tt-train] Add option to disable wandb in examples
  • Update perf and latest features for llm models (Dec 16)
  • #16070: Use the same Docker image as built
  • [tt-train] Bump magic_enum from 0.9.6 to 0.9.7
  • Update ttcnn.md
  • #13643: Extend binary-ng math support to match all primitive binary ops.
  • #14530: remove up front padding from generic reduce
  • Revert "#0: Fix merge conflicts originating from #15289"
  • Revert "Link Tensor.reshape to ttnn.reshape"
  • #15061: Implement multi-device tensor distribution APIs in terms of C++ ttnn tensors
  • #0: Allow ttnn.pad to pad Tensor to an odd width in row major
  • #15565 Add unit test to show sharding ttnn.from_torch problems
  • #14977: conv config to use higher cores.
  • Revert "#15565 Add unit test to show sharding ttnn.from_torch problems"
  • [UMD] Removed set_*_params calls and constants
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python
  • update llama 3.1 70b v0 tt-metal and vllm commit refs in docs
  • #15857: Binary Forge Sweep Tests Set2
  • #14976/#15039: Add Support For ceil_mode=True
  • Add missing cache invalidates + loads before stores noc optimization for BH
  • Initial CCL Rewrite Push (Unblocks Parallelization of Efforts and Some TG Llama integration)
  • New FD Init Flow
  • Add support for output sharded embeddings
  • Revert "#5605: Only force-stall ethernet programs on earlier ethernet programs"

v0.54.0-rc4

21 Dec 02:02
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12440985408

📦 Uncategorized

  • Add buffering to DPRINT
  • #13405: TTNN implementation of LENET model
  • Unvendor nlohmann json
  • Updated install_dependencies.sh to skip installing additional recommended packages and skip prompting for user input for certain package installations
  • #0: Fix conv_transpose2d initting wrong compute_kernel_config variant
  • Fix t3k unit test pipeline
  • Run matmul based Conv2d with input from DRAM
  • Add selu sweep
  • Add TG support to llama3 family
  • Fix Llama rope scaling factor, improve accuracy
  • Let ttnn.reshape support 0 volume tensors
  • #0: Update Llama3 README
  • #0: Minor fix to Llama3 model config for TG
  • #13944: Redesign memory packing API
  • #0: Get rid of run_pre_post_commit_regressions* scripts and split CPP tests as much as we can
  • Create new FD frequent pipeline to isolate unstable pgm benchmark tests
  • Revert "#13405: TTNN implementation of LENET model (#13473)"
  • #0: Dedup code in pytensor using generic lambdas and duck typing
  • #14353: DRAM Read Alignment for Layernorm
  • Afuller/fix clang tidy scan
  • #0: Support arch-specific sfpi releases
  • Enable too-small-loop-variable check
  • Remove built cache of previous git commits.
  • [tt-train] Make tests to open and close device explicitly
  • Update ttcnn.md
  • #0: Add bc to docker container for pgm dispatch math
  • #16012: Revert conv2d changes because of perf regressions, pcc regressions, and increase in runtime
  • Update ttcnn.md
  • Enable noexcept-move-ctor check
  • More updates to ttcnn.md
  • disable workflow telemetry in prepare-metal-run
  • Add support for pretty printing Conv2dConfig
  • [tt-train] TT-train build is broken in main
  • #0: created interleaved to sharded e2e sweep test
  • Add support for padding along width dimension to ttnn.pad
  • Bump umd
  • #0: Prevent slice from padding up a 0 volume tensor
  • #0: support unequal ranked inputs for broadcast in binary_ng
  • #16014: Fix yolo4 e2e perf measurement
  • Update CODEOWNERS - add experimental CCL section
  • #15780: div ops debug
  • Revert "#16012: Revert conv2d changes because of perf regressions, pc…
  • #13127: Make TensorLayout::compute_physical_shard_shape public
  • Link Tensor.reshape to ttnn.reshape
  • #0: Fix merge conflicts originating from #15289
  • Integrate chunked prefill into t3k Llama3-70B
  • Bump MagicEnum to v0.9.7
  • #15944: Fix pybind of create_sub_device_manager_with_fabric to call the correct function.
  • [tt-train] Add option to disable wandb in examples
  • Update perf and latest features for llm models (Dec 16)
  • #16070: Use the same Docker image as built
  • [tt-train] Bump magic_enum from 0.9.6 to 0.9.7
  • Update ttcnn.md
  • #13643: Extend binary-ng math support to match all primitive binary ops.
  • #14530: remove up front padding from generic reduce
  • Revert "#0: Fix merge conflicts originating from #15289"
  • Revert "Link Tensor.reshape to ttnn.reshape"
  • #15061: Implement multi-device tensor distribution APIs in terms of C++ ttnn tensors
  • #0: Allow ttnn.pad to pad Tensor to an odd width in row major
  • #15565 Add unit test to show sharding ttnn.from_torch problems
  • #14977: conv config to use higher cores.
  • Revert "#15565 Add unit test to show sharding ttnn.from_torch problems"
  • [UMD] Removed set_*_params calls and constants
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used
  • #15891: improve full accuracy and fix full bugs
  • Revert "Fix ttnn.from_torch for 0D/1D tensors with tile layout (#15882)"
  • #15857: Skip abs forge for GS
  • #16213: Use our own forked Docker Run Action that points to ECR
  • Add max kernel size for each risc type in an op
  • Infer Conv2dTranspose parameters during model preprocessing
  • #12662: add keepdim fixes to reduce
  • Add chunked prefill to Llama family
  • #15342: Add mirror_kernels option to conv_transpose2d
  • Update CODEOWNERS
  • support reduction for 3d & 4d dims
  • #5605: Only force-stall ethernet programs on earlier ethernet programs
  • Add full support for creating tensors with logical sharding from python

v0.54.0-rc3

20 Dec 02:02
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12423860625

📦 Uncategorized

  • Add buffering to DPRINT
  • #15836: Update reads, writes, and synchronize ttnn apis to take in sub device ids
  • #13405: TTNN implementation of LENET model
  • Unvendor nlohmann json
  • Updated install_dependencies.sh to skip installing additional recommended packages and skip prompting for user input for certain package installations
  • #0: Fix conv_transpose2d initting wrong compute_kernel_config variant
  • Fix t3k unit test pipeline
  • Run matmul based Conv2d with input from DRAM
  • Add selu sweep
  • Add TG support to llama3 family
  • Fix Llama rope scaling factor, improve accuracy
  • Let ttnn.reshape support 0 volume tensors
  • #0: Update Llama3 README
  • #0: Minor fix to Llama3 model config for TG
  • #13944: Redesign memory packing API
  • #0: Get rid of run_pre_post_commit_regressions* scripts and split CPP tests as much as we can
  • Create new FD frequent pipeline to isolate unstable pgm benchmark tests
  • Revert "#13405: TTNN implementation of LENET model (#13473)"
  • #0: Dedup code in pytensor using generic lambdas and duck typing
  • #14353: DRAM Read Alignment for Layernorm
  • Afuller/fix clang tidy scan
  • #0: Support arch-specific sfpi releases
  • Enable too-small-loop-variable check
  • Remove built cache of previous git commits.
  • [tt-train] Make tests to open and close device explicitly
  • Update ttcnn.md
  • #0: Add bc to docker container for pgm dispatch math
  • #16012: Revert conv2d changes because of perf regressions, pcc regressions, and increase in runtime
  • Update ttcnn.md
  • Enable noexcept-move-ctor check
  • More updates to ttcnn.md
  • disable workflow telemetry in prepare-metal-run
  • Add support for pretty printing Conv2dConfig
  • [tt-train] TT-train build is broken in main
  • #0: created interleaved to sharded e2e sweep test
  • Add support for padding along width dimension to ttnn.pad
  • Bump umd
  • #0: Prevent slice from padding up a 0 volume tensor
  • #0: support unequal ranked inputs for broadcast in binary_ng
  • #16014: Fix yolo4 e2e perf measurement
  • Update CODEOWNERS - add experimental CCL section
  • #15780: div ops debug
  • Revert "#16012: Revert conv2d changes because of perf regressions, pc…
  • #13127: Make TensorLayout::compute_physical_shard_shape public
  • Link Tensor.reshape to ttnn.reshape
  • #0: Fix merge conflicts originating from #15289
  • Integrate chunked prefill into t3k Llama3-70B
  • Bump MagicEnum to v0.9.7
  • #15944: Fix pybind of create_sub_device_manager_with_fabric to call the correct function.
  • [tt-train] Add option to disable wandb in examples
  • Update perf and latest features for llm models (Dec 16)
  • #16070: Use the same Docker image as built
  • [tt-train] Bump magic_enum from 0.9.6 to 0.9.7
  • Update ttcnn.md
  • #13643: Extend binary-ng math support to match all primitive binary ops.
  • #14530: remove up front padding from generic reduce
  • Revert "#0: Fix merge conflicts originating from #15289"
  • Revert "Link Tensor.reshape to ttnn.reshape"
  • #15061: Implement multi-device tensor distribution APIs in terms of C++ ttnn tensors
  • #0: Allow ttnn.pad to pad Tensor to an odd width in row major
  • #15565 Add unit test to show sharding ttnn.from_torch problems
  • #14977: conv config to use higher cores.
  • Revert "#15565 Add unit test to show sharding ttnn.from_torch problems"
  • [UMD] Removed set_*_params calls and constants
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv
  • support missing cases for sweep tests
  • #0: added normalization details in the tech report
  • Fix ttnn.from_torch for 0D/1D tensors with tile layout
  • Port all Moreh OPs to compute_output_specs
  • Bump umd to fix grayskull cluster bug
  • Clean-up the usage of deallocate_activation
  • llm tech report multi device section
  • Add prefill v decode section to LLM tech report [section 3.2]
  • #0: Update eltwise binary to support sharding on arbitrary cores on an arbitrary sub-device grid
  • [LLM tech report] Add accuracy evaluation and debugging sections
  • #16165: Disabling test that depends on some machine state to pass
  • enable dps ops for matmul
  • Isolate tracy
  • [TT-Train] Added tests for sum and mean
  • #16184: Try using ecr to avoid rate limits of docker.io
  • #15221: Post completion messages to dispatch_s
  • [TT-Train] Added softmax backward
  • Optimized FreeList allocator
  • Set the test data to be relative to the test binary
  • #0: Fix matmul doc string
  • #0: remove spammy warning from conftest
  • Update generating unicast go signal commands to ensure dispatch write linear respects alignment
  • LLM tech report sections 2.2, 2.5
  • [TT-Train] Fix tracy deps in the tt-train cmake
  • Updating Allocator docs to explain first fit usage
  • Adding asserts for hanging cases in ND tilize/untilize support
  • Fix ttnn.reallocate when unaligned RM tensors are used

v0.54.0-rc2

19 Dec 02:02
Commit f1ccbb6
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12404541627

📦 Uncategorized

  • #15836: Update reads, writes, and synchronize ttnn apis to take in sub device ids
  • #13405: TTNN implementation of LENET model
  • Unvendor nlohmann json
  • Updated install_dependencies.sh to skip installing additional recommended packages and skip prompting for user input for certain package installations
  • #0: Fix conv_transpose2d initting wrong compute_kernel_config variant
  • Fix t3k unit test pipeline
  • Run matmul based Conv2d with input from DRAM
  • Add selu sweep
  • Add TG support to llama3 family
  • Fix Llama rope scaling factor, improve accuracy
  • Let ttnn.reshape support 0 volume tensors
  • #0: Update Llama3 README
  • #0: Minor fix to Llama3 model config for TG
  • #13944: Redesign memory packing API
  • #0: Get rid of run_pre_post_commit_regressions* scripts and split CPP tests as much as we can
  • Create new FD frequent pipeline to isolate unstable pgm benchmark tests
  • Revert "#13405: TTNN implementation of LENET model (#13473)"
  • #0: Dedup code in pytensor using generic lambdas and duck typing
  • #14353: DRAM Read Alignment for Layernorm
  • Afuller/fix clang tidy scan
  • #0: Support arch-specific sfpi releases
  • Enable too-small-loop-variable check
  • Remove built cache of previous git commits.
  • [tt-train] Make tests to open and close device explicitly
  • Update ttcnn.md
  • #0: Add bc to docker container for pgm dispatch math
  • #16012: Revert conv2d changes because of perf regressions, pcc regressions, and increase in runtime
  • Update ttcnn.md
  • Enable noexcept-move-ctor check
  • More updates to ttcnn.md
  • disable workflow telemetry in prepare-metal-run
  • Add support for pretty printing Conv2dConfig
  • [tt-train] TT-train build is broken in main
  • #0: created interleaved to sharded e2e sweep test
  • Add support for padding along width dimension to ttnn.pad
  • Bump umd
  • #0: Prevent slice from padding up a 0 volume tensor
  • #0: support unequal ranked inputs for broadcast in binary_ng
  • #16014: Fix yolo4 e2e perf measurement
  • Update CODEOWNERS - add experimental CCL section
  • #15780: div ops debug
  • Revert "#16012: Revert conv2d changes because of perf regressions, pc…
  • #13127: Make TensorLayout::compute_physical_shard_shape public
  • Link Tensor.reshape to ttnn.reshape
  • #0: Fix merge conflicts originating from #15289
  • Integrate chunked prefill into t3k Llama3-70B
  • Bump MagicEnum to v0.9.7
  • #15944: Fix pybind of create_sub_device_manager_with_fabric to call the correct function.
  • [tt-train] Add option to disable wandb in examples
  • Update perf and latest features for llm models (Dec 16)
  • #16070: Use the same Docker image as built
  • [tt-train] Bump magic_enum from 0.9.6 to 0.9.7
  • Update ttcnn.md
  • #13643: Extend binary-ng math support to match all primitive binary ops.
  • #14530: remove up front padding from generic reduce
  • Revert "#0: Fix merge conflicts originating from #15289"
  • Revert "Link Tensor.reshape to ttnn.reshape"
  • #15061: Implement multi-device tensor distribution APIs in terms of C++ ttnn tensors
  • #0: Allow ttnn.pad to pad Tensor to an odd width in row major
  • #15565 Add unit test to show sharding ttnn.from_torch problems
  • #14977: conv config to use higher cores.
  • Revert "#15565 Add unit test to show sharding ttnn.from_torch problems"
  • [UMD] Removed set_*_params calls and constants
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input
  • #0: separate validation of conv weight and bias.
  • #0: Minor refactor of pytensor and tensor implementation files
  • C++ files should not be part of the API of a library
  • #15857: Forge sweep test
  • #15857: Unary forge sweep tests
  • Fix some more namespace pollution caused by using namespace tt::tt_metal
  • #15713 Bad Eltwise Binary ZEROACC
  • #15565 Fix unit test to show sharding ttnn.from_torch problems
  • Fix paged SDPA decode CB sizing issue
  • Reland async dispatch with workaround for hang.
  • #16119: Add forge traces to matmul and reduce sweeps
  • #10034: Binary shift operators
  • #0: Remove incorrect memory span assert
  • Add forge sweeps for slice and transpose
  • #0: Move memory config serialization in the corresponding header away from types.hpp
  • #16114: Allow Binarized Programs to be Reused across WH Devices
  • #0: aligning conv2d transpose as conv

v0.54.0-rc1

17 Dec 23:30
Commit 5d0170e
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12382942255

📦 Uncategorized

  • #15836: Update reads, writes, and synchronize ttnn apis to take in sub device ids
  • #13405: TTNN implementation of LENET model
  • Unvendor nlohmann json
  • Updated install_dependencies.sh to skip installing additional recommended packages and skip prompting for user input for certain package installations
  • #0: Fix conv_transpose2d initting wrong compute_kernel_config variant
  • Fix t3k unit test pipeline
  • Run matmul based Conv2d with input from DRAM
  • Add selu sweep
  • Add TG support to llama3 family
  • Fix Llama rope scaling factor, improve accuracy
  • Let ttnn.reshape support 0 volume tensors
  • #0: Update Llama3 README
  • #0: Minor fix to Llama3 model config for TG
  • #13944: Redesign memory packing API
  • #0: Get rid of run_pre_post_commit_regressions* scripts and split CPP tests as much as we can
  • Create new FD frequent pipeline to isolate unstable pgm benchmark tests
  • Revert "#13405: TTNN implementation of LENET model (#13473)"
  • #0: Dedup code in pytensor using generic lambdas and duck typing
  • #14353: DRAM Read Alignment for Layernorm
  • Afuller/fix clang tidy scan
  • #0: Support arch-specific sfpi releases
  • Enable too-small-loop-variable check
  • Remove built cache of previous git commits.
  • [tt-train] Make tests to open and close device explicitly
  • Update ttcnn.md
  • #0: Add bc to docker container for pgm dispatch math
  • #16012: Revert conv2d changes because of perf regressions, pcc regressions, and increase in runtime
  • Update ttcnn.md
  • Enable noexcept-move-ctor check
  • More updates to ttcnn.md
  • disable workflow telemetry in prepare-metal-run
  • Add support for pretty printing Conv2dConfig
  • [tt-train] TT-train build is broken in main
  • #0: created interleaved to sharded e2e sweep test
  • Add support for padding along width dimension to ttnn.pad
  • Bump umd
  • #0: Prevent slice from padding up a 0 volume tensor
  • #0: support unequal ranked inputs for broadcast in binary_ng
  • #16014: Fix yolo4 e2e perf measurement
  • Update CODEOWNERS - add experimental CCL section
  • #15780: div ops debug
  • Revert "#16012: Revert conv2d changes because of perf regressions, pc…
  • #13127: Make TensorLayout::compute_physical_shard_shape public
  • Link Tensor.reshape to ttnn.reshape
  • #0: Fix merge conflicts originating from #15289
  • Integrate chunked prefill into t3k Llama3-70B
  • Bump MagicEnum to v0.9.7
  • #15944: Fix pybind of create_sub_device_manager_with_fabric to call the correct function.
  • [tt-train] Add option to disable wandb in examples
  • Update perf and latest features for llm models (Dec 16)
  • #16070: Use the same Docker image as built
  • [tt-train] Bump magic_enum from 0.9.6 to 0.9.7
  • Update ttcnn.md
  • #13643: Extend binary-ng math support to match all primitive binary ops.
  • #14530: remove up front padding from generic reduce
  • Revert "#0: Fix merge conflicts originating from #15289"
  • Revert "Link Tensor.reshape to ttnn.reshape"
  • #15061: Implement multi-device tensor distribution APIs in terms of C++ ttnn tensors
  • #0: Allow ttnn.pad to pad Tensor to an odd width in row major
  • #15565 Add unit test to show sharding ttnn.from_torch problems
  • #14977: conv config to use higher cores.
  • Revert "#15565 Add unit test to show sharding ttnn.from_torch problems"
  • [UMD] Removed set_*_params calls and constants
  • #0: Remove some dead code
  • Updated installation script
  • Python -> Python3
  • Add transpose WH sharded, generalize row major permute when N > 4, and do a minor refactor of ttnn::permute
  • Adding ND support for tilize/untilize with padding
  • [Llama3.2-11b vLLM Integration] Add support for paged cross attention, fixes for continuous batching, simplified decode forward call
  • #0: Enable Local Sweeps and Use a Faster Interprocess Queue
  • #15601: Implement support for MeshDevice::reshape(..)
  • Remove setup_core_to_tlb_map
  • #0: Let sharded_to_interleaved handle interleaved input

v0.53.1-rc27

17 Dec 04:40
Commit bbce2c3
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12366218511

  • no changes

v0.53.1-rc26

17 Dec 02:01
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12364618362

  • no changes

v0.53.1-rc25

16 Dec 04:32
Commit 6d7cc2c
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12345709214

  • no changes

v0.53.1-rc24

16 Dec 02:01
Commit 201eff7
Pre-release

Note

If you are installing from a release, please refer to the README, the INSTALLATION instructions, and any other documentation packaged with the release rather than the versions on the main branch. There may be differences between the latest main and the previous release.

The changelog below shows the changes since the last release.

This release was generated by the CI workflow https://github.com/tenstorrent/tt-metal/actions/runs/12344267483

  • no changes