
Update documentation for adding new ttnn operation #9841

Merged
merged 1 commit on Jul 2, 2024

Conversation

arakhmati (Contributor):

Ticket

Link to Github Issue

Problem description

Documentation for adding a new ttnn operation is out of date.

What's changed

Add an example of how to add a new operation, and update the documentation using that example.

Checklist

  • Post commit CI passes
  • Model regression CI testing passes (if applicable)
  • New/Existing tests provide coverage for changes

@arakhmati force-pushed the ttnn-8835-add-documentation branch 2 times, most recently from 0023da1 to 7f41f47 on July 2, 2024 17:41
@arakhmati changed the title from "Ttnn 8835 add documentation" to "Update documentation for adding new ttnn operation" on Jul 2, 2024
@arakhmati force-pushed the ttnn-8835-add-documentation branch from 7f41f47 to c03faf6 on July 2, 2024 17:48
@arakhmati force-pushed the ttnn-8835-add-documentation branch from c03faf6 to c4f8469 on July 2, 2024 17:49
```cpp
dtype,
operation::DEFAULT_OUTPUT_MEMORY_CONFIG,
optional_output_tensor,
activations);
```
A reviewer (Contributor) commented:
Did you want to do this on this PR so that the optional_output_tensor is properly handled?

```cpp
// TODO: add support for optional_output_tensors
// auto optional_output_tensors = extract_args_to_vector(std::forward<args_t>(args)...,
//     std::optional<ttnn::Tensor>);
```

arakhmati (Contributor, author) replied:
will do in next PR


A ttnn operation is a function that takes in one or more input tensors and produces one or more output tensors. It is implemented in C++ and can be called from Python.
A reviewer (Member) commented:
function but not really :D


`ttnn/cpp/ttnn/operations/<category>/<operation_name>/device/<operation_name>_device_operation.hpp`
A reviewer (Member) commented:
I am a little concerned about the category part. People often mess up categorization, especially when it is not 100% clear. I am OK to keep it like that, just want to raise a question for discussion.


`ttnn/cpp/ttnn/operations/<category>/<operation_name>/device/<operation_name>_device_operation.hpp`
`ttnn/cpp/ttnn/operations/<category>/<operation_name>/device/<operation_name>_device_operation.cpp`
A reviewer (Member) commented:
Is `_device_operation` what we called `<operation_name>_op.hpp/.cpp` before?


from tests.ttnn.utils_for_testing import assert_with_pcc

In order to add a new operation, add the following file:
A reviewer (Member) commented:
Let's add a brief description of this file.
It basically provides a structure and registers the operation.
The cool thing about the structure is that it allows providing multiple overloads.
The cool thing about registering is that it allows instrumenting calls to an operation with goodies like "...list...".


.. code-block:: python
A concrete example:
A reviewer (Member) commented:
Maybe let's add some notes about consistency? Not sure if it belongs here or in a general section. We want ops to be consistent, e.g.:

  • kw-only memory_config = None
  • kw-only queue_id in Python and an overload in C++
  • kw-only optional output tensor(s)
  • kw-only dtype

Specific names, specific defaults, EXCEPT when not possible and something else is needed.

arakhmati (Contributor, author) replied:
will do in the next PR

```
@@ -385,7 +385,13 @@ inline std::string op_meta_data_serialized_json(

j["optional_input_tensors"] = std::vector<json>{};

auto perfModel = operation_t::create_op_performance_model(operation_attributes, tensor_args, tensor_return_value);
```
A reviewer (Member) commented:
Do we want to write some more info on "how to perf model?"

@ayerofieiev-tt (Member) left a review comment:
Should we add some more info on:

  • Tests
  • Placing in Experimental
  • Error checking

@arakhmati force-pushed the ttnn-8835-add-documentation branch from c4f8469 to 45889a5 on July 2, 2024 20:36
@arakhmati merged commit 03beea4 into main on Jul 2, 2024
5 checks passed
@arakhmati deleted the ttnn-8835-add-documentation branch on July 2, 2024 20:42