Runtime refactor to support runtime stitching #448
Conversation
runtime/lib/types.cpp
Outdated
    tensor.deallocate();
    return;
  }
#elif defined(TT_RUNTIME_ENABLE_TTMETAL)
If I understand correctly, we can build for both runtimes, so this should be an #if?
Yes, you're right, this should be an #if, will update!
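The distinction matters because with #elif the second runtime's branch is silently dropped whenever the first flag is defined. A minimal sketch of the independent-#if pattern, using hypothetical macro definitions in place of the real CMake-provided build flags:

```cpp
#include <cassert>
#include <string>

// Hypothetical build flags standing in for the real definitions; in a
// dual-runtime build both would be defined at once.
#define TT_RUNTIME_ENABLE_TTNN 1
#define TT_RUNTIME_ENABLE_TTMETAL 1

// With independent #if blocks (rather than #if/#elif), both runtime
// branches are compiled in when both flags are set.
std::string enabledRuntimes() {
  std::string enabled;
#if defined(TT_RUNTIME_ENABLE_TTNN)
  enabled += "ttnn ";
#endif
#if defined(TT_RUNTIME_ENABLE_TTMETAL)
  enabled += "ttmetal ";
#endif
  return enabled;
}
```

Had the second block been an #elif, only "ttnn " would survive even with both flags defined.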
runtime/lib/ttnn/program.cpp
Outdated
@@ -34,6 +34,29 @@ ttnn::Tensor untilize(ttnn::Tensor const &input) {

namespace tt::runtime::ttnn {

static bool isOnHost(const ::ttnn::Tensor &tensor) {
Are there util methods like these in tt-metal? If there are, we should probably use those...
AFAIK, in metal they just compare the storage type, like we're doing here. A custom API also lets us catch cases that aren't supported yet.
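A minimal sketch of the storage-type comparison being described; the enum and tensor struct here are hypothetical stand-ins for tt-metal's StorageType and the real isOnHost helper in program.cpp:

```cpp
#include <cassert>

// Hypothetical stand-in for tt-metal's storage-type enum; only the
// comparison pattern is the point, not the exact enumerators.
enum class StorageType { OWNED, BORROWED, DEVICE, MULTI_DEVICE };

struct TensorStub {
  StorageType storage;
};

static bool isOnHost(const TensorStub &tensor) {
  // Owned/borrowed storage is host-resident; device storage (and any
  // kind not handled yet) is treated as not-on-host, so unsupported
  // cases can be caught explicitly by the caller.
  return tensor.storage == StorageType::OWNED ||
         tensor.storage == StorageType::BORROWED;
}
```

Wrapping the comparison in a named helper keeps the supported/unsupported boundary in one place rather than scattered across call sites.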
runtime/lib/ttnn/program.cpp
Outdated
      output->desc()->layout()->memory_desc();
  ::ttnn::Tensor layoutUpdatedOutputTensor =
      updateTensorMemoryConfig(device, *outputTensor, outputDesc);
  outputs.push_back(toTypeErasedTensor(std::move(layoutUpdatedOutputTensor)));
This way, output tensors will be either on the host or on the device, depending on the outputDesc in the flatbuffer? How will the FE runtime move them to host?
That's correct: it could be on host or device depending on outputDesc. If the FE wants to move it to host, with the current implementation it would need a program whose output desc is host.
If desired, I can add a toLayout overload that accepts layout descriptors, to decouple it from the flatbuffer.
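A rough sketch of what such an overload could look like. Every type and name here is a hypothetical stand-in, not the real runtime API; the point is only that the layout request comes from the caller rather than from the flatbuffer's output desc:

```cpp
#include <cassert>

// Hypothetical placement and descriptor types; the real layout
// descriptor would also carry memory layout, data type, etc.
enum class Placement { Host, Device };

struct LayoutDesc {
  Placement placement;
};

struct TensorStub {
  Placement placement;
};

// Overload sketch: convert the tensor to match a caller-supplied
// descriptor instead of one read from the flatbuffer.
TensorStub toLayout(TensorStub tensor, const LayoutDesc &desc) {
  tensor.placement = desc.placement;  // real code would copy/convert data
  return tensor;
}
```

With an overload along these lines, the FE runtime could request a host copy directly, e.g. toLayout(output, LayoutDesc{Placement::Host}), without needing a program whose output desc is host.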
@@ -24,6 +24,14 @@ target_include_directories(TTBinary
)
add_dependencies(TTBinary FBS_GENERATION)

add_library(TTRuntimeTypes STATIC types.cpp)
Please add this static library (TTRuntimeTypes) to the shared lib used by tt-forge (in lib/SharedLib/CMakeLists.txt).
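The change being requested would be a one-line link in the SharedLib CMake file, along these lines (the shared-lib target name TTMLIRRuntime is an assumption for illustration; use whatever target lib/SharedLib/CMakeLists.txt actually defines):

```cmake
# In lib/SharedLib/CMakeLists.txt: fold the new static archive into the
# shared library consumed by tt-forge.
target_link_libraries(TTMLIRRuntime PRIVATE TTRuntimeTypes)
```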
#103 Runtime-Stitching
First Iteration/Prototype:
TODOs:
Please let me know what you think; any suggestions are appreciated!