Add tt-forge sweep for conv2d. #16178
Conversation
tests/sweep_framework/sweeps/conv2d/short/conv2d_short_sweep.py
@@ -1583,6 +1583,372 @@
    ],
    "is_conv1d": [True],
},
"tt-forge_sweep_conv2d": {
Can we add one pytest entry to run these locally, similar to what we have for the PyTorch traces?
I am not sure what you mean. Could you please elaborate?
Something like
def test_conv2d_localrun(device, input_spec):
as we have for the existing sweeps.
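For illustration, a minimal sketch of such a local entry is below, assuming the new suite is registered under the "tt-forge_sweep_conv2d" key of the module's parameters dict and that a run helper and a device pytest fixture exist as for the other sweeps; the import path, helper name, and return shape are assumptions, not the repo's confirmed API.

import pytest

# Assumed imports: the suite dict defined in conv2d_short_sweep.py and a run
# helper comparable to the one used by the existing local-run tests.
from tests.sweep_framework.sweeps.conv2d.short.conv2d_short_sweep import parameters, run


@pytest.mark.parametrize("input_spec", parameters["tt-forge_sweep_conv2d"]["input_specs"])
def test_conv2d_localrun(device, input_spec):
    # Execute one tt-forge conv2d spec on the locally attached device; run()
    # is assumed to return a (passed, message)-style result.
    passed, message = run(input_spec, device=device)
    assert passed, message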
Did you resolve this one?
Yes, it is already there in the latest push.
"input_specs": [ | ||
# Contains following params | ||
# [batch_size, output_channels, input_channels, input_height, input_width, kernel_height, kernel_width, stride_x, stride_y, pad_x, pad_y, groups, bias, dilation, datatype] | ||
[1, 1024, 1024, 14, 14, 3, 3, 1, 1, 1, 1, 1, False, 1, int(ttnn.bfloat16)], |
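For clarity, a hedged sketch of how one of these spec rows could be unpacked into named fields, following the field order documented in the comment above; the helper name is illustrative and not part of the sweep code.

# Illustrative only: unpack one input_specs row into named fields.
def unpack_conv2d_spec(spec):
    (
        batch_size,
        output_channels,
        input_channels,
        input_height,
        input_width,
        kernel_height,
        kernel_width,
        stride_x,
        stride_y,
        pad_x,
        pad_y,
        groups,
        has_bias,
        dilation,
        dtype,
    ) = spec
    return {
        "batch_size": batch_size,
        "output_channels": output_channels,
        "input_channels": input_channels,
        "input_hw": (input_height, input_width),
        "kernel_hw": (kernel_height, kernel_width),
        "stride_xy": (stride_x, stride_y),
        "pad_xy": (pad_x, pad_y),
        "groups": groups,
        "has_bias": has_bias,
        "dilation": dilation,
        "dtype": dtype,
    }


# Example with the row shown above (datatype kept as a plain int here):
print(unpack_conv2d_spec([1, 1024, 1024, 14, 14, 3, 3, 1, 1, 1, 1, 1, False, 1, 0]))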
We seem to be missing info about whether the tensors are tiled or not.
Tensors tiled? I did not see such info in the sheet. I mainly derived the above from the attributes only. Let me know if I am missing something.
"input_layouts": [
{
"mapping_from": "(d0, d1, d2, d3)",
"mapping_to": "(d0 * 196 + d1 * 14 + d2, d3)",
"memory_config": [
7,
32,
"tile<32x32, bf16>",
"dram"
]
},
{
"mapping_from": "(d0, d1, d2, d3)",
"mapping_to": "(d0 * 3072 + d1 * 3 + d2, d3)",
"memory_config": [
3145728,
3,
"bf16",
"system_memory"
]
},
{
"mapping_from": "(d0, d1, d2, d3)",
"mapping_to": "(d0 * 196 + d1 * 14 + d2, d3)",
"memory_config": [
7,
32,
"tile<32x32, bf16>",
"dram"
]
}
In the data you have this type of info; from it you can infer whether a tensor is tilized or not and whether it's on device or on host.
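To make the inference concrete, here is a hedged sketch of reading one "input_layouts" entry from the trace JSON above; the meaning of the memory_config positions is inferred only from this example (the third element carries a tile<...> marker when the tensor is tilized, the fourth element names the buffer space).

# Illustrative parser for one input_layouts entry of the trace JSON.
def classify_layout(layout_entry):
    memory_config = layout_entry["memory_config"]
    layout_str = memory_config[2]      # e.g. "tile<32x32, bf16>" or "bf16"
    buffer_space = memory_config[3]    # e.g. "dram", "l1", or "system_memory"

    is_tiled = layout_str.startswith("tile<")
    on_host = buffer_space == "system_memory"
    return {"tiled": is_tiled, "on_host": on_host, "buffer": buffer_space}


example = {
    "mapping_from": "(d0, d1, d2, d3)",
    "mapping_to": "(d0 * 196 + d1 * 14 + d2, d3)",
    "memory_config": [7, 32, "tile<32x32, bf16>", "dram"],
}
print(classify_layout(example))  # {'tiled': True, 'on_host': False, 'buffer': 'dram'}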
Ahh, it is tiled.
Since every test case was tiled in the JSON, I thought it was OK to ignore. However, I just checked: our tensors are row-major by default.
I will correct it and also add one more field for memory.
Not all inputs are flagged as tiled, so please do not ignore this; it is important. In general, look at the memory config section: it will be explicitly marked as tiled.
Several points to note here (see the sketch below):
-> Weight tensors are missing layout info. I have confirmed with @AleksKnezevic and @LPanosTT that they do not give any specific layout while creating the weight tensor; the same holds for our case.
-> @AleksKnezevic confirmed that these test cases do not have any bias tensors.
-> Input and output tensors use the tiled layout. I modified the code to do the same.
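A minimal sketch of those layout choices in ttnn terms (illustrative shapes; not the exact sweep code): the activation is created with the tile layout on device, the weight is created without an explicit layout or device so it stays row-major on host, and no bias tensor is created.

import torch
import ttnn


def prepare_conv2d_tensors(device):
    # Shapes follow the first spec above: 1x1024x14x14 input, 1024x1024x3x3 weight.
    torch_input = torch.randn(1, 1024, 14, 14, dtype=torch.bfloat16)
    torch_weight = torch.randn(1024, 1024, 3, 3, dtype=torch.bfloat16)

    # Activation: tiled layout, placed on the device.
    tt_input = ttnn.from_torch(
        torch_input, dtype=ttnn.bfloat16, layout=ttnn.TILE_LAYOUT, device=device
    )
    # Weight: no explicit layout or device, so it remains row-major on host.
    tt_weight = ttnn.from_torch(torch_weight, dtype=ttnn.bfloat16)
    return tt_input, tt_weight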
Now we seem to be missing info on whether a tensor is on host or on device?
"system_memory" means host tensor for them as far as I understand
-> As of now, in forge the input and output buffers are in DRAM. I have added the buffer type as DRAM or L1 for them.
-> For the weight tensor, they are using the default setting, which is what we use now as well (the weight buffer's buffer type is ignored). If I understand correctly, if we do not specify any memory_config, it should be host-only. Do let me know if I am missing something. A sketch of these placements follows below.
I have also asked @AleksKnezevic and @LPanosTT for the steps they use to run a test case. Once I have the steps, I will try to run one test case with debug logs enabled and see if I am missing anything.
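For reference, a hedged sketch of those buffer placements with ttnn (illustrative; not the sweep's exact code): device tensors get an explicit DRAM or L1 memory config, while a tensor created without a device stays in host (system) memory.

import torch
import ttnn


def place_tensor(device, torch_tensor, where):
    if where == "dram":
        return ttnn.from_torch(
            torch_tensor,
            dtype=ttnn.bfloat16,
            layout=ttnn.TILE_LAYOUT,
            device=device,
            memory_config=ttnn.DRAM_MEMORY_CONFIG,
        )
    if where == "l1":
        return ttnn.from_torch(
            torch_tensor,
            dtype=ttnn.bfloat16,
            layout=ttnn.TILE_LAYOUT,
            device=device,
            memory_config=ttnn.L1_MEMORY_CONFIG,
        )
    # "system_memory": no device or memory_config given, so the tensor stays on host.
    return ttnn.from_torch(torch_tensor, dtype=ttnn.bfloat16)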
Yeah, for now it seems that all the weights are on host, but that will probably change. It's fine to keep it as is for now.
Force-pushed from 592ac8a to 957e890
New traces are huge (8k lines); maybe we should put them in a separate file.
Force-pushed from 957e890 to fd041a2
Done.
Force-pushed from fd041a2 to 7e44e74
Signed-off-by: Nilaykumar Patel <[email protected]>
Force-pushed from 93bacbb to e1f8727
Seems like a big change; just test the generation on these to check for any mistakes. Thanks, approved.
Ticket
#16177
Problem description
tt-forge test cases are missing from the conv2d sweep suite.
What's changed
Add a sweep suite for tt-forge.
Checklist