
Delete convd_host_weights and update all tests using conv2d #16264

Open
wants to merge 8 commits into main
Conversation

shwetankTT
Contributor

@shwetankTT shwetankTT commented Dec 23, 2024

Ticket

#14179

Problem description

Prepare conv2d_weight and bias in various models.

What's changed

Use the prepare_conv2d_weight and bias preparation functions in the conv2d API.
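A minimal sketch of the intended call pattern, using stand-in stubs rather than the real ttnn API (`FakeTensor`, the stub bodies, and the example kwargs are illustrative assumptions): weights are prepared on the host once, moved to device, and only then passed to conv2d.

```python
# Stand-in stubs for ttnn objects -- NOT the real ttnn implementation.
class FakeTensor:
    """Illustrative tensor stub; tracks whether it lives on device."""
    def __init__(self, data, on_device=False):
        self.data = data
        self.on_device = on_device

def is_tensor_storage_on_device(t):
    # Stub for ttnn.is_tensor_storage_on_device.
    return t.on_device

def prepare_conv_weights(weight, **conv_kwargs):
    # Stub for ttnn.prepare_conv_weights: the real call reshapes/tilizes
    # the host weight for the given conv configuration.
    return FakeTensor(weight.data, on_device=True)

def conv2d(x, weight, **conv_kwargs):
    # With host weights removed, conv2d expects pre-prepared device weights.
    assert weight.on_device, "conv2d now expects prepared device weights"
    return x  # placeholder for the actual convolution output

# Caller prepares weights only while they are still host-resident:
weight = FakeTensor([[1, 2], [3, 4]])
conv_kwargs = {"kernel_size": (3, 3), "batch_size": 1}
if not is_tensor_storage_on_device(weight):
    weight = prepare_conv_weights(weight, **conv_kwargs)
out = conv2d(FakeTensor([0]), weight, **conv_kwargs)
```

The guard mirrors the `is_tensor_storage_on_device` checks visible in the diff below, so preparation runs at most once per weight tensor.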

Device perf regressions
Nightly model and ttnn tests
Model perf tests

Checklist

  • Post commit CI passes
  • Blackhole Post commit (if applicable)
  • Model regression CI testing passes (if applicable)
  • Device performance regression CI testing passes (if applicable)
  • (For models and ops writers) Full new models tests passes
  • New/Existing tests provide coverage for changes

@shwetankTT shwetankTT force-pushed the shwetankTT/conv_weight branch from 4b201bc to cadb3dd Compare December 23, 2024 09:15
@shwetankTT shwetankTT changed the title Shwetank tt/conv weight Delete convd_host_weights and update all tests using conv2d #14179 Dec 23, 2024
@shwetankTT shwetankTT changed the title Delete convd_host_weights and update all tests using conv2d #14179 Delete convd_host_weights and update all tests using conv2d Dec 23, 2024
@shwetankTT shwetankTT marked this pull request as ready for review December 23, 2024 15:39
"batch_size": batch_size,
"input_height": input_tensor.shape[1],
"input_width": input_tensor.shape[2],
"kernel_size": (3, 3),
Contributor

It seems like most args are redundant between prepare_conv_weights and conv2d. I don't think we should hardcode params twice, like kernel_size (3, 3); all of these should be defined in one place.

It's also odd that for prepare_conv_weights we use this kwargs dict, while for conv2d we use named arguments; I would align those.

Also, there is no need to define conv_kwargs when it's not going to be used; it should be under the if where ttnn.prepare_conv_weights is.

This comment applies to all models changed.
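The reviewer's deduplication suggestion could be sketched like this (the helper name `make_conv_kwargs` and the example shapes are hypothetical): build the conv parameters once and splat the same dict into both calls, so literals like `kernel_size=(3, 3)` cannot drift between the two call sites.

```python
# Hypothetical helper: define conv parameters in exactly one place.
def make_conv_kwargs(input_shape, batch_size, kernel_size, conv_config):
    return {
        "batch_size": batch_size,
        "input_height": input_shape[1],
        "input_width": input_shape[2],
        "kernel_size": kernel_size,
        "conv_config": conv_config,
    }

conv_kwargs = make_conv_kwargs(
    input_shape=(1, 32, 32, 16),  # example NHWC shape, illustrative only
    batch_size=1,
    kernel_size=(3, 3),
    conv_config="example_cfg",
)

# Both calls would then share the same dict, e.g.:
#   ttnn.prepare_conv_weights(weight_tensor=w, **conv_kwargs)
#   ttnn.conv2d(input_tensor=x, weight_tensor=w, **conv_kwargs)
```

Splatting one dict into both calls also keeps the kwargs definition next to the `if` block where it is actually consumed, per the comment above.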

"conv_config": conv_config,
}

if True or not ttnn.is_tensor_storage_on_device(tt_weight):
Contributor

Should the "True or" part be removed?

**conv_kwargs,
)
self.bias = (
ttnn.prepare_conv_bias(
Contributor

Preparing weights should be performed in the init part of the model, not in the inference part.
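The point could be sketched as follows, with stand-in stubs (`prepare`, `conv2d`, and `ConvBlock` are illustrative, not the model's real code): preparation is a one-time cost, so it belongs at construction time, keeping the per-call inference path free of host-to-device weight conversion.

```python
def prepare(t):
    # Stub for ttnn.prepare_conv_weights / ttnn.prepare_conv_bias.
    return ("prepared", t)

def conv2d(x, weight, bias):
    # Stub conv: just records what it was called with.
    return (x, weight, bias)

class ConvBlock:
    def __init__(self, host_weight, host_bias):
        # One-time preparation at model construction...
        self.weight = prepare(host_weight)
        self.bias = prepare(host_bias)

    def __call__(self, x):
        # ...so the inference hot path only runs the conv.
        return conv2d(x, self.weight, self.bias)

block = ConvBlock("W", "B")
out = block("input")
```

Preparing in `__call__` instead would redo the same host-side work on every forward pass, which is what the reviewer is flagging.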

Contributor

@pavlejosipovic pavlejosipovic left a comment

It doesn't seem like all conv models are covered here?

}

if not ttnn.is_tensor_storage_on_device(self.weight):
breakpoint
Contributor

remove?

2 participants