Refactor Superset model benchmarking tools to use Pydantic classes and save one json #16790
Conversation
…d save one json WIP Signed-off-by: Salar Hosseini <[email protected]>
…l json and add data uploading for t3k llama tests Signed-off-by: Salar Hosseini <[email protected]>
Signed-off-by: Salar Hosseini <[email protected]>
Produce data for external analysis test run: https://github.com/tenstorrent/tt-metal/actions/runs/12818671201
Looks good from the llama3 demo side.
I'll be updating my local changes to use the new Pydantic classes as well 👍
I propose a fairly large change in benchmarking_utils.py.
The rest of the code looks good.
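To illustrate the refactor the PR title describes (Pydantic classes that serialize to a single JSON file), here is a minimal sketch. It assumes Pydantic v2; the class and field names (`BenchmarkMeasurement`, `BenchmarkRun`, `step_name`, etc.) are hypothetical stand-ins, not the actual schema in benchmarking_utils.py.

```python
from pydantic import BaseModel

# Hypothetical schema: one measurement within a benchmark run.
class BenchmarkMeasurement(BaseModel):
    step_name: str
    name: str
    value: float

# Hypothetical top-level model: the whole run serializes to one JSON blob.
class BenchmarkRun(BaseModel):
    run_type: str
    ml_model_name: str
    measurements: list[BenchmarkMeasurement] = []

run = BenchmarkRun(
    run_type="demo",
    ml_model_name="llama3",
    measurements=[
        BenchmarkMeasurement(step_name="decode", name="tokens_per_s", value=17.5)
    ],
)

# Validation happens at construction time; serialization is one call,
# so the whole run can be saved as a single JSON document.
json_str = run.model_dump_json(indent=2)

# Reloading validates the data against the same schema.
reloaded = BenchmarkRun.model_validate_json(json_str)
assert reloaded == run
```

The advantage over ad-hoc dicts is that the schema is declared once and every load/save path validates against it.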
…ading them Signed-off-by: Salar Hosseini <[email protected]>
Signed-off-by: Salar Hosseini <[email protected]>
Ticket
#15435
Problem description
What's changed
Produce data for external analysis
workflow to test completing a partial benchmark json

Checklist
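The "completing a partial benchmark json" step above can be sketched as follows, again assuming Pydantic v2 and hypothetical field names: fields unknown at run start are declared `Optional`, and the saved partial JSON is later reloaded, completed, and re-validated.

```python
from typing import Optional
from pydantic import BaseModel

# Hypothetical schema for a run whose end timestamp is not known yet.
class PartialBenchmark(BaseModel):
    run_start_ts: str
    run_end_ts: Optional[str] = None  # filled in when the run completes

# A partial JSON (e.g. written at run start) still validates,
# because the missing field has a None default.
partial = PartialBenchmark.model_validate_json(
    '{"run_start_ts": "2025-01-16T10:00:00"}'
)
assert partial.run_end_ts is None

# Completing the record produces a new, fully populated model.
completed = partial.model_copy(update={"run_end_ts": "2025-01-16T10:05:00"})
assert completed.run_end_ts == "2025-01-16T10:05:00"
```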