Updated Resnet50 model for Blackhole, with Batch = 32 #17985

Open
mywoodstock wants to merge 14 commits into main from asarje/rn50-bh-largebatch-20250218

Conversation

@mywoodstock (Contributor) commented Feb 19, 2025

Ticket

#17393
#18341

Problem description

This PR enables larger batch sizes (20 and 32) for Resnet50 on Blackhole.

What's changed

Updates to the model itself to support batch size 32.
Updates to the fold op to support non-rectangular core grids (see the sketch below).
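For context, a minimal sketch of how a non-rectangular grid can be expressed in ttnn as a union of rectangular core ranges; the specific coordinates are illustrative and not taken from this PR:

    import ttnn

    # A non-rectangular grid built from two rectangular pieces:
    # a full 8x7 block plus a partial row of 4 cores beneath it.
    # (Coordinates are hypothetical, for illustration only.)
    core_range_set = ttnn.CoreRangeSet(
        {
            ttnn.CoreRange(ttnn.CoreCoord(0, 0), ttnn.CoreCoord(7, 6)),
            ttnn.CoreRange(ttnn.CoreCoord(0, 7), ttnn.CoreCoord(3, 7)),
        }
    )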

Checklist

@mywoodstock force-pushed the asarje/rn50-bh-largebatch-20250218 branch 2 times, most recently from 72712db to 90fb639 on February 26, 2025 at 23:37
@mywoodstock changed the title from "[DO NOT MERGE] Asarje/rn50 bh largebatch 20250218" to "Updated Resnet50 model for Blackhole, with Batch = 32" on Feb 26, 2025
@mywoodstock marked this pull request as ready for review on February 26, 2025 at 23:45
@@ -202,7 +202,7 @@ void MAIN {
 #ifdef ARCH_BLACKHOLE
     // FIXME: This is a temporary workaround to avoid hangs on blackhole.
     // https://github.com/tenstorrent/tt-metal/issues/16439
-    for (uint32_t i = 0; i < 10; i++) {
+    for (uint32_t i = 0; i < 100; i++) {
Contributor:
Do we really need to add an order of magnitude to the delay?

Someone said this was a 30% slowdown that is now a 300% slowdown.

Contributor (Author):

Oops, this snuck in; I didn't mean to push it.

@@ -914,6 +966,19 @@ def run(self, input_tensor, device, ops_parallel_config, conv_op_cache={}) -> tt
             ),
         }
     )
+    # ## 128
+    # core_range_set = ttnn.CoreRangeSet(
Contributor:
Please remove the commented-out code here and in other parts of the PR.

and layer_module
and (layer_module == "layer1_module2" or layer_module == "layer1_module3")
):
conv_kwargs_2["conv_config"].act_block_h_override = 0
Contributor:

At this point it may be better to have a function that handles all three cases, and then just do something like:

if f(batch_size, layer_module):
    conv_kwargs_2["conv_config"].act_block_h_override = 0
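A minimal sketch of what such a helper might look like; the function name and the exact conditions are assumptions for illustration, not taken from this PR:

    # Hypothetical helper consolidating the special cases under one predicate.
    # The batch-size check and layer names below are illustrative guesses.
    def needs_act_block_h_reset(batch_size, layer_module):
        return (
            batch_size == 32
            and layer_module is not None
            and layer_module in ("layer1_module2", "layer1_module3")
        )

    if needs_act_block_h_reset(batch_size, layer_module):
        conv_kwargs_2["conv_config"].act_block_h_override = 0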

@mywoodstock force-pushed the asarje/rn50-bh-largebatch-20250218 branch from af8258a to cad363d on February 27, 2025 at 17:19
@mywoodstock force-pushed the asarje/rn50-bh-largebatch-20250218 branch from 0648d6a to 752a333 on February 27, 2025 at 17:35