
Commit

Update auto_fp8/quantize.py
comaniac authored May 23, 2024
1 parent 249902a commit 2c70d7a
Showing 1 changed file with 1 addition and 1 deletion: auto_fp8/quantize.py
@@ -81,7 +81,7 @@ def fp8_gemm(A, A_scale, B, B_scale, bias, out_dtype):
             bias=bias,
         )
         if need_reshape:
-            output = output.reshape((batch_size, *output.shape))
+            output = output.reshape((batch_size, output.shape[0] // batch_size, output.shape[1]))
     else:
         output = torch.nn.functional.linear(
             A.to(out_dtype) * A_scale,
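
The change above fixes the un-flattening of the GEMM output. `fp8_gemm` collapses a 3-D activation to 2-D before the matmul, so the 2-D result must be split back into three dimensions. The old `reshape((batch_size, *output.shape))` prepended `batch_size` to the *already 2-D* output shape, requesting `batch_size` times more elements than exist. A minimal sketch of the failure and the fix, using hypothetical shapes and a plain matmul in place of the scaled FP8 GEMM:

```python
import torch

# Hypothetical shapes: a (batch, seq_len, hidden) activation is flattened
# to 2-D for the GEMM, then must be restored to 3-D afterward.
batch_size, seq_len, in_features, out_features = 4, 16, 32, 64

A = torch.randn(batch_size, seq_len, in_features)
W = torch.randn(out_features, in_features)

# Flatten for the 2-D matmul, mirroring the need_reshape path in fp8_gemm.
flat = A.reshape(-1, A.shape[-1])  # shape: (64, 32)
output = flat @ W.t()              # shape: (64, 64) -> 4096 elements

# Old code: (batch_size, *output.shape) asks for shape (4, 64, 64),
# i.e. 16384 elements, but only 4096 exist -> RuntimeError.
try:
    output.reshape((batch_size, *output.shape))
except RuntimeError as e:
    print("old reshape fails:", e)

# Fixed code: split the flattened batch*seq dimension back apart.
fixed = output.reshape((batch_size, output.shape[0] // batch_size, output.shape[1]))
print(fixed.shape)  # torch.Size([4, 16, 64])
```

Deriving `seq_len` as `output.shape[0] // batch_size` keeps the fix correct for any sequence length without threading the original shape through the call.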
