bf16 and fp16 are both 16-bit floating-point formats natively supported by Tensor Cores. While fp16 offers higher precision (10 mantissa bits vs. bf16's 7), bf16 has a wider exponent range that matches fp32's, which makes it useful in many contexts. Add implementation support for it.
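For context, a minimal C++ sketch of why bf16 is considered "compatible" with fp32: a bf16 value is just the top 16 bits of an fp32 value, so it keeps fp32's full 8-bit exponent while dropping mantissa precision. The helper names (`fp32_to_bf16`, `bf16_to_fp32`) are illustrative only, not part of this project's API, and NaN handling is omitted for brevity.

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// bf16 = sign(1) | exponent(8) | mantissa(7)  -- same exponent width as fp32
// fp16 = sign(1) | exponent(5) | mantissa(10) -- narrower range, more precision

// Convert fp32 -> bf16 by rounding the low 16 bits (round-to-nearest-even)
// and keeping the high 16 bits. NaN handling omitted for brevity.
static uint16_t fp32_to_bf16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    uint32_t rounding_bias = 0x7FFF + ((bits >> 16) & 1);
    return static_cast<uint16_t>((bits + rounding_bias) >> 16);
}

// Convert bf16 -> fp32 by zero-filling the low 16 mantissa bits;
// this is exact (every bf16 value is representable in fp32).
static float bf16_to_fp32(uint16_t h) {
    uint32_t bits = static_cast<uint32_t>(h) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

int main() {
    float x = 3.14159265f;
    uint16_t b = fp32_to_bf16(x);
    printf("fp32 %.8f -> bf16 0x%04x -> fp32 %.8f\n", x, b, bf16_to_fp32(b));
    return 0;
}
```

The round trip loses mantissa precision but never overflows or underflows relative to fp32, which is the property that makes bf16 a drop-in reduced-precision format in pipelines that otherwise accumulate in fp32.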