Add support for bf16. #19

Open
lcy-seso opened this issue Dec 17, 2024 · 0 comments
Labels
enhancement (New feature or request)

Comments

@lcy-seso
Contributor

bf16 and fp16 are both 16-bit floating-point formats natively supported by Tensor Cores. fp16 offers more significand precision (10 explicit mantissa bits vs. bf16's 7), while bf16 keeps fp32's 8-bit exponent, so it has the same dynamic range as fp32 and converts to and from fp32 cheaply (just truncate or extend the low mantissa bits). That range and fp32 compatibility make bf16 the better choice in many training and inference contexts. Add implementation support for it.
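Not part of the issue text, but as a minimal sketch of the precision/range trade-off described above: the snippet below round-trips a few fp32 values through CUDA's `__nv_bfloat16` type from `cuda_bf16.h` (available since CUDA 11). The kernel name and file layout are illustrative only, not code from this repo.

```cuda
#include <cstdio>
#include <cuda_bf16.h>

// Round-trip fp32 values through bf16 to show the trade-off:
// bf16 keeps fp32's 8-bit exponent (large magnitudes survive),
// but stores only 7 mantissa bits (low-order bits are rounded away).
__global__ void roundtrip(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        __nv_bfloat16 b = __float2bfloat16(in[i]);  // fp32 -> bf16 (rounds)
        out[i] = __bfloat162float(b);               // bf16 -> fp32 (exact)
    }
}

int main() {
    const int n = 3;
    float h_in[n] = {
        1.0009765625f,  // 1 + 2^-10: below bf16's ulp at 1.0, rounds to 1.0
        65504.0f,       // fp16 max: within bf16 range (rounds to 65536)
        1e30f           // overflows fp16 to inf, but representable in bf16
    };
    float h_out[n];
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);
    roundtrip<<<1, n>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("fp32 %.10g -> bf16 -> fp32 %.10g\n", h_in[i], h_out[i]);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Because bf16 shares fp32's exponent width, the last value survives where fp16 would overflow, while the first value shows the coarser significand rounding.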

@haruhi55 added the enhancement (New feature or request) label on Dec 18, 2024