
Skip test when atomic operations are not supported on GPU. #7117

Merged
merged 8 commits into from
Feb 21, 2024

Conversation

drivanov
Contributor

Description

The following tests

FAILED tests/python/common/ops/test_ops.py::test_gather_mm_idx_b[dtype0-0.01-256] - RuntimeError: CUDA error: unspecified launch failure
FAILED tests/python/common/ops/test_ops.py::test_gather_mm_idx_b[dtype0-0.01-64] - RuntimeError: CUDA error: unspecified launch failure
FAILED tests/python/common/ops/test_ops.py::test_gather_mm_idx_b[dtype0-0.01-16] - RuntimeError: CUDA error: unspecified launch failure
FAILED tests/python/common/ops/test_ops.py::test_gather_mm_idx_b[dtype0-0.01-8] - RuntimeError: CUDA error: unspecified launch failure
FAILED tests/python/common/ops/test_ops.py::test_gather_mm_idx_b[dtype0-0.01-1] - RuntimeError: CUDA error: unspecified launch failure

are failing for dtype=torch.bfloat16 with the error message:

Atomic operations are not supported for bfloat16 (BF16) on GPUs with compute capability less than 8.0.
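The error message pins bf16 atomic support to compute capability 8.0. A minimal sketch of such a predicate (the helper name is hypothetical, not part of DGL; the 8.0 threshold is taken from the error message above):

```python
def supports_bf16_atomics(major: int, minor: int) -> bool:
    """Return True if a GPU with compute capability (major, minor)
    supports bf16 atomic operations natively (sm80 and newer)."""
    return (major, minor) >= (8, 0)
```

In a real test, the `(major, minor)` pair would come from `torch.cuda.get_device_capability()`.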

Checklist

Please feel free to remove inapplicable items for your PR.

  • I've leveraged the tools to beautify the Python and C++ code.
  • The PR is complete and small. Read the Google eng practice (a CL equals a PR) to understand more about small PRs. In DGL, we consider PRs with fewer than 200 lines of core code change small (examples, tests, and documentation may be exempted).
  • To the best of my knowledge, examples are either not affected by this change or have been fixed to be compatible with it.

Changes

@dgl-bot
Collaborator

dgl-bot commented Feb 13, 2024

Not authorized to trigger CI. Please ask core developer to help trigger via issuing comment:

  • @dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 13, 2024

Commit ID: b174ba0

Build ID: 1

Status: ❌ CI test failed in Stage [Authentication].

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Feb 13, 2024

Not authorized to trigger CI. Please ask core developer to help trigger via issuing comment:

  • @dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 13, 2024

Commit ID: 9f29957

Build ID: 2

Status: ❌ CI test failed in Stage [Authentication].

Report path: link

Full logs path: link

@chang-l
Collaborator

chang-l commented Feb 13, 2024

Btw, let me provide a bit more background here:

Recently, PyTorch introduced this commit: pytorch/pytorch@fc5fda1, which changes the behavior of torch.cuda.is_bf16_supported(). Previously, torch.cuda.is_bf16_supported() would return False for any GPU older than the A100 (<sm80). But after this commit, it can return True even on V100 GPUs (sm70). As a result, it can bypass the check here (and in many other places in the tests) even on a V100 (sm70) GPU that does NOT support bf16 operations:

if (
    F._default_context_str == "gpu"
    and dtype == torch.bfloat16
    and not torch.cuda.is_bf16_supported()
):
    pytest.skip("BF16 is not supported.")
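One way to make this check robust against the PyTorch change is to key on the compute capability directly rather than on torch.cuda.is_bf16_supported(). A sketch of such a predicate (the helper name is illustrative; the sm80 cutoff follows the discussion in this thread):

```python
def should_skip_bf16(context: str, dtype_name: str, capability: tuple) -> bool:
    """Hypothetical replacement for the check above: skip bf16 GPU tests
    on devices older than sm80, regardless of what
    torch.cuda.is_bf16_supported() reports."""
    return (
        context == "gpu"
        and dtype_name == "bfloat16"
        and capability < (8, 0)
    )
```

In the test itself, `capability` would come from `torch.cuda.get_device_capability()` and `context` from `F._default_context_str`, mirroring the original condition.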

Sadly, even though we know that the A100 is the first-generation GPU that supports bf16 operations [1] [2], we can no longer use this API (torch.cuda.is_bf16_supported()) to query for bf16 operation support. cc. @TristonC @nv-dlasalle @frozenbugs

As of now, this commit has not been propagated to any PyTorch release branch yet, so it can only be reproduced with a nightly PyTorch build.

@nv-dlasalle
Collaborator

@chang-l To clarify the issue: because DGL implements some operations only via atomics, we cannot support these GPUs, but PyTorch has non-atomic versions of its operators and thus can support bf16 operations on them?

@chang-l
Collaborator

chang-l commented Feb 13, 2024

@nv-dlasalle
After checking cuda_bf16.h (12.3) and the doc, it seems that starting from CUDA 12.2, 'emulated' support for many bf16 operations, including atomicAdd, has been added for older devices (sm70). I think, technically, all sm70-sm80 devices should support bf16 operations via the 'emulation path' on the driver side (with CUDA 12.2+).
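Under that reading, an availability check would have to consider both the compute capability and the CUDA version. A hedged sketch (the helper name and the exact thresholds are taken from this comment, not from DGL's code):

```python
def bf16_atomic_add_available(capability: tuple, cuda_version: str) -> bool:
    """Illustrative check: bf16 atomicAdd is native on sm80+, and
    (per this thread) emulated on sm70+ when building with CUDA >= 12.2."""
    major, minor = (int(x) for x in cuda_version.split(".")[:2])
    if capability >= (8, 0):
        return True  # native bf16 atomics
    # emulation path for older devices, available from CUDA 12.2
    return capability >= (7, 0) and (major, minor) >= (12, 2)
```

Here `cuda_version` stands in for something like `torch.version.cuda`; whether the emulation path is actually exercised depends on how DGL's kernels are compiled, so this is only a sketch of the condition being discussed.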

Should we then update https://github.com/dmlc/dgl/blob/master/src/array/cuda/atomic.cuh to adjust the assertions?
cc. @yaox12

@frozenbugs
Collaborator

@dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 14, 2024

Commit ID: 9f29957

Build ID: 3

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Feb 14, 2024

Not authorized to trigger CI. Please ask core developer to help trigger via issuing comment:

  • @dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 14, 2024

Commit ID: e9c74fa

Build ID: 4

Status: ❌ CI test failed in Stage [Authentication].

Report path: link

Full logs path: link

@Rhett-Ying
Collaborator

@dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 19, 2024

Commit ID: 1fb44b6bab8bbc5e8b3d8d2a5b8ec64b2eaa5843

Build ID: 5

Status: ❌ CI test failed in Stage [C++ CPU (Win64)].

Report path: link

Full logs path: link

@chang-l
Collaborator

chang-l commented Feb 19, 2024

I am okay with the current fix in this PR.

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Not authorized to trigger CI. Please ask core developer to help trigger via issuing comment:

  • @dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Commit ID: 235afc1658ea2db6b5bad4614398a3f5609c5369

Build ID: 6

Status: ❌ CI test failed in Stage [Authentication].

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Not authorized to trigger CI. Please ask core developer to help trigger via issuing comment:

  • @dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Commit ID: 0135669662777e978af0966ae9453cc68cd0d950

Build ID: 7

Status: ❌ CI test failed in Stage [Authentication].

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Not authorized to trigger CI. Please ask core developer to help trigger via issuing comment:

  • @dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Commit ID: 939cd027e137aa0d952b00ef549611e92221f422

Build ID: 8

Status: ❌ CI test failed in Stage [Authentication].

Report path: link

Full logs path: link

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Not authorized to trigger CI. Please ask core developer to help trigger via issuing comment:

  • @dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Commit ID: 8f569ec

Build ID: 9

Status: ❌ CI test failed in Stage [Authentication].

Report path: link

Full logs path: link

@chang-l
Collaborator

chang-l commented Feb 20, 2024

@dgl-bot

@dgl-bot
Collaborator

dgl-bot commented Feb 20, 2024

Commit ID: 8f569ec

Build ID: 10

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@frozenbugs frozenbugs merged commit 364cb71 into dmlc:master Feb 21, 2024
2 checks passed
@drivanov drivanov deleted the test_ops branch February 21, 2024 17:44