
make pipelines tests device-agnostic (part1) #9399

Open · faaany wants to merge 50 commits into main

Conversation

@faaany (Contributor) commented Sep 9, 2024

What does this PR do?

This PR makes CUDA-only pipeline tests device-agnostic, so that they can run on accelerators other than NVIDIA GPUs (for example, Intel XPU). Below is some evidence of the updated tests passing:

PASSED tests/pipelines/animatediff/test_animatediff.py::AnimateDiffPipelineFastTests::test_to_device
PASSED tests/pipelines/amused/test_amused.py::AmusedPipelineFastTests::test_model_cpu_offload_forward_pass
PASSED tests/pipelines/amused/test_amused.py::AmusedPipelineFastTests::test_cpu_offload_forward_pass_twice
PASSED tests/pipelines/amused/test_amused_inpaint.py::AmusedInpaintPipelineFastTests::test_to_device

@yiyixuxu
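
For context, the device-agnostic pattern behind changes like these typically replaces hard-coded "cuda" checks with the generic `torch_device` and backend-dispatch helpers. The sketch below is the editor's illustration, not this PR's diff; it assumes the `require_torch_accelerator` and `backend_empty_cache` helpers from `diffusers.utils.testing_utils`, and `ExamplePipelineFastTests` is a hypothetical class.

```python
# A minimal sketch of the device-agnostic test pattern (illustration only).
import unittest

from diffusers.utils.testing_utils import (
    backend_empty_cache,        # dispatches to the active backend's empty_cache()
    require_torch_accelerator,  # skips unless some accelerator (CUDA, XPU, ...) is present
    torch_device,               # resolved device string: "cuda", "xpu", "mps", or "cpu"
)


@require_torch_accelerator
class ExamplePipelineFastTests(unittest.TestCase):
    def tearDown(self):
        # Clear whichever backend is active instead of calling torch.cuda directly.
        backend_empty_cache(torch_device)

    def test_to_device(self):
        # Move the pipeline to `torch_device` rather than a literal "cuda".
        ...
```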

@faaany faaany marked this pull request as ready for review September 9, 2024 22:55
@sayakpaul (Member)

Could you provide some details about the machine you used to run the tests changed in this PR?

@faaany (Contributor, Author) commented Sep 10, 2024

> Could you provide some details about the machine you used to run the tests changed in this PR?

Yes, I am running on an Intel(R) Data Center GPU Max 1550.

@faaany (Contributor, Author) commented Sep 10, 2024

> Could you provide some details about the machine you used to run the tests changed in this PR?

`xpu` is the common device name for Intel GPUs.
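
For the curious: recent PyTorch builds (natively, or via `intel_extension_for_pytorch`) expose Intel GPUs through a `torch.xpu` namespace that mirrors `torch.cuda`. A minimal device-selection sketch under that assumption:

```python
import torch

# Pick the first available accelerator; assumes a PyTorch build with XPU
# support (torch.xpu mirrors the torch.cuda API for Intel GPUs).
if hasattr(torch, "xpu") and torch.xpu.is_available():
    device = "xpu"
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

print(f"Running on: {device}")
```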

@faaany (Contributor, Author) commented Sep 12, 2024

Hi @sayakpaul, any concerns about this PR?

@faaany (Contributor, Author) commented Sep 13, 2024

Could you please retrigger the CI? Thanks a lot!

@faaany (Contributor, Author) commented Sep 14, 2024

There are 1705 CUDA-only unit tests. Besides this PR, I have more PRs coming that relate to device-agnostic tests, e.g. #9401 and #9400. Could you please take a look at this one so I can proceed with the rest? Thanks a lot!

@faaany faaany changed the title make cuda-only tests device-agnostic make pipelines tests device-agnostic Sep 14, 2024
@faaany faaany changed the title make pipelines tests device-agnostic make pipelines tests device-agnostic (part1) Sep 14, 2024
@faaany (Contributor, Author) commented Sep 18, 2024

Hi @yiyixuxu, could you let me know your thoughts on this PR?

@faaany (Contributor, Author) commented Sep 20, 2024

Hi folks, it has been two weeks already; any feedback? I am waiting to enable more cases on XPU. If this is not the right approach, please let me know as well. Thanks a lot!

@sayakpaul @yiyixuxu

@sayakpaul (Member)

Please be a little more respectful towards the maintainers' time. It will be reviewed soon.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@faaany (Contributor, Author) commented Sep 20, 2024

> Please be a little more respectful towards the maintainers' time. It will be reviewed soon.

Oh, if my phrasing sounded impolite, I am sorry for that, and thanks for letting me know!

No hurry; I am just a bit worried that I might not be on the right track. Thanks for your understanding.

@yao-matrix

> Please be a little more respectful towards the maintainers' time. It will be reviewed soon.

@sayakpaul, could you please review this? This PR and the PRs that follow are part of an effort to integrate Intel GPUs into the Hugging Face ecosystem and to make the CIs run as expected. Thanks.

@sayakpaul (Member) left a comment


Thanks!

@yiyixuxu (Collaborator) left a comment


I left some comments!

@@ -226,7 +227,6 @@ def test_save_load_float16(self):
         max_diff = np.abs(output - output_loaded).max()
         self.assertLess(max_diff, 2e-2, "The output of the fp16 pipeline changed after saving and loading.")
 
-    @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
@yiyixuxu (Collaborator) commented on this diff:

This should be updated instead of removed, no?
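
(One way to read this suggestion, as an editor's sketch rather than code from the PR: update the condition so the test still skips on CPU but runs on any accelerator.)

```python
# Editor's illustration of the reviewer's suggestion, not the PR's code:
# skip float16 on CPU only, so the test still runs on CUDA, XPU, MPS, etc.
@unittest.skipIf(torch_device == "cpu", reason="float16 requires an accelerator")
def test_save_load_float16(self):
    ...
```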

@@ -247,8 +247,8 @@ def test_float16_inference(self):
         self.assertLess(max_diff, 1.3e-2, "The outputs of the fp16 and fp32 pipelines are too different.")
 
     @unittest.skipIf(
-        torch_device != "cuda" or not is_accelerate_available() or is_accelerate_version("<", "0.14.0"),
+        not is_accelerate_available() or is_accelerate_version("<", "0.14.0"),
         reason="CPU offload is only available with CUDA and `accelerate v0.14.0` or higher",
@yiyixuxu (Collaborator) commented on this diff:

This should still skip on CPU, no?
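
(Editor's sketch of the fix being requested here, keeping a CPU skip while dropping the CUDA-only condition; not code from the PR.)

```python
# Editor's illustration: still skip on CPU, but accept any accelerator backend.
@unittest.skipIf(
    torch_device == "cpu"
    or not is_accelerate_available()
    or is_accelerate_version("<", "0.14.0"),
    reason="CPU offload requires an accelerator and `accelerate v0.14.0` or higher",
)
```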
