make pipelines tests device-agnostic (part1) #9399
base: main
Conversation
Could you provide some details about the machine you used to run the tests changed in this PR?
Yes, I am running on an Intel(R) Data Center GPU Max 1550.
xpu is the common device name for Intel GPUs.
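To make the conversation concrete, here is a minimal sketch of the kind of generic device detection this PR relies on. It assumes PyTorch; the XPU branch only exists in builds with Intel GPU support, so every check is guarded and the snippet falls back to "cpu" so it stays runnable anywhere.

```python
# Sketch of device-agnostic accelerator detection (illustrative, not the
# exact helper used in the diffusers test suite).
try:
    import torch

    if torch.cuda.is_available():
        torch_device = "cuda"  # NVIDIA GPU
    elif getattr(torch, "xpu", None) is not None and torch.xpu.is_available():
        torch_device = "xpu"  # Intel GPU
    else:
        torch_device = "cpu"
except ImportError:
    # Keep the sketch runnable even without PyTorch installed.
    torch_device = "cpu"
```

Tests written against `torch_device` instead of a hard-coded "cuda" string then run unchanged on either accelerator.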
Hi @sayakpaul, any concerns about this PR?
Could you please help retrigger the CI? Thanks a lot!
Hi @yiyixuxu, could you let me know your thoughts on this PR?
Hi folks, it has been two weeks already; any feedback? I am waiting on this to enable more cases on XPU. If this is not the right approach, please let me know as well. Thanks a lot!
Please be a little more respectful towards the maintainers' time. It will be reviewed soon. |
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. |
Oh, if my expression sounded impolite, sorry for that. And thanks for letting me know! No hurry; I am just a bit worried that I might not be on the right track. Thanks for your understanding.
@sayakpaul, could you please review this? This PR and the following PRs are part of an effort to integrate Intel GPUs into the Hugging Face ecosystem and make the CIs run as expected. Thanks.
Thanks!
I left some comments!
@@ -226,7 +227,6 @@ def test_save_load_float16(self):
         max_diff = np.abs(output - output_loaded).max()
         self.assertLess(max_diff, 2e-2, "The output of the fp16 pipeline changed after saving and loading.")

-    @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
This should be updated instead of removed, no?
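A hypothetical sketch of what the reviewer suggests: update the guard to be device-agnostic rather than deleting it. `torch_device` and the `require_accelerator` helper below are illustrative stand-ins, not the actual names in the diffusers test suite.

```python
import unittest

torch_device = "xpu"  # in the real suite this is detected at import time

def require_accelerator(test_case):
    """Skip a test unless torch_device is backed by an accelerator."""
    return unittest.skipUnless(
        torch_device in ("cuda", "xpu"), "float16 requires an accelerator"
    )(test_case)

class Fp16SaveLoadTest(unittest.TestCase):
    @require_accelerator
    def test_save_load_float16(self):
        # Placeholder body; the real test saves and reloads an fp16 pipeline.
        self.assertNotEqual(torch_device, "cpu")
```

With this shape the test still skips on CPU-only machines but runs on both CUDA and XPU, which is the intent of the PR.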
@@ -247,8 +247,8 @@ def test_float16_inference(self):
         self.assertLess(max_diff, 1.3e-2, "The outputs of the fp16 and fp32 pipelines are too different.")

     @unittest.skipIf(
-        torch_device != "cuda" or not is_accelerate_available() or is_accelerate_version("<", "0.14.0"),
-        reason="CPU offload is only available with CUDA and `accelerate v0.14.0` or higher",
+        not is_accelerate_available() or is_accelerate_version("<", "0.14.0"),
should still skip for cpu, no?
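An illustrative sketch of the reviewer's point: even after the hard-coded CUDA check is dropped, the guard should still skip on plain CPU, since CPU offload needs an accelerator to offload from. The names below are stand-ins; the real suite uses `is_accelerate_available()` and `is_accelerate_version()` from diffusers' import utilities.

```python
import unittest

torch_device = "cpu"  # detected by the real test harness

def is_accelerate_available():
    return True  # stand-in for the real availability/version check

@unittest.skipIf(
    torch_device == "cpu" or not is_accelerate_available(),
    "CPU offload requires an accelerator and accelerate v0.14.0 or higher",
)
class CpuOffloadTest(unittest.TestCase):
    def test_cpu_offload(self):
        pass  # placeholder for the real offload assertions
```

On a CPU-only machine this test is skipped; on CUDA or XPU it runs, keeping the original intent of the guard while staying device-agnostic.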
What does this PR do?
Below is some evidence:
@yiyixuxu