
Add Apple MPS support #3

Open
SDonkelaarGDD opened this issue Dec 5, 2023 · 9 comments

Comments

@SDonkelaarGDD

Hey,

In the file autodistill_grounding_dino/helpers.py, I see that only cuda and cpu are supported. Is it possible to add another check for mps? It could look something like the following:

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

In this way, Apple ARM chips can be leveraged for inference. Let me know if this is desirable. If so, I am happy to make a PR.

@capjamesg
Member

Hello there! We'd love this change! Want to file a PR?

@SDonkelaarGDD
Author

@capjamesg thanks for the quick reply! I'll draft a PR today 👍

@yeldarby

yeldarby commented Dec 5, 2023

Could you also include a benchmark to see whether this actually improves speed? I've had mixed results with MPS (it can even degrade performance, depending on how many of the model's ops it supports).
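For a quick comparison, a minimal timing harness could look like the sketch below. It is pure Python; `run_inference` here is only a placeholder for the real per-device GroundingDINO call.

```python
import time
import statistics

def benchmark(fn, warmup=2, runs=10):
    """Time a callable: a few warmup calls first, then return the median of `runs` timings."""
    for _ in range(warmup):
        fn()
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Placeholder workload; swap in the real inference call, once per device under test.
def run_inference():
    sum(i * i for i in range(10_000))

print(f"median: {benchmark(run_inference):.6f}s")
```

Using the median rather than the mean keeps a single slow outlier run from skewing the comparison.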

@SDonkelaarGDD
Author

@yeldarby yes, I was just doing that.

I'm running the tests on an Apple M1 Max (14-inch, 2021) with 32 GB of RAM. CPU inference is actually faster than MPS.

When I'm training CV models with MPS I usually see a significant speedup, but that isn't the case here.

@yeldarby

yeldarby commented Dec 5, 2023

Yeah, GroundingDINO is probably a particularly perverse case, given that they wrote some of their own CUDA kernels that can't possibly be supported on MPS out of the box.

@SDonkelaarGDD
Author

I agree. In that case, I think it's better to keep using the CPU when running inference on ARM MacBooks.

@capjamesg
Member

If you are still interested in a PR, adding the results of your investigation to the README would be valuable for people working with this package!

@SDonkelaarGDD
Author

@capjamesg sorry for the late reply, work caught up with me.

I can add some metrics to the README this weekend. I was thinking of using GroundingDINO to extract bounding boxes of digits from the MNIST dataset; the MPS and CPU runtimes can then be compared directly.
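One way to structure that comparison is sketched below. This is only a sketch: `detect` is a stand-in for the actual GroundingDINO inference call, and the device strings mirror the snippet from the original comment.

```python
import time

def time_device(detect, images, device):
    """Run `detect` over all images on one device and return total wall-clock seconds."""
    start = time.perf_counter()
    for img in images:
        detect(img, device)
    return time.perf_counter() - start

# Stand-in detector; the real benchmark would run GroundingDINO on MNIST images here.
def detect(image, device):
    return [v * 2 for v in image]

images = [[i] * 64 for i in range(100)]
results = {d: time_device(detect, images, d) for d in ("cpu", "mps")}
for device, secs in results.items():
    print(f"{device}: {secs:.4f}s")
```

The resulting per-device totals could be pasted into the README as a small table.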

@capjamesg
Member

That would be excellent!
