
#7535 Update FloatTensor type hints to Tensor #7883

Merged · 7 commits into huggingface:main · May 10, 2024

Conversation

vanakema
Contributor

@vanakema vanakema commented May 7, 2024

What does this PR do?

Updates all references to torch.FloatTensor to torch.Tensor. FloatTensor is essentially deprecated: every tensor today is just a torch.Tensor, with the element type carried by its dtype, and the FloatTensor alias is kept only for backwards compatibility, as noted here.

This keeps type checkers from complaining when a user passes the canonical Tensor type.
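A minimal sketch of the issue (the `scale` helper is hypothetical, not diffusers code): a parameter annotated as `torch.FloatTensor` gets flagged by type checkers when callers pass an ordinary `torch.Tensor`, even though that is the actual runtime class of every tensor.

```python
import torch

# torch.FloatTensor is a legacy alias kept only for backward compatibility;
# every tensor constructed today is a plain torch.Tensor, with the element
# type carried by its dtype. Annotating parameters as torch.FloatTensor
# therefore makes type checkers flag perfectly valid torch.Tensor arguments.
def scale(sample: torch.Tensor, factor: float) -> torch.Tensor:
    """Hypothetical pipeline-style helper annotated with the canonical type."""
    return sample * factor

x = torch.ones(2, 2)                 # float32 by default
print(type(x) is torch.Tensor)       # True: the runtime class is Tensor
print(x.dtype)                       # torch.float32
print(scale(x, 2.0).sum().item())    # 8.0
```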

Fixes #7535

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@vanakema
Contributor Author

vanakema commented May 7, 2024

@DN6 Can you approve this for a pipeline run? Due to the number of files this touches, I tried to run the full suite on my laptop, but it was taking over an hour, whereas CI runs for other people's PRs finish in far less time (around 20 minutes).

@yiyixuxu yiyixuxu requested a review from stevhliu May 8, 2024 00:02
@yiyixuxu
Collaborator

yiyixuxu commented May 8, 2024

cc @standardAI here :)

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Collaborator

@yiyixuxu yiyixuxu left a comment


thank you so much for doing this!!

@yiyixuxu
Collaborator

yiyixuxu commented May 8, 2024

@vanakema
Contributor Author

vanakema commented May 8, 2024

Yep! On it. Thanks for the review!

@vanakema
Contributor Author

vanakema commented May 8, 2024

Do you think it makes sense to update the examples too? I just realized I missed those.

Member

@stevhliu stevhliu left a comment


Thanks so much for updating! 👏

What examples are you referring to?

@vanakema
Contributor Author

vanakema commented May 9, 2024

I really meant all of the examples that reference or use FloatTensor; I decided to update them.

I'm running the tests on my server right now instead of my laptop, but they're still going pretty slowly. Would one of you mind approving the workflow again?

@vanakema
Contributor Author

vanakema commented May 9, 2024

How long should the full suite of tests take to run? I started a full run last night on my home server and, as of this morning, it's still running. I'm curious what the typical dev flow is for situations like this for those of you at HF; it seems very strange for it to take this long. Maybe the 32 worker threads it spun up were too many and my box is getting bottlenecked somewhere? Would it run faster if I passed my GPU to it? On my Mac, the pipelines were never this slow (though they were only running 8 threads, since there were only 8 "cores"), but on my much more powerful server it's typical to see numbers like "159s/it".

@vanakema vanakema force-pushed the 7535-FloatTensor-to-Tensor branch from 01491e3 to 961ce4b Compare May 9, 2024 19:45
@vanakema
Contributor Author

vanakema commented May 10, 2024

An update on the test that failed: I believe the pipeline failure was a fluke, as the test passes consistently on my box. I think it just needs a retry.

Looking forward at other tests that have yet to run in the pipeline, I am consistently having issues with one of the determinism tests: tests/models/unets/test_models_unet_spatiotemporal.py::UNetSpatioTemporalConditionModelTests::test_set_attn_processor_for_determinism

I see that elsewhere the precision required for the determinism test was loosened from 10^-3 to 10^-2. When I apply this to the base_precision in the ModelTesterMixin in test_modeling_common.py, the determinism test that was regularly failing on my box now passes. I am curious to hear y'all's thoughts on this, and why you think this might be happening. Almost every change in this PR only touches type hints and documentation. I trawled through nearly every file's changes (I did skip a batch of the pipelines and schedulers, and all the deprecated schedulers), and I could not find any instance of actual object instantiation that would affect this. It's strange, though, because I don't understand how type hints would change the behavior of the determinism test.
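For context, a determinism check of this shape can be sketched as follows. This is a minimal illustration, not diffusers' actual code: `assert_deterministic` and its `base_precision` argument are stand-ins for the real check in ModelTesterMixin, and the drift value is invented to show why loosening the tolerance makes a marginal failure pass.

```python
import torch

# Minimal sketch of a determinism check: two forward-pass outputs are
# compared elementwise, and the largest absolute difference must fall
# below a tolerance. Names here are illustrative, not diffusers' API.
def assert_deterministic(out1: torch.Tensor, out2: torch.Tensor,
                         base_precision: float = 1e-3) -> float:
    max_diff = (out1 - out2).abs().max().item()
    assert max_diff < base_precision, f"outputs differ by {max_diff}"
    return max_diff

a = torch.zeros(4)
b = a + 5e-3  # a hypothetical 5e-3 numerical drift between two runs
# This drift fails at the stricter 1e-3 tolerance but passes at 1e-2:
print(assert_deterministic(a, b, base_precision=1e-2))  # ≈ 5e-3
```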

Admittedly, determinism is definitely a little out of my wheelhouse, so it's totally possible I missed something, and I would love to hear any ideas y'all may have. In the meantime, I'll continue investigating tomorrow in my free time. I'm totally down to hop on a call to talk about it if any of you at HF are open to it! But I know you're probably really busy, so I completely understand if that's not something of interest. I'll hop on the Discord, though.

Thanks in advance!

@tolgacangoz
Contributor

tolgacangoz commented May 10, 2024

Thanks for updating the type hints, Mark!
The failed pipeline test passes on my local PC too, and so does test_set_attn_processor_for_determinism. This is my setup:

❯ diffusers-cli env
An NVIDIA GPU may be present on this machine, but a CUDA-enabled jaxlib is not installed. Falling back to cpu.

Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

- 🤗 Diffusers version: 0.28.0.dev0
- Platform: Ubuntu 24.04 LTS - Linux-6.8.0-31-generic-x86_64-with-glibc2.39
- Running on a notebook?: No
- Running on Google Colab?: No
- Python version: 3.10.14
- PyTorch version (GPU?): 2.3.0+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.3 (cpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.26
- Huggingface_hub version: 0.23.0
- Transformers version: 4.40.2
- Accelerate version: 0.30.0
- PEFT version: 0.10.0
- Bitsandbytes version: 0.43.1
- Safetensors version: 0.4.3
- xFormers version: not installed
- Accelerator: NVIDIA GeForce GTX 1650, 4096 MiB VRAM
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

And, I used this command:

python -m pytest -n auto --dist=loadfile -s -v examples/text_to_image/test_text_to_image.py::TextToImage::test_text_to_image

@vanakema vanakema force-pushed the 7535-FloatTensor-to-Tensor branch from cb7dd2b to 78500e8 Compare May 10, 2024 16:28
@vanakema
Contributor Author

Oh awesome, thanks for testing that out, Tolga! OK, maybe it just needs a retry then? I'll rebase on main, and then can @yiyixuxu or @stevhliu approve the pipeline run?

@vanakema
Contributor Author

Pipeline passed, woohoo, let's go! Does someone with permissions mind merging?

@vanakema vanakema changed the title [WIP] #7535 Update FloatTensor type hints to Tensor #7535 Update FloatTensor type hints to Tensor May 10, 2024
@yiyixuxu yiyixuxu merged commit be4afa0 into huggingface:main May 10, 2024
15 checks passed
@yiyixuxu
Collaborator

Thank you!
And thank you to @standardAI for your help too :)

@vanakema
Contributor Author

Amazing, thank you all! Super stoked to have my first contribution merged in :)

@vanakema vanakema deleted the 7535-FloatTensor-to-Tensor branch May 10, 2024 20:07
XSE42 added a commit to XSE42/diffusers3d that referenced this pull request May 13, 2024
diffusers commit be4afa0
    #7535 Update FloatTensor type hints to Tensor huggingface/diffusers#7883

Successfully merging this pull request may close these issues.

FloatTensor type hint is incompatible with Tensor
5 participants