🐛 [Bug] compile methods kwarg_inputs does not accept key values of "Any" type, despite documentation stating so #3377
Comments
Please make sure to include everything the model takes in the forward function in the kwarg_inputs. If a kwarg input is a tensor, give it a tensor of the same shape. If a kwarg is a bool, you can try wrapping it in a tensor. You can also try MutableTorchTensorRTModule, which handles the sample input for you and supports extra functionality such as weight updates.
This includes an example of compiling the UNet of the Stable Diffusion pipeline.
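A sketch of the wrapped-bool form the reply below refers to (the kwarg name "flag" is illustrative, not from the original thread):

```python
import torch

# Instead of passing a plain Python bool:
#   kwarg_inputs = {"flag": True}
# wrap the value in a tensor so only tensor leaves appear:
kwarg_inputs = {"flag": torch.tensor(True)}
```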
@cehongwang thank you for guiding me to MutableTorchTensorRTModule. I will probably need to check that out. But apart from that, I am really sorry: I do not understand how your comment relates to the issue I opened. I did not pass torch.tensor(True) as kwarg_inputs but the boolean value True, which raises an exception even though, according to the docs, it should not. I gave a minimal reproducible example, which is the topic of this issue and which raises an exception. It would be really nice to stick to this topic here.
Use
@cehongwang The bug is that it does not work with a Python boolean as kwarg_inputs, even though according to the docs it should. Changing the reproducible example that triggers the bug is not the solution to fixing the bug :D Fixing the code is.
Sure. We will fix the documentation in our next release. You can use this to bypass the error for now :)
The problem is not a documentation issue. For models where the non-tensor inputs are constants, I can work around the issue by generating a wrapper class which passes the Python-typed inputs, something like this:
where self.forward_kwargs are the Python-typed constants and self.model is the original model I want to compile.
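A minimal sketch of such a wrapper (the class name is illustrative; self.forward_kwargs and self.model follow the description above):

```python
import torch

class KwargConstantWrapper(torch.nn.Module):
    """Bakes Python-typed kwargs in as constants so that only
    tensors need to cross the compilation boundary."""

    def __init__(self, model, forward_kwargs):
        super().__init__()
        self.model = model
        self.forward_kwargs = forward_kwargs

    def forward(self, *args):
        # Forward tensor args, splicing the constant kwargs back in.
        return self.model(*args, **self.forward_kwargs)
```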
When I try to compile this, I get the following error:
I can give you a reproducible example for this as well if you need it. I also had a quick look at MutableTorchTensorRTModule. Even if it works (I haven't tested it yet), it still seems not to be a solution, as the docs currently state that saving and loading models compiled with it is not possible for the Python version of TensorRT. I could use regular torch.compile if I didn't want to save the model. Also, MutableTorchTensorRTModule compiles at first use rather than ahead of time, which is not a very nice API. And also ... I am sure not all Python types can be wrapped by tensors ;-)
Currently, we have limited support for some Python-typed objects due to the inflexibility of ahead-of-time compilation. Please give us a reproducible example and we can look into that.
Bug Description
Passing a boolean value inside a dict to the kwarg_inputs parameter of the torch_tensorrt.compile method results in an exception. It seems that apart from collection types (list, tuple, dict), at leaf level only torch.Tensor values are allowed. This contradicts the documentation at https://pytorch.org/TensorRT/py_api/torch_tensorrt.html?highlight=compile which states:
To Reproduce
Steps to reproduce the behavior:
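A hypothetical minimal reproduction matching the description (model, shapes, and the "flag" kwarg name are illustrative; torch_tensorrt, its arg_inputs/kwarg_inputs parameters per the linked docs, and a CUDA-capable device are assumed):

```python
import torch

class BoolKwargModel(torch.nn.Module):
    # Toy model whose forward takes a plain Python bool kwarg.
    def forward(self, x, flag=False):
        return x * 2 if flag else x

def reproduce():
    # Requires torch_tensorrt and a CUDA-capable device.
    import torch_tensorrt
    model = BoolKwargModel().eval().cuda()
    return torch_tensorrt.compile(
        model,
        arg_inputs=[torch.randn(1, 3).cuda()],
        kwarg_inputs={"flag": True},  # plain Python bool -> raises an exception
    )
```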
Expected behavior
The minimal example should compile fine. Values other than torch tensors in both inputs and kwarg_inputs should IMHO be accepted. It would additionally be nice if the documentation were more verbose about this IMHO important topic: how inputs are treated by the compiler and what happens at runtime of the compiled model.
Environment
I am sorry, I do not know a canonical way of "turning on debug messages" in python. I do not know how this translates into something actionable.
How you installed PyTorch (conda, pip, libtorch, source): pip