-
❓ Questions and Help: Hello, and thank you for sharing your great work. In the provided homography estimation example, the optimization itself takes roughly 2 seconds, even when the maximum number of iterations is set to 10, which is a reasonable setting in practice. Is there any guide or tip for speeding up the optimization when training it in combination with a PyTorch learning model? Thank you.
Replies: 4 comments 2 replies
-
Hello, thanks for reaching out. How is "my model pipeline" implemented? Are you using the analytical Jacobian as a baseline?
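To make the "analytical Jacobian as a baseline" suggestion concrete, here is a minimal sketch of homography estimation with Gauss-Newton and a hand-derived Jacobian, written in plain numpy and independent of Theseus. The parameterization (the first 8 entries of H, with h22 fixed to 1), the synthetic correspondences, and all function names are illustrative assumptions, not Theseus API.

```python
# Hypothetical baseline: Gauss-Newton homography fit with an analytical
# Jacobian, assuming noise-free synthetic point correspondences.
import numpy as np

def project(h, pts):
    """Apply the homography parameterized by h (8-vector, h22=1) to Nx2 points."""
    H = np.append(h, 1.0).reshape(3, 3)
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def residual_and_jacobian(h, src, dst):
    """Stacked 2N residuals and the analytical (2N x 8) Jacobian."""
    H = np.append(h, 1.0).reshape(3, 3)
    x, y = src[:, 0], src[:, 1]
    w = H[2, 0] * x + H[2, 1] * y + 1.0          # projective denominator
    u = (H[0, 0] * x + H[0, 1] * y + H[0, 2]) / w
    v = (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / w
    # interleaved residuals: [u0 - u0*, v0 - v0*, u1 - u1*, ...]
    r = np.stack([u - dst[:, 0], v - dst[:, 1]], axis=1).ravel()
    n = len(src)
    J = np.zeros((2 * n, 8))
    J[0::2, 0], J[0::2, 1], J[0::2, 2] = x / w, y / w, 1.0 / w   # du/dh0..h2
    J[1::2, 3], J[1::2, 4], J[1::2, 5] = x / w, y / w, 1.0 / w   # dv/dh3..h5
    J[0::2, 6], J[0::2, 7] = -u * x / w, -u * y / w              # du/dh6, dh7
    J[1::2, 6], J[1::2, 7] = -v * x / w, -v * y / w              # dv/dh6, dh7
    return r, J

def gauss_newton(src, dst, max_iterations=10):
    h = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=float)  # identity init
    for _ in range(max_iterations):
        r, J = residual_and_jacobian(h, src, dst)
        h -= np.linalg.solve(J.T @ J, J.T @ r)           # Gauss-Newton step
        if np.abs(r).max() < 1e-10:
            break
    return h

# synthetic ground-truth homography and correspondences
h_true = np.array([1.0, 0.05, 2.0, -0.03, 1.0, 1.0, 0.001, 0.002])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [30, 60], [70, 20]], float)
dst = project(h_true, src)
h_est = gauss_newton(src, dst)
```

On clean data this converges well within the 10-iteration cap mentioned above; the analytical Jacobian avoids the per-iteration autodiff overhead, which is one way to establish a timing baseline before comparing against automated Jacobian computation.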
-
Hi @zinuok, we are working on improving some aspects of our optimization time, particularly automated Jacobian computation (see #268), so there is definitely room for speeding things up. That being said, I'm also curious about the model pipeline you referred to in your post; it's difficult to offer more useful suggestions without knowing what the key differences are.
-
Thank you for your kind reply. The model I mentioned above is just a simple CNN-based depth estimation model. What I meant was that Theseus's optimization time is relatively large compared to the per-epoch training time of a typical deep learning model, so I was concerned it might slow down overall training.
-
Hi @zinuok, we have refactored `AutoDiffCostFunction` with functorch and there is a significant speedup. More details can be found in #296.
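A small pure-PyTorch illustration of why the functorch-style refactor helps (this is not Theseus code; the toy residual `err` and batch size are made up for the example): per-sample Jacobians computed in one batched `vmap(jacrev(...))` call match a Python loop of per-sample autograd Jacobians, while avoiding the per-sample Python and graph-construction overhead.

```python
# Sketch of batched Jacobians via torch.func (PyTorch >= 2.0), assuming a
# toy residual function; the looped version mimics a naive autodiff cost.
import torch
from torch.func import jacrev, vmap

def err(p):
    # toy 2-dim residual of a 2-dim parameter vector
    return torch.stack([p[0] ** 2 + p[1], p[0] * p[1]])

params = torch.randn(64, 2)  # a batch of 64 parameter vectors

# per-sample Jacobians in a Python loop
J_loop = torch.stack(
    [torch.autograd.functional.jacobian(err, p) for p in params]
)

# one vectorized call over the whole batch
J_vmap = vmap(jacrev(err))(params)
```

Both produce a (64, 2, 2) tensor of Jacobians; the vectorized form is what makes the autodiff path competitive with hand-written Jacobians for larger batches.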