
Questions: anchors, model's forward and code ETA #13

Open
Jay-D13 opened this issue Nov 26, 2024 · 0 comments

Jay-D13 commented Nov 26, 2024

Thank you for your previous response regarding the VLM. It was very helpful in clarifying certain aspects. I have a few additional questions, and I hope you don’t mind providing further insights:

  1. Could you elaborate on why only the last anchor is selected during inference (`instance_feature[:, 900:, :]`)? Is it purely to serve as the global condition for the diffusion process, or does it have another purpose? (My rough understanding is sketched after this list.)

  2. In the provided implementation, some attention mechanisms and feature refinement steps seem to be commented out. Is this a simplification for testing, or does it reflect the final design in the paper? If it’s a simplification, could you share the rationale behind excluding attention in this version?

  3. If it’s not too much trouble, could you share which conference the paper was submitted to? This would help me anticipate the potential release date of the full code, which you mentioned in your other replies.
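
For context on question 1, here is a minimal sketch of how I currently read that slicing. The shapes are my assumptions (900 detection anchors followed by one trailing ego/planning anchor, 256-dim features), not something I have confirmed from the repo:

```python
import torch

# Assumed shapes (hypothetical, not confirmed from the repo):
# 900 detection anchors followed by one extra ego/planning anchor.
batch, num_det_anchors, num_extra, embed_dim = 2, 900, 1, 256
instance_feature = torch.randn(batch, num_det_anchors + num_extra, embed_dim)

# The slice in question drops the 900 detection anchors and keeps
# only the trailing anchor(s)...
global_cond = instance_feature[:, 900:, :]  # shape: (batch, 1, embed_dim)

# ...which, if I understand correctly, then acts as the global
# conditioning signal for the diffusion head.
print(global_cond.shape)  # torch.Size([2, 1, 256])
```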

Thank you again for your time and for sharing your amazing work. I greatly appreciate your support!
