Issues: haotian-liu/LLaVA
"There was a problem with multiple GPU inference in last year's LLaVA 1.6 — any updates?"
#1870
opened Apr 15, 2025 by
fabio1shot
[Question] What is the Hugging Face address for the LLaVA-v1.5-LLaMA3-8B model?
#1869 opened Apr 14, 2025 by qm-intel
[Usage] ImportError: cannot import name 'KeywordsStoppingCriteria' from 'llava.model.utils'
#1861 opened Mar 25, 2025 by kky677
Proposal: Integrating Sparse Autoencoders (SAEs) for LLaVA Interpretability
#1852 opened Mar 16, 2025 by jmanhype
[Usage] from .model.language_model.llava_llama import LlavaLlamaForCausalLM does not work
#1849 opened Mar 10, 2025 by GoodStarLink
[Question] Where can I download "llava_gqa_testdev_balanced.jsonl"?
#1847 opened Mar 8, 2025 by enfantsRichesDeprimes
[Question] The gradient for the additional tunable parameters is None.
#1846 opened Feb 28, 2025 by LiZhangMing
[Question] Issue with Model Type Mismatch and Download Stuck at 0%
#1842 opened Feb 27, 2025 by zxdscsfm
[Question] Why is the forward.shape different from the backward.shape before layer 21?
#1841 opened Feb 26, 2025 by DengNingyuan