An up-to-date curated list of state-of-the-art research work, papers, and resources on hallucinations in large vision-language models.
Updated Feb 22, 2025
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.
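The entry above only states that image retrieval is used for alignment and does not spell out the pipeline. The toy sketch below shows a generic nearest-neighbour image retrieval step over precomputed embeddings; the embedding source and how retrieved images feed into alignment are assumptions for illustration, not the repo's actual method.

```python
# Illustrative sketch only: generic nearest-neighbour image retrieval over
# precomputed embeddings (e.g. CLIP-style image features). How the retrieved
# images are then used for alignment is an assumption, not the repo's method.
import torch
import torch.nn.functional as F

def retrieve_similar(query_emb: torch.Tensor, gallery_embs: torch.Tensor, k: int = 3):
    """Return indices of the k gallery images most similar to the query image."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), gallery_embs, dim=-1)
    return sims.topk(k).indices

# toy usage with random embeddings standing in for real image features
gallery = torch.randn(100, 512)
query = torch.randn(512)
print(retrieve_similar(query, gallery))  # indices of the 3 most similar images
```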
PAINT (Paying Attention to INformed Tokens) is a plug-and-play framework that intervenes in the self-attention of the LLM and selectively boosts attention to visually informed tokens to mitigate hallucinations in Vision Language Models.
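As a rough illustration of the attention-intervention idea described above (not the official PAINT code), the minimal sketch below boosts a single self-attention step toward a chosen set of visual-token positions; the boost factor `alpha` and the way "informed" tokens are selected are assumptions.

```python
# Illustrative sketch, not PAINT's implementation: add a log-space boost to the
# attention scores of selected visual-token positions, then re-normalise.
import torch
import torch.nn.functional as F

def boosted_self_attention(q, k, v, visual_idx, alpha=1.5):
    """q, k, v: (seq_len, head_dim); visual_idx: positions of visual tokens."""
    d = q.size(-1)
    scores = q @ k.T / d ** 0.5                             # raw attention scores
    boost = torch.zeros_like(scores)
    boost[:, visual_idx] = torch.log(torch.tensor(alpha))   # favour visual tokens
    weights = F.softmax(scores + boost, dim=-1)              # re-normalised attention
    return weights @ v

# toy usage: 6 tokens, the last 3 treated as "informed" visual tokens
q, k, v = torch.randn(6, 8), torch.randn(6, 8), torch.randn(6, 8)
out = boosted_self_attention(q, k, v, visual_idx=torch.tensor([3, 4, 5]))
print(out.shape)  # torch.Size([6, 8])
```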
Fully automated LLM evaluator
[ICLR 2025] Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination
Detecting Hallucinations in LLMs