[Paper Review] NoLan: Mitigating Object Hallucinations in Large Vision-Language Models via Dynamic Suppression of Language Priors
A detailed review of the paper 'NoLan: Mitigating Object Hallucinations in Large Vision-Language Models via Dynamic Suppression of Language Priors', posted on arXiv by Xinchao Wang.
#Review #Large Vision-Language Models (LVLMs) #Object Hallucinations #Language Priors #Contrastive Decoding #Dynamic Suppression #Training-Free #Multimodal AI
February 25, 2026
[Paper Review] MAD: Modality-Adaptive Decoding for Mitigating Cross-Modal Hallucinations in Multimodal Large Language Models
A detailed review of the paper 'MAD: Modality-Adaptive Decoding for Mitigating Cross-Modal Hallucinations in Multimodal Large Language Models', posted on arXiv by Yong Man Ro.
#Review #Multimodal LLM #Cross-modal Hallucination #Contrastive Decoding #Modality-Adaptive Decoding #Self-Assessment #Audio-Visual Language Model #Training-Free
January 29, 2026
[Paper Review] CCD: Mitigating Hallucinations in Radiology MLLMs via Clinical Contrastive Decoding
A detailed review of the paper 'CCD: Mitigating Hallucinations in Radiology MLLMs via Clinical Contrastive Decoding', posted on arXiv.
#Review #Multimodal Large Language Models (MLLMs) #Radiology Report Generation (RRG) #Medical Hallucinations #Contrastive Decoding #Training-free Inference #Clinical AI #Visual Question Answering (VQA)
October 8, 2025