[Paper Review] Causal-JEPA: Learning World Models through Object-Level Latent Interventions
A detailed review of the paper 'Causal-JEPA: Learning World Models through Object-Level Latent Interventions', published on arXiv.
Tags: Review, World Models, Object-Centric Representations, Latent Interventions, Masked Prediction, Causal Inductive Bias, Joint Embedding Predictive Architecture (JEPA), Visual Question Answering (VQA), Model Predictive Control (MPC)
February 17, 2026
[Paper Review] Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection
A detailed review of the paper 'Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection', published on arXiv.
Tags: Review, Active Perception, Vision-Language Models (VLMs), Embodied AI, View Selection, Reinforcement Learning (RL), Supervised Fine-Tuning (SFT), Visual Question Answering (VQA), 3D Environments
December 15, 2025
[Paper Review] IF-Bench: Benchmarking and Enhancing MLLMs for Infrared Images with Generative Visual Prompting
A detailed review of the paper 'IF-Bench: Benchmarking and Enhancing MLLMs for Infrared Images with Generative Visual Prompting', published on arXiv.
Tags: Review, Multimodal Large Language Models (MLLMs), Infrared Image Understanding, Benchmark Dataset, Visual Question Answering (VQA), Generative Visual Prompting (GenViP), Domain Adaptation, Image-to-Image Translation
December 10, 2025
[Paper Review] VQ-VA World: Towards High-Quality Visual Question-Visual Answering
A detailed review of the paper 'VQ-VA World: Towards High-Quality Visual Question-Visual Answering', posted on arXiv by Feng Li.
Tags: Review, Visual Question Answering (VQA), Image Generation, Data-centric AI, Agentic Pipeline, Multimodal Models, Web-scale Data, Benchmark, LightFusion
November 25, 2025
[Paper Review] Draft and Refine with Visual Experts
A detailed review of the paper 'Draft and Refine with Visual Experts', published on arXiv.
Tags: Review, Large Vision-Language Models (LVLMs), Visual Grounding, Hallucination Mitigation, Agent Framework, Visual Question Answering (VQA), Expert Coordination, Relevance Map, Multi-modal Reasoning
November 20, 2025
[Paper Review] PhysToolBench: Benchmarking Physical Tool Understanding for MLLMs
A detailed review of the paper 'PhysToolBench: Benchmarking Physical Tool Understanding for MLLMs', posted on arXiv by Xu Zheng.
Tags: Review, Multimodal Large Language Models (MLLMs), Physical Tool Understanding, Benchmarking, Embodied AI, Visual Question Answering (VQA), Tool Affordances, Reasoning
October 13, 2025
[Paper Review] TTRV: Test-Time Reinforcement Learning for Vision Language Models
A detailed review of the paper 'TTRV: Test-Time Reinforcement Learning for Vision Language Models', posted on arXiv by Serena Yeung-Levy.
Tags: Review, Vision-Language Models (VLMs), Reinforcement Learning (RL), Test-Time Adaptation, Unsupervised Learning, Image Recognition, Visual Question Answering (VQA), Group Relative Policy Optimization (GRPO), Entropy Regularization
October 9, 2025
[Paper Review] EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark
A detailed review of the paper 'EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark', posted on arXiv by Tianwen Qian.
Tags: Review, Egocentric Vision, Nighttime Conditions, Visual Question Answering (VQA), Day-Night Alignment, Multimodal Large Language Models (MLLMs), Depth Estimation, Correspondence Retrieval, Benchmark
October 8, 2025
[Paper Review] CCD: Mitigating Hallucinations in Radiology MLLMs via Clinical Contrastive Decoding
A detailed review of the paper 'CCD: Mitigating Hallucinations in Radiology MLLMs via Clinical Contrastive Decoding', published on arXiv.
Tags: Review, Multimodal Large Language Models (MLLMs), Radiology Report Generation (RRG), Medical Hallucinations, Contrastive Decoding, Training-free Inference, Clinical AI, Visual Question Answering (VQA)
October 8, 2025
[Paper Review] NuRisk: A Visual Question Answering Dataset for Agent-Level Risk Assessment in Autonomous Driving
A detailed review of the paper 'NuRisk: A Visual Question Answering Dataset for Agent-Level Risk Assessment in Autonomous Driving', published on arXiv.
Tags: Review, Visual Question Answering (VQA), Autonomous Driving, Risk Assessment, Spatio-Temporal Reasoning, Large Vision Models (VLMs), Dataset, Bird-Eye-View (BEV), Fine-tuning
October 6, 2025