[Paper Review] TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation
A detailed review of the paper 'TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation', posted on arXiv by Begüm Demir.
#Review · #Vision-Language Models (VLMs) · #Earth Observation (EO) · #Pixel-Grounded Reasoning · #Chain-of-Thought (CoT) · #Multi-Modal Reasoning · #Multi-Temporal Reasoning · #Geospatial Reasoning
March 22, 2026
[Paper Review] HomeSafe-Bench: Evaluating Vision-Language Models on Unsafe Action Detection for Embodied Agents in Household Scenarios
A detailed review of the paper 'HomeSafe-Bench: Evaluating Vision-Language Models on Unsafe Action Detection for Embodied Agents in Household Scenarios', posted on arXiv.
#Review · #Embodied Agents · #Unsafe Action Detection · #Vision-Language Models (VLMs) · #Household Scenarios · #HomeSafe-Bench · #HD-Guard · #Real-time Safety Monitoring
March 15, 2026
[Paper Review] Can Vision-Language Models Solve the Shell Game?
A detailed review of the paper 'Can Vision-Language Models Solve the Shell Game?', posted on arXiv.
#Review · #Visual Entity Tracking · #Shell Game · #Vision-Language Models (VLMs) · #VET-Bench · #Spatiotemporal Grounded Chain-of-Thought (SGCoT) · #NC1-complete · #Transformer-based VLMs
March 15, 2026
[Paper Review] Holi-Spatial: Evolving Video Streams into Holistic 3D Spatial Intelligence
A detailed review of the paper 'Holi-Spatial: Evolving Video Streams into Holistic 3D Spatial Intelligence', posted on arXiv by Yuning Gong.
#Review · #3D Spatial Intelligence · #Video Stream Processing · #Automated Data Curation · #3D Gaussian Splatting (3DGS) · #Vision-Language Models (VLMs) · #Open-Vocabulary Segmentation · #Spatial Reasoning · #Multimodal Datasets
March 9, 2026
[Paper Review] AI Gamestore: Scalable, Open-Ended Evaluation of Machine General Intelligence with Human Games
A detailed review of the paper 'AI Gamestore: Scalable, Open-Ended Evaluation of Machine General Intelligence with Human Games', posted on arXiv.
#Review · #Artificial General Intelligence (AGI) · #Evaluation Benchmark · #General Game Playing · #Large Language Models (LLMs) · #Human-in-the-loop · #Cognitive Capabilities · #Vision-Language Models (VLMs) · #Game Generation
February 26, 2026
[Paper Review] NarraScore: Bridging Visual Narrative and Musical Dynamics via Hierarchical Affective Control
A detailed review of the paper 'NarraScore: Bridging Visual Narrative and Musical Dynamics via Hierarchical Affective Control', posted on arXiv.
#Review · #Video-to-Music Generation · #Affective Computing · #Vision-Language Models (VLMs) · #Hierarchical Control · #Soundtrack Generation · #Temporal Coherence · #Emotion-Driven Music
February 12, 2026
[Paper Review] POINTS-GUI-G: GUI-Grounding Journey
A detailed review of the paper 'POINTS-GUI-G: GUI-Grounding Journey', posted on arXiv by Le Tian.
#Review · #GUI Grounding · #Vision-Language Models (VLMs) · #Reinforcement Learning (RL) · #Data Engineering · #UI Automation · #Perception-intensive AI
February 8, 2026
[Paper Review] AdaptMMBench: Benchmarking Adaptive Multimodal Reasoning for Mode Selection and Reasoning Process
A detailed review of the paper 'AdaptMMBench: Benchmarking Adaptive Multimodal Reasoning for Mode Selection and Reasoning Process', posted on arXiv by Shilin Yan.
#Review · #Multimodal Reasoning · #Adaptive Learning · #Vision-Language Models (VLMs) · #Benchmarking · #Mode Selection · #Tool Learning · #Reasoning Process Evaluation · #Matthews Correlation Coefficient (MCC)
February 3, 2026
[Paper Review] SketchDynamics: Exploring Free-Form Sketches for Dynamic Intent Expression in Animation Generation
A detailed review of the paper 'SketchDynamics: Exploring Free-Form Sketches for Dynamic Intent Expression in Animation Generation', posted on arXiv by Hongbo Fu.
#Review · #Animation Generation · #Free-Form Sketching · #Human-AI Interaction · #Vision-Language Models (VLMs) · #Dynamic Intent Expression · #Motion Graphics · #Iterative Refinement · #Storyboard
January 28, 2026
[Paper Review] VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents
A detailed review of the paper 'VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents', posted on arXiv.
#Review · #Multimodal Agents · #Vision-Language Models (VLMs) · #Interactive AI · #Reinforcement Learning Environments · #Benchmark · #Decision-Making · #Diagnostic Tools · #Supervised Fine-tuning
January 25, 2026
[Paper Review] Urban Socio-Semantic Segmentation with Vision-Language Reasoning
A detailed review of the paper 'Urban Socio-Semantic Segmentation with Vision-Language Reasoning', posted on arXiv.
#Review · #Urban Segmentation · #Socio-Semantic · #Vision-Language Models (VLMs) · #Reinforcement Learning · #Geospatial Data · #Multi-modal Reasoning · #SAM
January 15, 2026
[Paper Review] FocusUI: Efficient UI Grounding via Position-Preserving Visual Token Selection
A detailed review of the paper 'FocusUI: Efficient UI Grounding via Position-Preserving Visual Token Selection', posted on arXiv.
#Review · #UI Grounding · #Visual Token Reduction · #Position-Preserving · #Vision-Language Models (VLMs) · #Saliency Scoring · #Computational Efficiency · #Human-Computer Interaction
January 14, 2026
[Paper Review] See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning
A detailed review of the paper 'See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning', posted on arXiv.
#Review · #Multimodal Reasoning · #Vision-Language Models (VLMs) · #Perceptual Shaping · #KL-Divergence · #Chart Understanding · #Data Augmentation · #Reinforcement Learning (RL) · #GRPO
December 28, 2025
[Paper Review] GTR-Turbo: Merged Checkpoint is Secretly a Free Teacher for Agentic VLM Training
A detailed review of the paper 'GTR-Turbo: Merged Checkpoint is Secretly a Free Teacher for Agentic VLM Training', posted on arXiv by Yuanchun Shi.
#Review · #Multi-turn Reinforcement Learning · #Vision-Language Models (VLMs) · #Agentic AI · #Knowledge Distillation · #Model Merging · #PPO · #Thought Guidance · #Cost Efficiency
December 25, 2025
[Paper Review] Beyond Memorization: A Multi-Modal Ordinal Regression Benchmark to Expose Popularity Bias in Vision-Language Models
A detailed review of the paper 'Beyond Memorization: A Multi-Modal Ordinal Regression Benchmark to Expose Popularity Bias in Vision-Language Models', posted on arXiv by Yu-Lun Liu.
#Review · #Vision-Language Models (VLMs) · #Popularity Bias · #Ordinal Regression · #Building Age Estimation · #Multi-modal Learning · #Benchmark Dataset · #Explainable AI
December 24, 2025
[Paper Review] Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs
A detailed review of the paper 'Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs', posted on arXiv.
#Review · #Latent Variable Models · #Variational Autoencoder (VAE) · #Reinforcement Learning (RL) · #Exploration · #Large Language Models (LLMs) · #Vision-Language Models (VLMs) · #Controllable Generation · #Reasoning Strategies
December 22, 2025
[Paper Review] VTCBench: Can Vision-Language Models Understand Long Context with Vision-Text Compression?
A detailed review of the paper 'VTCBench: Can Vision-Language Models Understand Long Context with Vision-Text Compression?', posted on arXiv.
#Review · #Vision-Text Compression (VTC) · #Long Context Understanding · #Vision-Language Models (VLMs) · #Benchmark · #Information Retrieval · #Associative Reasoning · #Multimodal AI
December 17, 2025
[Paper Review] V-REX: Benchmarking Exploratory Visual Reasoning via Chain-of-Questions
A detailed review of the paper 'V-REX: Benchmarking Exploratory Visual Reasoning via Chain-of-Questions', posted on arXiv by Kwesi Cobbina.
#Review · #Visual Reasoning · #Multi-step Exploration · #Chain-of-Questions (CoQ) · #Vision-Language Models (VLMs) · #Benchmarking · #Planning · #Following
December 15, 2025
[Paper Review] Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection
A detailed review of the paper 'Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection', posted on arXiv.
#Review · #Active Perception · #Vision-Language Models (VLMs) · #Embodied AI · #View Selection · #Reinforcement Learning (RL) · #Supervised Fine-Tuning (SFT) · #Visual Question Answering (VQA) · #3D Environments
December 15, 2025
[Paper Review] ReViSE: Towards Reason-Informed Video Editing in Unified Models with Self-Reflective Learning
A detailed review of the paper 'ReViSE: Towards Reason-Informed Video Editing in Unified Models with Self-Reflective Learning', posted on arXiv by Yujin Han.
#Review · #Video Editing · #Reasoning · #Unified Models · #Self-Reflective Learning · #Vision-Language Models (VLMs) · #Diffusion Models · #RVE-Bench
December 11, 2025
[Paper Review] Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization
A detailed review of the paper 'Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization', posted on arXiv.
#Review · #Chain-of-Thought (CoT) · #Vision-Language Models (VLMs) · #Visual Reasoning · #Generalization · #Supervised Fine-Tuning (SFT) · #Reinforcement Learning (RL) · #Grounding CoT · #Maze Solving
December 2, 2025
[Paper Review] Scaling Agentic Reinforcement Learning for Tool-Integrated Reasoning in VLMs
A detailed review of the paper 'Scaling Agentic Reinforcement Learning for Tool-Integrated Reasoning in VLMs', posted on arXiv.
#Review · #Vision-Language Models (VLMs) · #Reinforcement Learning (RL) · #Tool-Integrated Reasoning (TIR) · #Agentic AI · #VQA · #Training Environment · #Behavioral Cloning · #Policy Optimization
November 25, 2025
[Paper Review] Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens
A detailed review of the paper 'Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens', posted on arXiv by Stephanie Fu.
#Review · #Vision-Language Models (VLMs) · #Chain-of-Thought (CoT) · #Continuous Visual Tokens · #Multimodal Reasoning · #Perceptual Grounding · #Visual Thinking · #Dense Prediction
November 24, 2025
[Paper Review] Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models
A detailed review of the paper 'Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models', posted on arXiv.
#Review · #Vision-Language Models (VLMs) · #Adversarial Attack · #Jailbreaking · #Reward Hacking · #Content Moderation Bypass · #Cross-Model Transferability · #Safety Vulnerabilities
November 23, 2025
[Paper Review] First Frame Is the Place to Go for Video Content Customization
A detailed review of the paper 'First Frame Is the Place to Go for Video Content Customization', posted on arXiv.
#Review · #Video Generation · #Content Customization · #Few-shot Learning · #LoRA · #Vision-Language Models (VLMs) · #First Frame Conditioning · #Reference-based Generation
November 20, 2025
[Paper Review] Ariadne: A Controllable Framework for Probing and Extending VLM Reasoning Boundaries
A detailed review of the paper 'Ariadne: A Controllable Framework for Probing and Extending VLM Reasoning Boundaries', posted on arXiv by Zhengzhong Tu.
#Review · #Vision-Language Models (VLMs) · #Reinforcement Learning (RL) · #Spatial Reasoning · #Controllable Framework · #RLVR · #GRPO · #Maze Navigation · #Generalization Boundaries
November 10, 2025
[Paper Review] Vote-in-Context: Turning VLMs into Zero-Shot Rank Fusers
A detailed review of the paper 'Vote-in-Context: Turning VLMs into Zero-Shot Rank Fusers', posted on arXiv.
#Review · #Video Retrieval · #Vision-Language Models (VLMs) · #Zero-Shot Learning · #List-wise Reranking · #Rank Fusion · #Prompt Engineering · #S-Grid · #Multimodal Retrieval
November 9, 2025
[Paper Review] ChartAB: A Benchmark for Chart Grounding & Dense Alignment
A detailed review of the paper 'ChartAB: A Benchmark for Chart Grounding & Dense Alignment', posted on arXiv.
#Review · #Vision-Language Models (VLMs) · #Chart Understanding · #Visual Grounding · #Dense Alignment · #Benchmark · #Robustness · #Multimodal Learning
October 31, 2025
[Paper Review] VL-SAE: Interpreting and Enhancing Vision-Language Alignment with a Unified Concept Set
A detailed review of the paper 'VL-SAE: Interpreting and Enhancing Vision-Language Alignment with a Unified Concept Set', posted on arXiv.
#Review · #Vision-Language Models (VLMs) · #Model Interpretability · #Sparse Autoencoder (SAE) · #Multi-modal Alignment · #Concept Learning · #Hallucination Elimination · #Zero-shot Classification
October 29, 2025
[Paper Review] RobotArena∞: Scalable Robot Benchmarking via Real-to-Sim Translation
A detailed review of the paper 'RobotArena∞: Scalable Robot Benchmarking via Real-to-Sim Translation', posted on arXiv by Kuan-Hsun Tu.
#Review · #Robot Benchmarking · #Real-to-Sim Translation · #Vision-Language Models (VLMs) · #Human Preference Learning · #Domain Randomization · #Robot Manipulation · #Simulation Environments · #Policy Evaluation
October 28, 2025
[Paper Review] DSI-Bench: A Benchmark for Dynamic Spatial Intelligence
A detailed review of the paper 'DSI-Bench: A Benchmark for Dynamic Spatial Intelligence', posted on arXiv.
#Review · #Dynamic Spatial Reasoning · #Vision-Language Models (VLMs) · #Benchmark · #Video Understanding · #Motion Perception · #3D Spatial Intelligence · #Hallucinations · #Bias
October 22, 2025
[Paper Review] Learning an Image Editing Model without Image Editing Pairs
A detailed review of the paper 'Learning an Image Editing Model without Image Editing Pairs', posted on arXiv.
#Review · #Image Editing · #Diffusion Models · #Vision-Language Models (VLMs) · #No-Pair Training · #Few-step Generation · #Distribution Matching · #Gradient-based Optimization
October 17, 2025
[Paper Review] TTRV: Test-Time Reinforcement Learning for Vision Language Models
A detailed review of the paper 'TTRV: Test-Time Reinforcement Learning for Vision Language Models', posted on arXiv by Serena Yeung-Levy.
#Review · #Vision-Language Models (VLMs) · #Reinforcement Learning (RL) · #Test-Time Adaptation · #Unsupervised Learning · #Image Recognition · #Visual Question Answering (VQA) · #Group Relative Policy Optimization (GRPO) · #Entropy Regularization
October 9, 2025
[Paper Review] Training Vision-Language Process Reward Models for Test-Time Scaling in Multimodal Reasoning: Key Insights and Lessons Learned
A detailed review of the paper 'Training Vision-Language Process Reward Models for Test-Time Scaling in Multimodal Reasoning: Key Insights and Lessons Learned', posted on arXiv.
#Review · #Vision-Language Models (VLMs) · #Process Reward Models (PRMs) · #Multimodal Reasoning · #Test-Time Scaling (TTS) · #Process Supervision · #Dataset Construction · #Perception Errors · #MCTS
October 2, 2025
[Paper Review] Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play
A detailed review of the paper 'Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play', posted on arXiv by Jing Shi.
#Review · #Vision-Language Models (VLMs) · #Self-Play · #Reinforcement Learning · #Gamification · #Data Efficiency · #Strategic Reasoning · #Multimodal AI · #Self-Improvement
October 1, 2025
[Paper Review] OpenGVL - Benchmarking Visual Temporal Progress for Data Curation
A detailed review of the paper 'OpenGVL - Benchmarking Visual Temporal Progress for Data Curation', posted on arXiv by Viktor Petrenko.
#Review · #Robotics Data Curation · #Visual Temporal Progress · #Generative Value Learning (GVL) · #Vision-Language Models (VLMs) · #Benchmark · #Task Progress Prediction · #Value-Order Correlation (VOC)
September 24, 2025
[Paper Review] Visual Programmability: A Guide for Code-as-Thought in Chart Understanding
A detailed review of the paper 'Visual Programmability: A Guide for Code-as-Thought in Chart Understanding', posted on arXiv by Ethan Chern.
#Review · #Visual Programmability · #Code-as-Thought (CaT) · #Chart Understanding · #Vision-Language Models (VLMs) · #Reinforcement Learning (RL) · #Adaptive Reasoning · #Dual-Reward System · #Multimodal AI
September 12, 2025
[Paper Review] Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning
A detailed review of the paper 'Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning', posted on arXiv by Baolong Bi.
#Review · #Vision-Language Models (VLMs) · #Visual Reasoning · #Attention Mechanisms · #Contrastive Learning · #Noise Suppression · #Visual Complexity · #Training-Free
September 9, 2025
[Paper Review] D-HUMOR: Dark Humor Understanding via Multimodal Open-ended Reasoning
A detailed review of the paper 'D-HUMOR: Dark Humor Understanding via Multimodal Open-ended Reasoning', posted on arXiv by Dhanvin Sanjay Namboodiri.
#Review · #Dark Humor Detection · #Multimodal Reasoning · #Vision-Language Models (VLMs) · #Iterative Reasoning Refinement · #Meme Analysis · #Content Moderation · #Cross-Modal Attention · #Dataset Annotation
September 9, 2025
[Paper Review] Robix: A Unified Model for Robot Interaction, Reasoning and Planning
A detailed review of the paper 'Robix: A Unified Model for Robot Interaction, Reasoning and Planning', posted on arXiv by Zixuan Wang.
#Review · #Robot Learning · #Vision-Language Models (VLMs) · #Embodied AI · #Human-Robot Interaction (HRI) · #Task Planning · #Reinforcement Learning (RL) · #Chain-of-Thought (CoT) Reasoning · #Robotics
September 4, 2025
[Paper Review] LLaVA-Critic-R1: Your Critic Model is Secretly a Strong Policy Model
A detailed review of the paper 'LLaVA-Critic-R1: Your Critic Model is Secretly a Strong Policy Model', posted on arXiv by Jianwei Yang.
#Review · #Vision-Language Models (VLMs) · #Critic Models · #Policy Models · #Reinforcement Learning (RL) · #Self-Criticism · #Multimodal Reasoning · #Preference Learning · #Generative Models
September 3, 2025
[Paper Review] IAG: Input-aware Backdoor Attack on VLMs for Visual Grounding
A detailed review of the paper 'IAG: Input-aware Backdoor Attack on VLMs for Visual Grounding', posted on arXiv by Di Zhang.
#Review · #Backdoor Attack · #Vision-Language Models (VLMs) · #Visual Grounding · #Input-aware Trigger · #Adversarial Attack · #Security · #U-Net · #Open-vocabulary
August 14, 2025
[Paper Review] Adapting Vision-Language Models Without Labels: A Comprehensive Survey
A detailed review of the paper 'Adapting Vision-Language Models Without Labels: A Comprehensive Survey', posted on arXiv by Eleni Chatzi.
#Review · #Vision-Language Models (VLMs) · #Unsupervised Adaptation · #Test-Time Adaptation (TTA) · #Domain Transfer · #Multimodal Learning · #Label-Free Learning · #Zero-Shot Learning
August 11, 2025
[Paper Review] HPSv3: Towards Wide-Spectrum Human Preference Score
A detailed review of the paper 'HPSv3: Towards Wide-Spectrum Human Preference Score', posted on arXiv by Hongsheng Li.
#Review · #Human Preference Score · #Text-to-Image Generation · #Image Evaluation · #Vision-Language Models (VLMs) · #Uncertainty-Aware Ranking Loss · #Dataset · #Iterative Refinement · #Chain-of-Thought
August 7, 2025