[Paper Review] TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation
A detailed review of the paper 'TerraScope: Pixel-Grounded Visual Reasoning for Earth Observation', posted on arXiv by Begüm Demir.
#Review#Vision-Language Models (VLMs)#Earth Observation (EO)#Pixel-Grounded Reasoning#Chain-of-Thought (CoT)#Multi-Modal Reasoning#Multi-Temporal Reasoning#Geospatial Reasoning
March 22, 2026

[Paper Review] Video Streaming Thinking: VideoLLMs Can Watch and Think Simultaneously
A detailed review of the paper 'Video Streaming Thinking: VideoLLMs Can Watch and Think Simultaneously', posted on arXiv.
#Review#Streaming Video Understanding#VideoLLMs#Chain-of-Thought (CoT)#Real-time AI#Reinforcement Learning#Knowledge Graphs#Streaming Thinking#Low Latency
March 15, 2026

[Paper Review] Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training
A detailed review of the paper 'Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training', posted on arXiv.
#Review#Financial LLMs#Data-Centric AI#Distillation#Chain-of-Thought (CoT)#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)#Difficulty-Aware Training#Data Quality
March 9, 2026

[Paper Review] Reasoning Models Struggle to Control their Chains of Thought
A detailed review of the paper 'Reasoning Models Struggle to Control their Chains of Thought', posted on arXiv.
#Review#Chain-of-Thought (CoT)#Model Controllability#AI Safety#Monitorability#Large Language Models (LLMs)#Reinforcement Learning (RL)#Evaluation Suite
March 8, 2026

[Paper Review] Data Repetition Beats Data Scaling in Long-CoT Supervised Fine-Tuning
A detailed review of the paper 'Data Repetition Beats Data Scaling in Long-CoT Supervised Fine-Tuning', posted on arXiv by Yuki M. Asano.
#Review#Supervised Fine-tuning (SFT)#Chain-of-Thought (CoT)#Data Repetition#Data Scaling#LLM Training#Generalization#Overfitting#Reasoning Models
February 11, 2026
[Paper Review] LatentChem: From Textual CoT to Latent Thinking in Chemical Reasoning
A detailed review of the paper 'LatentChem: From Textual CoT to Latent Thinking in Chemical Reasoning', posted on arXiv by Jia Zhang.
#Review#Chemical Reasoning#Large Language Models (LLMs)#Chain-of-Thought (CoT)#Latent Space#Molecular Optimization#Inference Efficiency#Reinforcement Learning#Chemical AI
February 9, 2026

[Paper Review] V-Retrver: Evidence-Driven Agentic Reasoning for Universal Multimodal Retrieval
A detailed review of the paper 'V-Retrver: Evidence-Driven Agentic Reasoning for Universal Multimodal Retrieval', posted on arXiv by Zeyu Zhang.
#Review#Multimodal Retrieval#Agentic AI#Large Language Models (LLMs)#Visual Tools#Chain-of-Thought (CoT)#Reinforcement Learning#Curriculum Learning#Evidence-Driven Reasoning
February 5, 2026

[Paper Review] Latent Chain-of-Thought as Planning: Decoupling Reasoning from Verbalization
A detailed review of the paper 'Latent Chain-of-Thought as Planning: Decoupling Reasoning from Verbalization', posted on arXiv.
#Review#Latent Reasoning#Chain-of-Thought (CoT)#Large Language Models (LLMs)#Planning#Reinforcement Learning#Mathematical Reasoning#Decoupling#Interpretability
February 1, 2026

[Paper Review] Beyond Imitation: Reinforcement Learning for Active Latent Planning
A detailed review of the paper 'Beyond Imitation: Reinforcement Learning for Active Latent Planning', posted on arXiv by Wee Sun Lee.
#Review#Large Language Models (LLMs)#Chain-of-Thought (CoT)#Latent Reasoning#Reinforcement Learning (RL)#Variational Autoencoder (VAE)#Active Planning#Numerical Reasoning#Coherence Reward
January 29, 2026

[Paper Review] Visual Generation Unlocks Human-Like Reasoning through Multimodal World Models
A detailed review of the paper 'Visual Generation Unlocks Human-Like Reasoning through Multimodal World Models', posted on arXiv.
#Review#Multimodal AI#World Models#Visual Generation#Chain-of-Thought (CoT)#Multimodal Reasoning#Unified Multimodal Models#Spatial-Physical Reasoning
January 27, 2026
[Paper Review] Render-of-Thought: Rendering Textual Chain-of-Thought as Images for Visual Latent Reasoning
A detailed review of the paper 'Render-of-Thought: Rendering Textual Chain-of-Thought as Images for Visual Latent Reasoning', posted on arXiv.
#Review#Chain-of-Thought (CoT)#Large Language Models (LLMs)#Vision Language Models (VLMs)#Latent Reasoning#Visual Modality#Image Rendering#Computational Efficiency#Knowledge Distillation
January 21, 2026

[Paper Review] VLingNav: Embodied Navigation with Adaptive Reasoning and Visual-Assisted Linguistic Memory
A detailed review of the paper 'VLingNav: Embodied Navigation with Adaptive Reasoning and Visual-Assisted Linguistic Memory', posted on arXiv.
#Review#Embodied Navigation#VLA Model#Adaptive Reasoning#Chain-of-Thought (CoT)#Linguistic Memory#Reinforcement Learning#Sim-to-Real Transfer#Multi-task Learning
January 13, 2026

[Paper Review] VideoAuto-R1: Video Auto Reasoning via Thinking Once, Answering Twice
A detailed review of the paper 'VideoAuto-R1: Video Auto Reasoning via Thinking Once, Answering Twice', posted on arXiv.
#Review#Video Understanding#Chain-of-Thought (CoT)#Reinforcement Learning (RL)#Adaptive Reasoning#Early Exit#Multimodal LLM#Video QA#Temporal Grounding
January 8, 2026

[Paper Review] DraCo: Draft as CoT for Text-to-Image Preview and Rare Concept Generation
A detailed review of the paper 'DraCo: Draft as CoT for Text-to-Image Preview and Rare Concept Generation', posted on arXiv by Ziyu Guo.
#Review#Text-to-Image Generation#Chain-of-Thought (CoT)#Multimodal Large Language Models (MLLMs)#Visual Planning#Rare Concept Generation#Drafting#Classifier-Free Guidance (CFG)#Image Refinement
December 4, 2025

[Paper Review] Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization
A detailed review of the paper 'Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization', posted on arXiv.
#Review#Chain-of-Thought (CoT)#Vision-Language Models (VLMs)#Visual Reasoning#Generalization#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#Grounding CoT#Maze Solving
December 2, 2025
[Paper Review] Monet: Reasoning in Latent Visual Space Beyond Images and Language
A detailed review of the paper 'Monet: Reasoning in Latent Visual Space Beyond Images and Language', posted on arXiv by Pengfei Wan.
#Review#Latent Visual Reasoning#Multimodal Large Language Models (MLLMs)#Supervised Fine-tuning (SFT)#Reinforcement Learning (RL)#Visual-latent Policy Optimization (VLPO)#Chain-of-Thought (CoT)#Abstract Visual Thinking
November 26, 2025

[Paper Review] MobileVLA-R1: Reinforcing Vision-Language-Action for Mobile Robots
A detailed review of the paper 'MobileVLA-R1: Reinforcing Vision-Language-Action for Mobile Robots', posted on arXiv by Rui Yang.
#Review#Vision-Language-Action (VLA)#Mobile Robotics#Quadruped Robots#Chain-of-Thought (CoT)#Reinforcement Learning (RL)#Embodied AI#Multimodal Perception
November 26, 2025

[Paper Review] Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens
A detailed review of the paper 'Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens', posted on arXiv by Stephanie Fu.
#Review#Vision-Language Models (VLMs)#Chain-of-Thought (CoT)#Continuous Visual Tokens#Multimodal Reasoning#Perceptual Grounding#Visual Thinking#Dense Prediction
November 24, 2025

[Paper Review] Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation
A detailed review of the paper 'Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation', posted on arXiv by Xinyan Chen.
#Review#Visual Generation#Textual Reasoning#Interleaving#Large Multimodal Models (LMMs)#Chain-of-Thought (CoT)#Zero-shot Learning#Supervised Fine-tuning (SFT)#Reinforcement Learning (RL)
November 20, 2025

[Paper Review] VIDEOP2R: Video Understanding from Perception to Reasoning
A detailed review of the paper 'VIDEOP2R: Video Understanding from Perception to Reasoning', posted on arXiv.
#Review#Video Understanding#Reinforcement Fine-Tuning (RFT)#Large Video Language Models (LVLMs)#Perception and Reasoning#Chain-of-Thought (CoT)#Process-Aware Learning#Policy Optimization#Credit Assignment
November 18, 2025
[Paper Review] AffordBot: 3D Fine-grained Embodied Reasoning via Multimodal Large Language Models
A detailed review of the paper 'AffordBot: 3D Fine-grained Embodied Reasoning via Multimodal Large Language Models', posted on arXiv by Zhen Li.
#Review#3D Embodied Reasoning#Multimodal Large Language Models (MLLMs)#Chain-of-Thought (CoT)#Affordance Grounding#Motion Estimation#View Synthesis#Active Perception
November 13, 2025

[Paper Review] SofT-GRPO: Surpassing Discrete-Token LLM Reinforcement Learning via Gumbel-Reparameterized Soft-Thinking Policy Optimization
A detailed review of the paper 'SofT-GRPO: Surpassing Discrete-Token LLM Reinforcement Learning via Gumbel-Reparameterized Soft-Thinking Policy Optimization', posted on arXiv.
#Review#LLM#Reinforcement Learning#Soft-Thinking#Gumbel Reparameterization#Policy Optimization#Chain-of-Thought (CoT)#GRPO
November 10, 2025

[Paper Review] Reasoning with Confidence: Efficient Verification of LLM Reasoning Steps via Uncertainty Heads
A detailed review of the paper 'Reasoning with Confidence: Efficient Verification of LLM Reasoning Steps via Uncertainty Heads', posted on arXiv by Jiaheng Zhang.
#Review#LLM Reasoning Verification#Uncertainty Quantification (UQ)#UHeads#Process Reward Models (PRMs)#Chain-of-Thought (CoT)#Self-Supervised Learning#Computational Efficiency#Domain Generalization
November 10, 2025

[Paper Review] When Visualizing is the First Step to Reasoning: MIRA, a Benchmark for Visual Chain-of-Thought
A detailed review of the paper 'When Visualizing is the First Step to Reasoning: MIRA, a Benchmark for Visual Chain-of-Thought', posted on arXiv.
#Review#Multimodal AI#Visual Reasoning#Chain-of-Thought (CoT)#Benchmark#Image Generation#MLLMs#Visual-CoT
November 9, 2025

[Paper Review] UniREditBench: A Unified Reasoning-based Image Editing Benchmark
A detailed review of the paper 'UniREditBench: A Unified Reasoning-based Image Editing Benchmark', posted on arXiv.
#Review#Image Editing#Reasoning-based AI#Benchmark#Multimodal Learning#Chain-of-Thought (CoT)#Dual-Reference Evaluation#Generative Models#Game AI
November 9, 2025
[Paper Review] SemCoT: Accelerating Chain-of-Thought Reasoning through Semantically-Aligned Implicit Tokens
A detailed review of the paper 'SemCoT: Accelerating Chain-of-Thought Reasoning through Semantically-Aligned Implicit Tokens', posted on arXiv.
#Review#Chain-of-Thought (CoT)#Implicit Reasoning#LLMs#Semantic Alignment#Efficiency Optimization#Knowledge Distillation
November 9, 2025

[Paper Review] FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling
A detailed review of the paper 'FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling', posted on arXiv.
#Review#Function Calling#Multi-Turn Interaction#Large Language Models (LLMs)#Data Synthesis#Agentic AI#Tool Use#Chain-of-Thought (CoT)#Reinforcement Learning
October 29, 2025

[Paper Review] Reasoning in Space via Grounding in the World
A detailed review of the paper 'Reasoning in Space via Grounding in the World', posted on arXiv by Li Zhang.
#Review#3D Visual Grounding#Spatial Reasoning#Large Language Models (LLMs)#Chain-of-Thought (CoT)#Hybrid Representation#Multi-modal LLMs#Point Clouds
October 16, 2025

[Paper Review] ReFIne: A Framework for Trustworthy Large Reasoning Models with Reliability, Faithfulness, and Interpretability
A detailed review of the paper 'ReFIne: A Framework for Trustworthy Large Reasoning Models with Reliability, Faithfulness, and Interpretability', posted on arXiv by Tsui-Wei Weng.
#Review#Trustworthy AI#Large Reasoning Models (LRMs)#Interpretability#Faithfulness#Reliability#Chain-of-Thought (CoT)#Supervised Fine-tuning (SFT)#GRPO
October 15, 2025

[Paper Review] LLM Reasoning for Machine Translation: Synthetic Data Generation over Thinking Tokens
A detailed review of the paper 'LLM Reasoning for Machine Translation: Synthetic Data Generation over Thinking Tokens', posted on arXiv.
#Review#Large Language Models (LLMs)#Machine Translation (MT)#Chain-of-Thought (CoT)#Knowledge Distillation#Fine-tuning#Prompt Engineering#Synthetic Data
October 15, 2025
[Paper Review] Which Heads Matter for Reasoning? RL-Guided KV Cache Compression
A detailed review of the paper 'Which Heads Matter for Reasoning? RL-Guided KV Cache Compression', posted on arXiv by Huan Wang.
#Review#KV Cache Compression#Large Language Models (LLMs)#Reinforcement Learning (RL)#Reasoning Models#Attention Heads#Chain-of-Thought (CoT)#Memory Efficiency
October 13, 2025

[Paper Review] First Try Matters: Revisiting the Role of Reflection in Reasoning Models
A detailed review of the paper 'First Try Matters: Revisiting the Role of Reflection in Reasoning Models', posted on arXiv by Wee Sun Lee.
#Review#Large Language Models (LLMs)#Reasoning#Chain-of-Thought (CoT)#Reflection#Early Stopping#Supervised Fine-tuning (SFT)#Token Efficiency#Mathematical Reasoning
October 10, 2025

[Paper Review] Pushing on Multilingual Reasoning Models with Language-Mixed Chain-of-Thought
A detailed review of the paper 'Pushing on Multilingual Reasoning Models with Language-Mixed Chain-of-Thought', posted on arXiv.
#Review#Multilingual Reasoning#Chain-of-Thought (CoT)#Language-Mixed CoT#Instruction Tuning#Korean LLMs#Data Curation#Supervised Fine-tuning (SFT)
October 9, 2025

[Paper Review] Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning
A detailed review of the paper 'Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning', posted on arXiv by Zhuoshi Pan.
#Review#Code-Assisted Reasoning#Chain-of-Thought (CoT)#Instruction Tuning#Data Augmentation#LLMs#Mathematical Reasoning#Self-Verification#Code Generation
October 8, 2025
[Paper Review] Video-LMM Post-Training: A Deep Dive into Video Reasoning with Large Multimodal Models
A detailed review of the paper 'Video-LMM Post-Training: A Deep Dive into Video Reasoning with Large Multimodal Models', posted on arXiv by zeliang0426.
#Review#Video Reasoning#Large Multimodal Models (LMMs)#Post-training#Supervised Fine-tuning (SFT)#Reinforcement Learning (RL)#Test-Time Scaling (TTS)#Chain-of-Thought (CoT)
October 7, 2025

[Paper Review] CARFT: Boosting LLM Reasoning via Contrastive Learning with Annotated Chain-of-Thought-based Reinforced Fine-Tuning
A detailed review of the paper 'CARFT: Boosting LLM Reasoning via Contrastive Learning with Annotated Chain-of-Thought-based Reinforced Fine-Tuning', posted on arXiv by Yulun Zhang.
#Review#LLM Reasoning#Contrastive Learning#Reinforcement Learning#Fine-tuning#Chain-of-Thought (CoT)#Annotated Data#Model Stability
August 25, 2025

[Paper Review] Web-CogReasoner: Towards Knowledge-Induced Cognitive Reasoning for Web Agents
A detailed review of the paper 'Web-CogReasoner: Towards Knowledge-Induced Cognitive Reasoning for Web Agents', posted on arXiv by Xinyu Yang.
#Review#Web Agent#Cognitive Reasoning#Knowledge-Induced#Large Multimodal Models (LMMs)#Bloom's Taxonomy#Chain-of-Thought (CoT)#Web-CogDataset#Web-CogBench
August 7, 2025

[Paper Review] LaTCoder: Converting Webpage Design to Code with Layout-as-Thought
A detailed review of the paper 'LaTCoder: Converting Webpage Design to Code with Layout-as-Thought', posted on arXiv by Tianpeng Lv.
#Review#Design-to-Code#Webpage Generation#Multimodal Large Language Models (MLLMs)#Layout Preservation#Chain-of-Thought (CoT)#UI Automation#Code Generation
August 7, 2025