[Paper Review] RoboAlign: Learning Test-Time Reasoning for Language-Action Alignment in Vision-Language-Action Models
A detailed review of the paper posted on arXiv.
#Review #Vision-Language-Action Models (VLAs) #Multimodal Large Language Models (MLLMs) #Reinforcement Learning (RL) #Supervised Fine-tuning (SFT) #Embodied Reasoning #Low-level Actions #FAST tokenization #Robotics
March 23, 2026

[Paper Review] Astrolabe: Steering Forward-Process Reinforcement Learning for Distilled Autoregressive Video Models
A detailed review of the paper posted on arXiv by Jie Huang.
#Review #Video Generation #Distilled Autoregressive Models #Reinforcement Learning (RL) #Human Preferences #Streaming Generation #Forward-Process RL #Reward Hacking #Temporal Consistency
March 22, 2026

[Paper Review] Video-CoE: Reinforcing Video Event Prediction via Chain of Events
A detailed review of the paper posted on arXiv.
#Review #Video Event Prediction (VEP) #Multimodal Large Language Models (MLLMs) #Chain of Events (CoE) #Logical Reasoning #Visual Grounding #Reinforcement Learning (RL) #Supervised Fine-Tuning (SFT)
March 18, 2026

[Paper Review] TeamHOI: Learning a Unified Policy for Cooperative Human-Object Interactions with Any Team Size
A detailed review of the paper posted on arXiv.
#Review #Human-Object Interaction (HOI) #Reinforcement Learning (RL) #Transformer-based Policy #Adversarial Motion Prior (AMP) #Decentralized Policy #Multi-agent Systems #Scalable Coordination
March 12, 2026

[Paper Review] OpenClaw-RL: Train Any Agent Simply by Talking
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Agentic AI #Online Learning #Next-State Signals #Process Reward Models (PRM) #On-Policy Distillation (OPD) #Multi-Modal Agents
March 11, 2026

[Paper Review] Fish Audio S2 Technical Report
A detailed review of the paper posted on arXiv.
#Review #Text-to-Speech (TTS) #Multi-speaker #Multi-turn #Instruction Following #Dual-Autoregressive #Reinforcement Learning (RL) #Data Pipeline #SGLang
March 10, 2026

[Paper Review] Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training
A detailed review of the paper posted on arXiv.
#Review #Financial LLMs #Data-Centric AI #Distillation #Chain-of-Thought (CoT) #Reinforcement Learning (RL) #Supervised Fine-Tuning (SFT) #Difficulty-Aware Training #Data Quality
March 9, 2026

[Paper Review] π-StepNFT: Wider Space Needs Finer Steps in Online RL for Flow-based VLAs
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Flow-based Models #Vision-Language-Action (VLA) Models #Online Learning #Stochastic Differential Equation (SDE) #Contrastive Learning #Embodied AI #Robotics
March 8, 2026

[Paper Review] Reasoning Models Struggle to Control their Chains of Thought
A detailed review of the paper posted on arXiv.
#Review #Chain-of-Thought (CoT) #Model Controllability #AI Safety #Monitorability #Large Language Models (LLMs) #Reinforcement Learning (RL) #Evaluation Suite
March 8, 2026

[Paper Review] GeoWorld: Geometric World Models
A detailed review of the paper posted on arXiv by Richard Hartley.
#Review #Geometric World Models #Hyperbolic Geometry #Joint-Embedding Predictive Architectures (JEPA) #Reinforcement Learning (RL) #Multi-step Planning #Visual Planning #Energy-Based Models
February 26, 2026
[Paper Review] SenTSR-Bench: Thinking with Injected Knowledge for Time-Series Reasoning
A detailed review of the paper posted on arXiv by Haotian Lin.
#Review #Time-Series Reasoning #Knowledge Injection #Large Language Models (LLMs) #Reinforcement Learning (RL) #Diagnostic AI #Multimodal AI #SenTSR-Bench
February 23, 2026

[Paper Review] Understanding vs. Generation: Navigating Optimization Dilemma in Multimodal Models
A detailed review of the paper posted on arXiv by Liwei Wang.
#Review #Multimodal Models #Generative AI #Understanding #Reason-Reflect-Refine (R3) #Reinforcement Learning (RL) #Text-to-Image Generation #Optimization Dilemma #Image Editing
February 17, 2026

[Paper Review] Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters
A detailed review of the paper posted on arXiv.
#Review #Mixture-of-Experts (MoE) #Sparse Models #Inference Efficiency #Hybrid Attention #Multi-Token Prediction (MTP) #Reinforcement Learning (RL) #Agentic AI #Long-Context Understanding
February 11, 2026

[Paper Review] QP-OneModel: A Unified Generative LLM for Multi-Task Query Understanding in Xiaohongshu Search
A detailed review of the paper posted on arXiv by Hui Zhang.
#Review #Large Language Models (LLMs) #Query Understanding #Multi-Task Learning #Generative AI #Reinforcement Learning (RL) #Social Network Services (SNS) #Xiaohongshu #Search Engines
February 11, 2026

[Paper Review] Online Causal Kalman Filtering for Stable and Effective Policy Optimization
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Policy Optimization #Importance Sampling (IS) Ratio #Kalman Filter #Variance Reduction #Math Reasoning
February 11, 2026

[Paper Review] AgentCPM-Report: Interleaving Drafting and Deepening for Open-Ended Deep Research
A detailed review of the paper posted on arXiv.
#Review #Deep Research #Agentic Systems #Writing As Reasoning Policy (WARP) #Outline Generation #Iterative Refinement #Reinforcement Learning (RL) #Small Language Models
February 9, 2026

[Paper Review] POINTS-GUI-G: GUI-Grounding Journey
A detailed review of the paper posted on arXiv by Le Tian.
#Review #GUI Grounding #Vision-Language Models (VLMs) #Reinforcement Learning (RL) #Data Engineering #UI Automation #Perception-intensive AI
February 8, 2026

[Paper Review] Unified Personalized Reward Model for Vision Generation
A detailed review of the paper posted on arXiv.
#Review #Reward Model #Vision Generation #Personalized Learning #Context-Adaptive Reasoning #Direct Preference Optimization (DPO) #Reinforcement Learning (RL) #Multimodal Learning #Group Relative Policy Optimization (GRPO)
February 3, 2026

[Paper Review] Self-Improving Pretraining: using post-trained models to pretrain better models
A detailed review of the paper posted on arXiv.
#Review #Self-Improving Pretraining #Reinforcement Learning (RL) #Large Language Models (LLMs) #Quality Control #Factuality #Safety #Post-trained Models #Pretraining Data Augmentation
January 29, 2026

[Paper Review] Beyond Imitation: Reinforcement Learning for Active Latent Planning
A detailed review of the paper posted on arXiv by Wee Sun Lee.
#Review #Large Language Models (LLMs) #Chain-of-Thought (CoT) #Latent Reasoning #Reinforcement Learning (RL) #Variational Autoencoder (VAE) #Active Planning #Numerical Reasoning #Coherence Reward
January 29, 2026
[Paper Review] LongCat-Flash-Thinking-2601 Technical Report
A detailed review of the paper posted on arXiv.
#Review #Agentic AI #Large Language Models (LLMs) #Mixture-of-Experts (MoE) #Reinforcement Learning (RL) #Context Management #Scalable Training #Test-Time Reasoning #Open-Source Model
January 25, 2026

[Paper Review] Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Exploration Collapse #Strategy-level Diversity #Uniqueness-Aware Rewarding #Creative Problem Solving #Pass@k
January 15, 2026

[Paper Review] X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests
A detailed review of the paper posted on arXiv by Jane Luo.
#Review #Competitive Programming #Code LLMs #Synthetic Data Generation #Supervised Fine-tuning (SFT) #Reinforcement Learning (RL) #Dual Verification #Scaling Laws #SynthSmith
January 12, 2026

[Paper Review] ET-Agent: Incentivizing Effective Tool-Integrated Reasoning Agent via Behavior Calibration
A detailed review of the paper posted on arXiv.
#Review #Large Language Models (LLMs) #Tool-Integrated Reasoning (TIR) #Agent Behavior Calibration #Reinforcement Learning (RL) #Self-Evolving Data Flywheel #Action Space Exploration #Behavioral Efficiency
January 12, 2026

[Paper Review] Dr. Zero: Self-Evolving Search Agents without Training Data
A detailed review of the paper posted on arXiv by Shaoliang Nie.
#Review #Self-Evolution #Search Agents #Large Language Models (LLMs) #Data-Free Learning #Reinforcement Learning (RL) #Hop-Grouped Relative Policy Optimization (HRPO) #Question Answering #Multi-hop Reasoning
January 12, 2026

[Paper Review] Controllable Memory Usage: Balancing Anchoring and Innovation in Long-Term Human-Agent Interaction
A detailed review of the paper posted on arXiv by Zhengkang Guo.
#Review #Long-Term Human-Agent Interaction #Controllable Memory #Memory Anchoring #Large Language Models (LLMs) #Personalization #Reinforcement Learning (RL) #Supervised Fine-Tuning (SFT) #Memory Dependence
January 12, 2026

[Paper Review] VideoAuto-R1: Video Auto Reasoning via Thinking Once, Answering Twice
A detailed review of the paper posted on arXiv.
#Review #Video Understanding #Chain-of-Thought (CoT) #Reinforcement Learning (RL) #Adaptive Reasoning #Early Exit #Multimodal LLM #Video QA #Temporal Grounding
January 8, 2026

[Paper Review] Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting
A detailed review of the paper posted on arXiv.
#Review #Supervised Fine-Tuning (SFT) #Catastrophic Forgetting #Entropy-Adaptive Fine-Tuning (EAFT) #Large Language Models (LLMs) #Domain Adaptation #Reinforcement Learning (RL) #Confident Conflicts
January 7, 2026

[Paper Review] K-EXAONE Technical Report
A detailed review of the paper posted on arXiv.
#Review #Multilingual Language Model #Mixture-of-Experts (MoE) #Long Context #AI Safety #Korean AI #Foundation Model #Reinforcement Learning (RL)
January 5, 2026

[Paper Review] Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling
A detailed review of the paper posted on arXiv.
#Review #Reasoning #Small Language Models (SLMs) #Hybrid Architecture #Test-Time Scaling (TTS) #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #DeepConf #Computational Efficiency
January 5, 2026
[Paper Review] Scaling Open-Ended Reasoning to Predict the Future
A detailed review of the paper posted on arXiv.
#Review #Language Models #Forecasting #Open-Ended Reasoning #Reinforcement Learning (RL) #Data Generation #Calibration #Retrieval-Augmented Generation (RAG) #Future Prediction
December 31, 2025

[Paper Review] Training AI Co-Scientists Using Rubric Rewards
A detailed review of the paper posted on arXiv.
#Review #AI Co-Scientists #Research Plan Generation #Reinforcement Learning (RL) #Self-Grading #Rubric Rewards #Language Models (LLMs) #Scientific Discovery
December 29, 2025

[Paper Review] See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning
A detailed review of the paper posted on arXiv.
#Review #Multimodal Reasoning #Vision-Language Models (VLMs) #Perceptual Shaping #KL-Divergence #Chart Understanding #Data Augmentation #Reinforcement Learning (RL) #GRPO
December 28, 2025

[Paper Review] Multi-hop Reasoning via Early Knowledge Alignment
A detailed review of the paper posted on arXiv by Xuanjing Huang.
#Review #Retrieval-Augmented Generation (RAG) #Multi-hop Reasoning #Reinforcement Learning (RL) #Knowledge Alignment #Iterative RAG #Entropy Analysis #Plan Failure
December 24, 2025

[Paper Review] Reinforcement Learning for Self-Improving Agent with Skill Library
A detailed review of the paper posted on arXiv by Soumya Smruti Mishra.
#Review #Reinforcement Learning (RL) #LLM Agents #Skill Library #Self-Improvement #Sequential Rollout #AppWorld dataset #GRPO
December 23, 2025

[Paper Review] Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs
A detailed review of the paper posted on arXiv.
#Review #Latent Variable Models #Variational Autoencoder (VAE) #Reinforcement Learning (RL) #Exploration #Large Language Models (LLMs) #Vision-Language Models (VLMs) #Controllable Generation #Reasoning Strategies
December 22, 2025

[Paper Review] Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding
A detailed review of the paper posted on arXiv by Runtao Liu.
#Review #Multimodal Large Language Models (MLLMs) #Visual Degradation #Robustness #Reasoning Chains #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #Degradation-Aware Reasoning #Interpretability
December 21, 2025

[Paper Review] Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning
A detailed review of the paper posted on arXiv.
#Review #AI-Generated Video Detection #Multimodal Large Language Model (MLLM) #Artifact Reasoning #Explainable AI #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #Video Forensics
December 17, 2025

[Paper Review] Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection
A detailed review of the paper posted on arXiv.
#Review #Active Perception #Vision-Language Models (VLMs) #Embodied AI #View Selection #Reinforcement Learning (RL) #Supervised Fine-Tuning (SFT) #Visual Question Answering (VQA) #3D Environments
December 15, 2025

[Paper Review] On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Pre-training #Mid-training #Reasoning LMs #Generalization #Synthetic Reasoning Tasks #Process-level Supervision
December 8, 2025
[Paper Review] EditThinker: Unlocking Iterative Reasoning for Any Image Editor
A detailed review of the paper posted on arXiv by Ziyu Guo.
#Review #Image Editing #Iterative Reasoning #Multimodal Large Language Model (MLLM) #Reinforcement Learning (RL) #Instruction Following #Critique-Refine-Repeat Cycle #Think-while-Edit
December 7, 2025

[Paper Review] On GRPO Collapse in Search-R1: The Lazy Likelihood-Displacement Death Spiral
A detailed review of the paper posted on arXiv by Christos Thrampoulidis.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Tool-Integrated Reasoning (TIR) #GRPO #Training Stability #Lazy Likelihood Displacement (LLD) #Regularization #Search-R1
December 4, 2025

[Paper Review] Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization
A detailed review of the paper posted on arXiv.
#Review #Chain-of-Thought (CoT) #Vision-Language Models (VLMs) #Visual Reasoning #Generalization #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #Grounding CoT #Maze Solving
December 2, 2025

[Paper Review] Artemis: Structured Visual Reasoning for Perception Policy Learning
A detailed review of the paper posted on arXiv by Piotr Koniusz.
#Review #Visual Reasoning #Multimodal Large Language Models (MLLM) #Reinforcement Learning (RL) #Perception Policy Learning #Object Grounding #Object Detection #Structured Output
December 2, 2025

[Paper Review] Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Policy Gradient #REINFORCE #Mixture-of-Experts (MoE) #Training Stability #Importance Sampling #Routing Replay #Off-policy Learning
December 1, 2025

[Paper Review] DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning
A detailed review of the paper posted on arXiv.
#Review #Mathematical Reasoning #Large Language Models (LLMs) #Proof Verification #Self-Verification #Reinforcement Learning (RL) #Theorem Proving #Meta-Verification #Iterative Refinement
November 30, 2025

[Paper Review] Monet: Reasoning in Latent Visual Space Beyond Images and Language
A detailed review of the paper posted on arXiv by Pengfei Wan.
#Review #Latent Visual Reasoning #Multimodal Large Language Models (MLLMs) #Supervised Fine-tuning (SFT) #Reinforcement Learning (RL) #Visual-latent Policy Optimization (VLPO) #Chain-of-Thought (CoT) #Abstract Visual Thinking
November 26, 2025

[Paper Review] MobileVLA-R1: Reinforcing Vision-Language-Action for Mobile Robots
A detailed review of the paper posted on arXiv by Rui Yang.
#Review #Vision-Language-Action (VLA) #Mobile Robotics #Quadruped Robots #Chain-of-Thought (CoT) #Reinforcement Learning (RL) #Embodied AI #Multimodal Perception
November 26, 2025

[Paper Review] Scaling Agentic Reinforcement Learning for Tool-Integrated Reasoning in VLMs
A detailed review of the paper posted on arXiv.
#Review #Vision-Language Models (VLMs) #Reinforcement Learning (RL) #Tool-Integrated Reasoning (TIR) #Agentic AI #VQA #Training Environment #Behavioral Cloning #Policy Optimization
November 25, 2025

[Paper Review] Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation
A detailed review of the paper posted on arXiv by Xinyan Chen.
#Review #Visual Generation #Textual Reasoning #Interleaving #Large Multimodal Models (LMMs) #Chain-of-Thought (CoT) #Zero-shot Learning #Supervised Fine-tuning (SFT) #Reinforcement Learning (RL)
November 20, 2025
[Paper Review] WMPO: World Model-based Policy Optimization for Vision-Language-Action Models
A detailed review of the paper posted on arXiv.
#Review #Vision-Language-Action (VLA) #Reinforcement Learning (RL) #Model-based RL #World Models #Policy Optimization #Robotics #Sample Efficiency #Self-correction
November 12, 2025

[Paper Review] LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls
A detailed review of the paper posted on arXiv.
#Review #Large Language Models (LLMs) #Tool Learning #Data Generation #Model Training #Closed-Loop Framework #Reinforcement Learning (RL) #Data Refinement #Self-Correction
November 12, 2025

[Paper Review] Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B
A detailed review of the paper posted on arXiv.
#Review #Small Language Models #Reasoning #Diversity Optimization #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #Spectrum-to-Signal Principle (SSP) #Mathematical Reasoning #Code Generation
November 11, 2025

[Paper Review] Ariadne: A Controllable Framework for Probing and Extending VLM Reasoning Boundaries
A detailed review of the paper posted on arXiv by Zhengzhong Tu.
#Review #Vision-Language Models (VLMs) #Reinforcement Learning (RL) #Spatial Reasoning #Controllable Framework #RLVR #GRPO #Maze Navigation #Generalization Boundaries
November 10, 2025

[Paper Review] π_RL: Online RL Fine-tuning for Flow-based Vision-Language-Action Models
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Vision-Language-Action Models (VLAs) #Flow-based Models #Policy Optimization #Robotics #Flow Matching #SDE #MDP
November 9, 2025

[Paper Review] PORTool: Tool-Use LLM Training with Rewarded Tree
A detailed review of the paper posted on arXiv.
#Review #Tool-Use LLM #Reinforcement Learning (RL) #Policy Optimization #Rewarded Tree #Trajectory Optimization #Agentic System #Dynamic Tool Call
October 31, 2025

[Paper Review] MedVLSynther: Synthesizing High-Quality Visual Question Answering from Medical Documents with Generator-Verifier LMMs
A detailed review of the paper posted on arXiv.
#Review #Medical VQA #Large Multimodal Models (LMMs) #Data Synthesis #Generator-Verifier Framework #Rubric-Guided #Reinforcement Learning (RL) #Context-Aware
October 31, 2025

[Paper Review] Evolving Diagnostic Agents in a Virtual Clinical Environment
A detailed review of the paper posted on arXiv.
#Review #Large Language Models (LLMs) #Diagnostic Agents #Reinforcement Learning (RL) #Virtual Clinical Environment #Medical AI #Multi-turn Diagnosis #EHR (Electronic Health Records)
October 30, 2025

[Paper Review] UI-Ins: Enhancing GUI Grounding with Multi-Perspective Instruction-as-Reasoning
A detailed review of the paper posted on arXiv.
#Review #GUI Grounding #Natural Language Instructions #Multi-Perspective Reasoning #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #Policy Collapse Mitigation #GUI Agents
October 27, 2025

[Paper Review] Reasoning with Sampling: Your Base Model is Smarter Than You Think
A detailed review of the paper posted on arXiv.
#Review #LLMs #MCMC #Sampling #Reasoning #Distribution Sharpening #Reinforcement Learning (RL) #Inference-time Optimization #Training-free
October 27, 2025
[Paper Review] Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense
A detailed review of the paper posted on arXiv.
#Review #Large Reasoning Models (LRMs) #Prompt Injection #Adversarial Attack #Reasoning Distraction #Chain-of-Thought #Robustness #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL)
October 21, 2025

[Paper Review] Stronger Together: On-Policy Reinforcement Learning for Collaborative LLMs
A detailed review of the paper posted on arXiv by Hao Zhang.
#Review #Large Language Models (LLMs) #Reinforcement Learning (RL) #Multi-Agent Systems (MAS) #On-Policy RL #Collaborative AI #Agentic LLMs #Group-based Optimization
October 16, 2025

[Paper Review] MATH-Beyond: A Benchmark for RL to Expand Beyond the Base Model
A detailed review of the paper posted on arXiv by Wieland Brendel.
#Review #Reinforcement Learning (RL) #Mathematical Reasoning #Benchmark #Large Language Models (LLMs) #Exploration #Boundary Expansion #MATH-Beyond
October 16, 2025

[Paper Review] ERA: Transforming VLMs into Embodied Agents via Embodied Prior Learning and Online Reinforcement Learning
A detailed review of the paper posted on arXiv.
#Review #Embodied AI #Vision Language Models (VLMs) #Reinforcement Learning (RL) #Prior Learning #Supervised Fine-tuning (SFT) #Embodied Agents
October 15, 2025

[Paper Review] Which Heads Matter for Reasoning? RL-Guided KV Cache Compression
A detailed review of the paper posted on arXiv by Huan Wang.
#Review #KV Cache Compression #Large Language Models (LLMs) #Reinforcement Learning (RL) #Reasoning Models #Attention Heads #Chain-of-Thought (CoT) #Memory Efficiency
October 13, 2025

[Paper Review] Webscale-RL: Automated Data Pipeline for Scaling RL Data to Pretraining Levels
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Data Pipeline #Web-scale Data #Question-Answering (QA) #Data Generation #Data Diversity #Data Efficiency
October 13, 2025

[Paper Review] TTRV: Test-Time Reinforcement Learning for Vision Language Models
A detailed review of the paper posted on arXiv by Serena Yeung-Levy.
#Review #Vision-Language Models (VLMs) #Reinforcement Learning (RL) #Test-Time Adaptation #Unsupervised Learning #Image Recognition #Visual Question Answering (VQA) #Group Relative Policy Optimization (GRPO) #Entropy Regularization
October 9, 2025

[Paper Review] In-the-Flow Agentic System Optimization for Effective Planning and Tool Use
A detailed review of the paper posted on arXiv.
#Review #Agentic Systems #Large Language Models (LLMs) #Tool Use #Reinforcement Learning (RL) #On-policy Optimization #Flow-based Group Refined Policy Optimization (Flow-GRPO) #Multi-turn Reasoning
October 8, 2025

[Paper Review] Video-LMM Post-Training: A Deep Dive into Video Reasoning with Large Multimodal Models
A detailed review of the paper posted on arXiv by zeliang0426.
#Review #Video Reasoning #Large Multimodal Models (LMMs) #Post-training #Supervised Fine-tuning (SFT) #Reinforcement Learning (RL) #Test-Time Scaling (TTS) #Chain-of-Thought (CoT)
October 7, 2025

[Paper Review] Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training
A detailed review of the paper posted on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Adaptive Sampling #Policy Gradient #Reward Optimization #Signal Collapse #Variance Reduction
October 7, 2025
[Paper Review] Knapsack RL: Unlocking Exploration of LLMs via Optimizing Budget Allocation
A detailed review of the paper posted on arXiv.
#Review #Large Language Models (LLMs) #Reinforcement Learning (RL) #Exploration Budget Allocation #Knapsack Problem #Group Relative Policy Optimization (GRPO) #Mathematical Reasoning #Resource Optimization
October 2, 2025

[Paper Review] Thinking Sparks!: Emergent Attention Heads in Reasoning Models During Post Training
A detailed review of the paper posted on arXiv.
#Review #Mechanistic Interpretability #Attention Heads #Post-Training #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #Circuit Analysis #Reasoning Models #Transformer Architecture
October 1, 2025

[Paper Review] Residual Off-Policy RL for Finetuning Behavior Cloning Policies
A detailed review of the paper posted on arXiv by Pieter Abbeel.
#Review #Reinforcement Learning (RL) #Behavior Cloning (BC) #Residual Learning #Off-Policy RL #Robot Manipulation #Real-World Robotics #High-DoF Systems #Sample Efficiency
September 26, 2025

[Paper Review] Logics-Parsing Technical Report
A detailed review of the paper posted on arXiv by Fan Yang.
#Review #Document Parsing #Large Vision-Language Models (LVLM) #Reinforcement Learning (RL) #Layout Analysis #Reading Order #Supervised Fine-Tuning (SFT) #HTML Annotation #Benchmarking
September 25, 2025

[Paper Review] GeoPQA: Bridging the Visual Perception Gap in MLLMs for Geometric Reasoning
A detailed review of the paper posted on arXiv by Hou Pong Chan.
#Review #Multimodal Large Language Models (MLLMs) #Geometric Reasoning #Visual Perception #Reinforcement Learning (RL) #Two-stage Training #GeoPQA Benchmark #Perceptual Bottleneck
September 23, 2025

[Paper Review] A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning
A detailed review of the paper posted on arXiv by Jiangmiao.
#Review #Robotics #Reinforcement Learning (RL) #Vision-Language-Action (VLA) Models #Reward Modeling #Human-in-the-Loop #Dense Rewards #Generalization #Autoregressive Models
September 22, 2025

[Paper Review] Improving Context Fidelity via Native Retrieval-Augmented Reasoning
A detailed review of the paper posted on arXiv by Xiangru Tang.
#Review #Context Fidelity #Retrieval-Augmented Generation (RAG) #Large Language Models (LLMs) #Reinforcement Learning (RL) #Supervised Fine-Tuning (SFT) #Hallucination #Question Answering #In-context Retrieval #Curriculum Learning
September 18, 2025

[Paper Review] Visual Programmability: A Guide for Code-as-Thought in Chart Understanding
A detailed review of the paper posted on arXiv by Ethan Chern.
#Review #Visual Programmability #Code-as-Thought (CaT) #Chart Understanding #Vision-Language Models (VLMs) #Reinforcement Learning (RL) #Adaptive Reasoning #Dual-Reward System #Multimodal AI
September 12, 2025

[Paper Review] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
A detailed review of the paper posted on arXiv by Zhaohui Yang.
#Review #Reinforcement Learning (RL) #Vision-Language-Action (VLA) Models #Robotic Manipulation #Data Scarcity #Generalization #Sim-to-Real Transfer #Online RL #Long-Horizon Planning
September 12, 2025

[Paper Review] WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents
A detailed review of the paper posted on arXiv by Aili Chen.
#Review #Web Agents #Long-Horizon Reasoning #Large Language Models (LLMs) #Data Generation #Reinforcement Learning (RL) #Supervised Fine-tuning (SFT) #Web Navigation #Information Retrieval
September 9, 2025
[Paper Review] Scaling up Multi-Turn Off-Policy RL and Multi-Agent Tree Search for LLM Step-Provers
A detailed review of the paper posted on arXiv by Xia Xiao.
#Review #LLM Step-Provers #Reinforcement Learning (RL) #Off-Policy RL #Multi-Agent Systems #Tree Search #Automated Theorem Proving (ATP) #Formal Mathematics #AlphaZero
September 9, 2025

[Paper Review] Bootstrapping Task Spaces for Self-Improvement
A detailed review of the paper posted on arXiv by Yoram Bachrach.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Self-Improvement #Autocurriculum #Task-Space Exploration #Inference-Time Iteration #Policy Optimization
September 8, 2025

[Paper Review] Towards a Unified View of Large Language Model Post-Training
A detailed review of the paper posted on arXiv by Hongyi Liu.
#Review #Large Language Models (LLMs) #Post-Training #Reinforcement Learning (RL) #Supervised Fine-Tuning (SFT) #Policy Gradient #Unified Framework #Hybrid Algorithms #Bias-Variance Tradeoff
September 5, 2025

[Paper Review] Robix: A Unified Model for Robot Interaction, Reasoning and Planning
A detailed review of the paper posted on arXiv by Zixuan Wang.
#Review #Robot Learning #Vision-Language Models (VLMs) #Embodied AI #Human-Robot Interaction (HRI) #Task Planning #Reinforcement Learning (RL) #Chain-of-Thought (CoT) Reasoning #Robotics
September 4, 2025

[Paper Review] LLaVA-Critic-R1: Your Critic Model is Secretly a Strong Policy Model
A detailed review of the paper posted on arXiv by Jianwei Yang.
#Review #Vision-Language Models (VLMs) #Critic Models #Policy Models #Reinforcement Learning (RL) #Self-Criticism #Multimodal Reasoning #Preference Learning #Generative Models
September 3, 2025

[Paper Review] R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Annealing and Reinforce Learning
A detailed review of the paper posted on arXiv by Han Hu.
#Review #Multimodal Large Language Models (MLLMs) #Auto-Thinking #Reinforcement Learning (RL) #Bi-mode Annealing #Bi-mode Policy Optimization (BPO) #General-Purpose AI #Reasoning #Efficiency
September 1, 2025

[Paper Review] Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments
A detailed review of the paper posted on arXiv by Xuesong Yao.
#Review #Large Language Models (LLMs) #Tool Use #Reinforcement Learning (RL) #Automated Environment Generation #Feedback-Driven Training #Reward Mechanism #Contextual Understanding
August 13, 2025

[Paper Review] Reinforcement Learning in Vision: A Survey
A detailed review of the paper posted on arXiv by Qingwei Meng.
#Review #Reinforcement Learning (RL) #Computer Vision (CV) #Multimodal Large Language Models (MLLMs) #Visual Generation #Vision-Language-Action (VLA) Models #Policy Optimization #Reward Modeling
August 12, 2025

[Paper Review] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification
A detailed review of the paper posted on arXiv by Xinyu Ye.
#Review #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #Generalization #Reward Rectification #Dynamic Fine-Tuning (DFT) #LLM #Policy Gradient #Mathematical Reasoning
August 8, 2025

[Paper Review] Tool-integrated Reinforcement Learning for Repo Deep Search
A detailed review of the paper posted on arXiv by Yanzhen Zou.
#Review #Issue Localization #Large Language Models (LLMs) #Reinforcement Learning (RL) #Supervised Fine-tuning (SFT) #Tool-integrated Agents #Software Engineering #Code Search
August 6, 2025