[Paper Review] ThinkTwice: Jointly Optimizing Large Language Models for Reasoning and Self-Refinement
This paper proposes ThinkTwice, which combines reasoning optimization and self-refinement optimization within a single GRPO framework. At each training step, the model first solves a reasoning problem and then, in a second pass, refines its own previous answer to the same problem (thinking twice).
#Review #Large Language Models #Reinforcement Learning #Reasoning #Self-Refinement #RLVR #Policy Optimization #Implicit Curriculum
April 7, 2026
[Paper Review] Unifying Group-Relative and Self-Distillation Policy Optimization via Sample Routing
The authors propose SRPO (Sample-Routed Policy Optimization), which assigns each sample the optimization scheme appropriate to its learning state. SRPO applies GRPO's reward-aligned reinforcement to correct samples, and SDPO's precise logit-level correction to erroneous samples for which feedback information is available.
#Review #RLVR #GRPO #SDPO #Sample Routing #Policy Optimization #Self-Distillation
April 6, 2026
[Paper Review] FIPO: Eliciting Deep Reasoning with Future-KL Influenced Policy Optimization
A detailed review of 'FIPO: Eliciting Deep Reasoning with Future-KL Influenced Policy Optimization', posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Future-KL #Policy Optimization #GRPO #Chain-of-Thought #Credit Assignment
March 31, 2026

[Paper Review] Hindsight Credit Assignment for Long-Horizon LLM Agents
A detailed review of 'Hindsight Credit Assignment for Long-Horizon LLM Agents' by Yi Wen, posted on arXiv.
#Review #LLM Agents #Reinforcement Learning #Credit Assignment #Hindsight Credit Assignment #Policy Optimization #Sparse Rewards #Long-Horizon Tasks #Generative Verification
March 11, 2026

[Paper Review] CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR
A detailed review of 'CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR' by Jiajun Song, posted on arXiv.
#Review #Reinforcement Learning #Verifiable Rewards (RLVR) #Contrastive Learning (CL) #Policy Optimization #Large Language Models (LLMs) #Generalization #Robustness #Reasoning Tasks
March 11, 2026

[Paper Review] Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards
A detailed review of 'Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards', posted on arXiv.
#Review #Reinforcement Learning #LLM Calibration #Over-confidence #Decoupled Optimization #Verifiable Rewards #Policy Optimization #Expected Calibration Error
March 10, 2026

[Paper Review] BandPO: Bridging Trust Regions and Ratio Clipping via Probability-Aware Bounds for LLM Reinforcement Learning
A detailed review of 'BandPO: Bridging Trust Regions and Ratio Clipping via Probability-Aware Bounds for LLM Reinforcement Learning', posted on arXiv.
#Review #LLM Reinforcement Learning #Trust Region #Policy Optimization #Ratio Clipping #f-divergence #Entropy Regularization #Exploration #BandPO
March 8, 2026

[Paper Review] Heterogeneous Agent Collaborative Reinforcement Learning
A detailed review of 'Heterogeneous Agent Collaborative Reinforcement Learning', posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Multi-Agent Systems #Policy Optimization #Heterogeneous Agents #Sample Efficiency #Knowledge Transfer #RLVR
March 4, 2026

[Paper Review] InfoPO: Information-Driven Policy Optimization for User-Centric Agents
A detailed review of 'InfoPO: Information-Driven Policy Optimization for User-Centric Agents' by Yuyu Luo, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Policy Optimization #Information Gain #Credit Assignment #Multi-turn Interaction #User-centric Agents #Counterfactual Reasoning
March 3, 2026

[Paper Review] ARLArena: A Unified Framework for Stable Agentic Reinforcement Learning
A detailed review of 'ARLArena: A Unified Framework for Stable Agentic Reinforcement Learning', posted on arXiv.
#Review #Agentic Reinforcement Learning #LLM #Policy Optimization #Training Stability #Importance Sampling Clipping #Advantage Design #Dynamic Filtering #ARLArena #SAMPO
February 25, 2026
[Paper Review] DSDR: Dual-Scale Diversity Regularization for Exploration in LLM Reasoning
A detailed review of 'DSDR: Dual-Scale Diversity Regularization for Exploration in LLM Reasoning' by Donghao Zhou, posted on arXiv.
#Review #Large Language Models (LLM) #Reinforcement Learning with Verifiers (RLVR) #Exploration #Diversity Regularization #Dual-Scale #Reasoning #Policy Optimization
February 23, 2026

[Paper Review] STAPO: Stabilizing Reinforcement Learning for LLMs by Silencing Rare Spurious Tokens
A detailed review of 'STAPO: Stabilizing Reinforcement Learning for LLMs by Silencing Rare Spurious Tokens' by Zhilong Zheng, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Training Stability #Policy Optimization #Spurious Tokens #Entropy Regularization #Gradient Modulation
February 17, 2026

[Paper Review] Experiential Reinforcement Learning
A detailed review of 'Experiential Reinforcement Learning', posted on arXiv.
#Review #Reinforcement Learning #Language Models #Self-Reflection #Experiential Learning #Policy Optimization #Distillation #Agentic Reasoning
February 16, 2026

[Paper Review] Online Causal Kalman Filtering for Stable and Effective Policy Optimization
A detailed review of 'Online Causal Kalman Filtering for Stable and Effective Policy Optimization', posted on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Policy Optimization #Importance Sampling (IS) Ratio #Kalman Filter #Variance Reduction #Math Reasoning
February 11, 2026

[Paper Review] On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models
A detailed review of 'On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models' by Yanxi Chen, posted on arXiv.
#Review #Reinforcement Fine-Tuning (RFT) #Large Language Models (LLMs) #Entropy Dynamics #Exploration-Exploitation #Policy Optimization #GRPO #Entropy Control #Discriminator Score
February 8, 2026

[Paper Review] F-GRPO: Don't Let Your Policy Learn the Obvious and Forget the Rare
A detailed review of 'F-GRPO: Don't Let Your Policy Learn the Obvious and Forget the Rare', posted on arXiv.
#Review #Reinforcement Learning #LLM #Policy Optimization #Reward Models #Diversity Preservation #Focal Loss #Group Sampling #Mathematical Reasoning
February 8, 2026

[Paper Review] Multi-Task GRPO: Reliable LLM Reasoning Across Tasks
A detailed review of 'Multi-Task GRPO: Reliable LLM Reasoning Across Tasks' by Zhiyong Wang, posted on arXiv.
#Review #Large Language Models (LLMs) #Multi-Task Learning #Reinforcement Learning #Policy Optimization #GRPO #Task Reweighting #Robustness #Reasoning Benchmarks
February 5, 2026

[Paper Review] Length-Unbiased Sequence Policy Optimization: Revealing and Controlling Response Length Variation in RLVR
A detailed review of 'Length-Unbiased Sequence Policy Optimization: Revealing and Controlling Response Length Variation in RLVR' by Zhixiong Zeng, posted on arXiv.
#Review #Reinforcement Learning with Verifiable Rewards #LLMs #Policy Optimization #Response Length Bias #Sequence-level Clipping #Length-Unbiased Optimization #Multimodal Reasoning
February 5, 2026
[Paper Review] LatentMem: Customizing Latent Memory for Multi-Agent Systems
A detailed review of 'LatentMem: Customizing Latent Memory for Multi-Agent Systems' by Zefeng He, posted on arXiv.
#Review #Multi-Agent Systems #LLM Memory #Latent Representation #Role-Aware #Token Efficiency #Policy Optimization #Continual Adaptation
February 5, 2026

[Paper Review] Self-Hinting Language Models Enhance Reinforcement Learning
A detailed review of 'Self-Hinting Language Models Enhance Reinforcement Learning', posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #GRPO #Sparse Rewards #Self-Hinting #Policy Optimization #Adaptive Curriculum #On-Policy Training
February 4, 2026

[Paper Review] Rethinking the Trust Region in LLM Reinforcement Learning
A detailed review of 'Rethinking the Trust Region in LLM Reinforcement Learning', posted on arXiv.
#Review #LLM #Reinforcement Learning #Trust Region #PPO #DPPO #Policy Optimization #Training Stability #Divergence Approximation
February 4, 2026

[Paper Review] Less Noise, More Voice: Reinforcement Learning for Reasoning via Instruction Purification
A detailed review of 'Less Noise, More Voice: Reinforcement Learning for Reasoning via Instruction Purification', posted on arXiv.
#Review #Reinforcement Learning #LLM Reasoning #Instruction Purification #Interference Tokens #Sample Efficiency #Policy Optimization #Verifiable Rewards
February 3, 2026

[Paper Review] Spark: Strategic Policy-Aware Exploration via Dynamic Branching for Long-Horizon Agentic Learning
A detailed review of 'Spark: Strategic Policy-Aware Exploration via Dynamic Branching for Long-Horizon Agentic Learning' by Shuai Zhang, posted on arXiv.
#Review #Agentic AI #Reinforcement Learning #Long-Horizon Tasks #Dynamic Branching #Strategic Exploration #LLM Agents #Sample Efficiency #Policy Optimization
January 28, 2026

[Paper Review] Reinforcement Learning via Self-Distillation
A detailed review of 'Reinforcement Learning via Self-Distillation', posted on arXiv.
#Review #Reinforcement Learning #Self-Distillation #Large Language Models (LLMs) #Rich Feedback #Credit Assignment #Policy Optimization #RLHF #Code Generation #Test-Time Training
January 28, 2026

[Paper Review] Harder Is Better: Boosting Mathematical Reasoning via Difficulty-Aware GRPO and Multi-Aspect Question Reformulation
A detailed review of 'Harder Is Better: Boosting Mathematical Reasoning via Difficulty-Aware GRPO and Multi-Aspect Question Reformulation', posted on arXiv.
#Review #Reinforcement Learning #Mathematical Reasoning #Difficulty-Aware Optimization #Data Augmentation #Policy Optimization #LLMs #GRPO #MQR
January 28, 2026

[Paper Review] GDPO: Group reward-Decoupled Normalization Policy Optimization for Multi-reward RL Optimization
A detailed review of 'GDPO: Group reward-Decoupled Normalization Policy Optimization for Multi-reward RL Optimization', posted on arXiv.
#Review #Multi-reward RL #Policy Optimization #Reward Normalization #GRPO #GDPO #LLMs #Training Stability
January 8, 2026
[Paper Review] AT^2PO: Agentic Turn-based Policy Optimization via Tree Search
A detailed review of 'AT^2PO: Agentic Turn-based Policy Optimization via Tree Search', posted on arXiv.
#Review #Agentic RL #Multi-turn Tasks #Policy Optimization #Tree Search #Credit Assignment #Exploration Diversity #LLM Agents
January 8, 2026

[Paper Review] Let It Flow: Agentic Crafting on Rock and Roll, Building the ROME Model within an Open Agentic Learning Ecosystem
A detailed review of 'Let It Flow: Agentic Crafting on Rock and Roll, Building the ROME Model within an Open Agentic Learning Ecosystem' by Wei Gao, posted on arXiv.
#Review #Agentic Learning Ecosystem #Large Language Models #Reinforcement Learning #Agentic Crafting #Tool Use #ROME Model #Policy Optimization #Sandbox Environment
December 31, 2025

[Paper Review] Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies
A detailed review of 'Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies', posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Policy Optimization #Interpretability #Transformer #Internal Policy #Entropy Analysis
December 23, 2025

[Paper Review] Native Parallel Reasoner: Reasoning in Parallelism via Self-Distilled Reinforcement Learning
A detailed review of 'Native Parallel Reasoner: Reasoning in Parallelism via Self-Distilled Reinforcement Learning', posted on arXiv.
#Review #Large Language Models (LLMs) #Parallel Reasoning #Self-Distilled Reinforcement Learning #Policy Optimization #Inference Acceleration #Structured Output #Agentic Reasoning
December 8, 2025

[Paper Review] From Imitation to Discrimination: Toward A Generalized Curriculum Advantage Mechanism Enhancing Cross-Domain Reasoning Tasks
A detailed review of 'From Imitation to Discrimination: Toward A Generalized Curriculum Advantage Mechanism Enhancing Cross-Domain Reasoning Tasks' by Yang Li, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Curriculum Learning #Advantage Function #Reasoning Tasks #Multimodal AI #Policy Optimization #Generalization
December 7, 2025

[Paper Review] Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning
A detailed review of 'Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning' by Zijia Lin, posted on arXiv.
#Review #Reinforcement Learning #Policy Optimization #Trust Region #Entropy Clipping #Large Language Models #Training Stability #Distributional Shift
December 7, 2025

[Paper Review] SeeNav-Agent: Enhancing Vision-Language Navigation with Visual Prompt and Step-Level Policy Optimization
A detailed review of 'SeeNav-Agent: Enhancing Vision-Language Navigation with Visual Prompt and Step-Level Policy Optimization', posted on arXiv.
#Review #Vision-Language Navigation #Large Vision-Language Models #Visual Prompt #Reinforcement Fine-Tuning #Policy Optimization #Embodied AI #Spatial Reasoning #Perception Errors
December 4, 2025

[Paper Review] CodeV: Code with Images for Faithful Visual Reasoning via Tool-Aware Policy Optimization
A detailed review of 'CodeV: Code with Images for Faithful Visual Reasoning via Tool-Aware Policy Optimization', posted on arXiv.
#Review #Vision-Language Models #Agentic Reasoning #Tool Use #Reinforcement Learning #Faithfulness Evaluation #Policy Optimization #Visual Search #Code Generation
December 2, 2025
[Paper Review] HiconAgent: History Context-aware Policy Optimization for GUI Agents
A detailed review of 'HiconAgent: History Context-aware Policy Optimization for GUI Agents' by Kaiwen Zhou, posted on arXiv.
#Review #GUI Agents #Reinforcement Learning #Context-aware #History Compression #Policy Optimization #Multimodal LLM #Dynamic Sampling
December 1, 2025

[Paper Review] Soft Adaptive Policy Optimization
A detailed review of 'Soft Adaptive Policy Optimization', posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Policy Optimization #Importance Ratios #Soft Clipping #Trust Region #Mixture-of-Experts #Asymmetric Temperature
November 25, 2025

[Paper Review] Scaling Agentic Reinforcement Learning for Tool-Integrated Reasoning in VLMs
A detailed review of 'Scaling Agentic Reinforcement Learning for Tool-Integrated Reasoning in VLMs', posted on arXiv.
#Review #Vision-Language Models (VLMs) #Reinforcement Learning (RL) #Tool-Integrated Reasoning (TIR) #Agentic AI #VQA #Training Environment #Behavioral Cloning #Policy Optimization
November 25, 2025

[Paper Review] VIDEOP2R: Video Understanding from Perception to Reasoning
A detailed review of 'VIDEOP2R: Video Understanding from Perception to Reasoning', posted on arXiv.
#Review #Video Understanding #Reinforcement Fine-Tuning (RFT) #Large Video Language Models (LVLMs) #Perception and Reasoning #Chain-of-Thought (CoT) #Process-Aware Learning #Policy Optimization #Credit Assignment
November 18, 2025

[Paper Review] Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning
A detailed review of 'Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning' by Yucong Luo, posted on arXiv.
#Review #LLM Agents #Reinforcement Learning #Markov Decision Process #Tool Use #Multi-turn Interaction #Policy Optimization #Reward Shaping #Agent Framework
November 18, 2025

[Paper Review] SafeGRPO: Self-Rewarded Multimodal Safety Alignment via Rule-Governed Policy Optimization
A detailed review of 'SafeGRPO: Self-Rewarded Multimodal Safety Alignment via Rule-Governed Policy Optimization' by Bo Du, posted on arXiv.
#Review #Multimodal Safety Alignment #Rule-Governed RL #Self-Rewarded Learning #MLLM Safety #Policy Optimization #Safety Benchmarking #Compositional Robustness
November 17, 2025

[Paper Review] WMPO: World Model-based Policy Optimization for Vision-Language-Action Models
A detailed review of 'WMPO: World Model-based Policy Optimization for Vision-Language-Action Models', posted on arXiv.
#Review #Vision-Language-Action (VLA) #Reinforcement Learning (RL) #Model-based RL #World Models #Policy Optimization #Robotics #Sample Efficiency #Self-correction
November 12, 2025

[Paper Review] SofT-GRPO: Surpassing Discrete-Token LLM Reinforcement Learning via Gumbel-Reparameterized Soft-Thinking Policy Optimization
A detailed review of 'SofT-GRPO: Surpassing Discrete-Token LLM Reinforcement Learning via Gumbel-Reparameterized Soft-Thinking Policy Optimization', posted on arXiv.
#Review #LLM #Reinforcement Learning #Soft-Thinking #Gumbel Reparameterization #Policy Optimization #Chain-of-Thought (CoT) #GRPO
November 10, 2025
[Paper Review] π_RL: Online RL Fine-tuning for Flow-based Vision-Language-Action Models
A detailed review of 'π_RL: Online RL Fine-tuning for Flow-based Vision-Language-Action Models', posted on arXiv.
#Review #Reinforcement Learning (RL) #Vision-Language-Action Models (VLAs) #Flow-based Models #Policy Optimization #Robotics #Flow Matching #SDE #MDP
November 9, 2025

[Paper Review] PORTool: Tool-Use LLM Training with Rewarded Tree
A detailed review of 'PORTool: Tool-Use LLM Training with Rewarded Tree', posted on arXiv.
#Review #Tool-Use LLM #Reinforcement Learning (RL) #Policy Optimization #Rewarded Tree #Trajectory Optimization #Agentic System #Dynamic Tool Call
October 31, 2025

[Paper Review] Reasoning-Aware GRPO using Process Mining
A detailed review of 'Reasoning-Aware GRPO using Process Mining', posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Process Mining #Policy Optimization #Mathematical Reasoning #GRPO #PM4GRPO
October 30, 2025

[Paper Review] FAPO: Flawed-Aware Policy Optimization for Efficient and Reliable Reasoning
A detailed review of 'FAPO: Flawed-Aware Policy Optimization for Efficient and Reliable Reasoning' by Xin Liu, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Reasoning #Policy Optimization #Reward Modeling #Flawed Reasoning #Reliable AI #Error Detection
October 30, 2025

[Paper Review] Repurposing Synthetic Data for Fine-grained Search Agent Supervision
A detailed review of 'Repurposing Synthetic Data for Fine-grained Search Agent Supervision', posted on arXiv.
#Review #Search Agents #LLM #Reinforcement Learning #Synthetic Data #Reward Shaping #Entity-aware Reward #Policy Optimization #Knowledge-intensive Tasks
October 29, 2025

[Paper Review] BAPO: Stabilizing Off-Policy Reinforcement Learning for LLMs via Balanced Policy Optimization with Adaptive Clipping
A detailed review of 'BAPO: Stabilizing Off-Policy Reinforcement Learning for LLMs via Balanced Policy Optimization with Adaptive Clipping' by Junrui Shen, posted on arXiv.
#Review #Off-Policy Reinforcement Learning #Large Language Models #Adaptive Clipping #Policy Optimization #PPO #Entropy Preservation #RL Stabilization
October 23, 2025

[Paper Review] Uniworld-V2: Reinforce Image Editing with Diffusion Negative-aware Finetuning and MLLM Implicit Feedback
A detailed review of 'Uniworld-V2: Reinforce Image Editing with Diffusion Negative-aware Finetuning and MLLM Implicit Feedback', posted on arXiv.
#Review #Image Editing #Diffusion Models #Reinforcement Learning #MLLM #Policy Optimization #Finetuning #Reward Modeling #Human Alignment
October 21, 2025

[Paper Review] Information Gain-based Policy Optimization: A Simple and Effective Approach for Multi-Turn LLM Agents
A detailed review of 'Information Gain-based Policy Optimization: A Simple and Effective Approach for Multi-Turn LLM Agents', posted on arXiv.
#Review #LLM Agents #Reinforcement Learning #Multi-Turn Interactions #Reward Sparsity #Information Gain #Policy Optimization #Ground-Truth Awareness #Sample Efficiency
October 17, 2025
[Paper Review] Agentic Entropy-Balanced Policy Optimization
A detailed review of 'Agentic Entropy-Balanced Policy Optimization', posted on arXiv.
#Review #Agentic Reinforcement Learning #Web Agents #Tool Learning #Entropy Balancing #Policy Optimization #Rollout Strategy #Large Language Models
October 17, 2025

[Paper Review] Attention Illuminates LLM Reasoning: The Preplan-and-Anchor Rhythm Enables Fine-Grained Policy Optimization
A detailed review of 'Attention Illuminates LLM Reasoning: The Preplan-and-Anchor Rhythm Enables Fine-Grained Policy Optimization', posted on arXiv.
#Review #LLM Reasoning #Attention Mechanisms #Reinforcement Learning #Credit Assignment #Policy Optimization #Interpretability #Preplan-and-Anchor Rhythm #Generative Models
October 16, 2025

[Paper Review] Memory as Action: Autonomous Context Curation for Long-Horizon Agentic Tasks
A detailed review of 'Memory as Action: Autonomous Context Curation for Long-Horizon Agentic Tasks' by Xueyuan Lin, posted on arXiv.
#Review #Long-Horizon Tasks #Agentic AI #Context Curation #Working Memory #Reinforcement Learning #Policy Optimization #Large Language Models #Memory-as-Action
October 15, 2025

[Paper Review] Boundary-Guided Policy Optimization for Memory-efficient RL of Diffusion Large Language Models
A detailed review of 'Boundary-Guided Policy Optimization for Memory-efficient RL of Diffusion Large Language Models', posted on arXiv.
#Review #Diffusion Large Language Models #Reinforcement Learning #Memory Efficiency #Monte Carlo Sampling #Log-Likelihood Approximation #Policy Optimization #ELBO
October 15, 2025

[Paper Review] MM-HELIX: Boosting Multimodal Long-Chain Reflective Reasoning with Holistic Platform and Adaptive Hybrid Policy Optimization
A detailed review of 'MM-HELIX: Boosting Multimodal Long-Chain Reflective Reasoning with Holistic Platform and Adaptive Hybrid Policy Optimization' by vanilla1116, posted on arXiv.
#Review #Multimodal LLMs #Reflective Reasoning #Long-Chain Reasoning #Benchmark #Policy Optimization #Data Generation #Reinforcement Learning #Backtracking
October 10, 2025

[Paper Review] GCPO: When Contrast Fails, Go Gold
A detailed review of 'GCPO: When Contrast Fails, Go Gold', posted on arXiv.
#Review #Reinforcement Learning #LLMs Reasoning #Policy Optimization #Contrastive Learning #Chain of Thought #Reference Answers #Math Reasoning #Gold-Standard Answer
October 10, 2025

[Paper Review] Multi-Agent Tool-Integrated Policy Optimization
A detailed review of 'Multi-Agent Tool-Integrated Policy Optimization' by Lidong Bing, posted on arXiv.
#Review #Multi-Agent RL #Tool-Integrated Planning #Large Language Models (LLMs) #Policy Optimization #Credit Assignment #Reinforcement Learning #MATPO
October 9, 2025

[Paper Review] ASPO: Asymmetric Importance Sampling Policy Optimization
A detailed review of 'ASPO: Asymmetric Importance Sampling Policy Optimization' by Xiu Li, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Importance Sampling #Policy Optimization #PPO-Clip #Outcome-Supervised RL #Token Weighting #GRPO
October 8, 2025
[Paper Review] LSPO: Length-aware Dynamic Sampling for Policy Optimization in LLM Reasoning
A detailed review of 'LSPO: Length-aware Dynamic Sampling for Policy Optimization in LLM Reasoning', posted on arXiv.
#Review #LLM Reasoning #RLVR #Dynamic Sampling #Policy Optimization #Response Length #Meta-RL #Overthinking
October 6, 2025

[Paper Review] A Practitioner's Guide to Multi-turn Agentic Reinforcement Learning
A detailed review of 'A Practitioner's Guide to Multi-turn Agentic Reinforcement Learning', posted on arXiv.
#Review #Multi-turn Reinforcement Learning #LLM Agents #Text-based Environments #Reward Shaping #Policy Optimization #Supervised Fine-tuning (SFT) #Generalization #Environment Complexity
October 6, 2025

[Paper Review] Test-Time Policy Adaptation for Enhanced Multi-Turn Interactions with LLMs
A detailed review of 'Test-Time Policy Adaptation for Enhanced Multi-Turn Interactions with LLMs' by Yao Shu, posted on arXiv.
#Review #Large Language Models #Multi-turn Interaction #Test-Time Adaptation #Reinforcement Learning from Human Feedback #Policy Optimization #Online Learning #Self-Correction
October 1, 2025

[Paper Review] More Thought, Less Accuracy? On the Dual Nature of Reasoning in Vision-Language Models
A detailed review of 'More Thought, Less Accuracy? On the Dual Nature of Reasoning in Vision-Language Models' by Fabian Waschkowski, posted on arXiv.
#Review #Vision-Language Models #Multimodal Reasoning #Reasoning #Visual Forgetting #Perceptual Grounding #Reinforcement Learning #Policy Optimization #Visual Anchors
October 1, 2025

[Paper Review] SPARK: Synergistic Policy And Reward Co-Evolving Framework
A detailed review of 'SPARK: Synergistic Policy And Reward Co-Evolving Framework', posted on arXiv.
#Review #Reinforcement Learning #LLMs #LVLMs #Reward Modeling #Policy Optimization #Self-Reflection #Verifiable Rewards #Co-evolution
September 29, 2025

[Paper Review] EPO: Entropy-regularized Policy Optimization for LLM Agents Reinforcement Learning
A detailed review of 'EPO: Entropy-regularized Policy Optimization for LLM Agents Reinforcement Learning' by Li Yu-Jhe, posted on arXiv.
#Review #LLM Agents #Reinforcement Learning #Entropy Regularization #Policy Optimization #Sparse Rewards #Multi-turn Environments #Exploration-Exploitation
September 29, 2025

[Paper Review] VCRL: Variance-based Curriculum Reinforcement Learning for Large Language Models
A detailed review of 'VCRL: Variance-based Curriculum Reinforcement Learning for Large Language Models' by Yuewei Zhang, posted on arXiv.
#Review #Reinforcement Learning #Curriculum Learning #Large Language Models #Mathematical Reasoning #Variance-based Sampling #Replay Learning #Policy Optimization
September 26, 2025

[Paper Review] Tree Search for LLM Agent Reinforcement Learning
A detailed review of 'Tree Search for LLM Agent Reinforcement Learning' by Xiangxiang Chu, posted on arXiv.
#Review #LLM Agents #Reinforcement Learning #Tree Search #Policy Optimization #Preference Learning #Sparse Rewards #Multi-turn Tasks
September 26, 2025
[Paper Review] CE-GPPO: Controlling Entropy via Gradient-Preserving Clipping Policy Optimization in Reinforcement Learning
A detailed review of 'CE-GPPO: Controlling Entropy via Gradient-Preserving Clipping Policy Optimization in Reinforcement Learning' by Wenping Hu, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Policy Optimization #PPO #Entropy Control #Gradient Clipping #Exploration-Exploitation
September 26, 2025

[Paper Review] MAPO: Mixed Advantage Policy Optimization
A detailed review of 'MAPO: Mixed Advantage Policy Optimization' by Xuankun Rong, posted on arXiv.
#Review #Reinforcement Learning #Foundation Models #Policy Optimization #Advantage Function #Trajectory Certainty #Multimodal Reasoning #GRPO
September 24, 2025

[Paper Review] From Uniform to Heterogeneous: Tailoring Policy Optimization to Every Token's Nature
A detailed review of 'From Uniform to Heterogeneous: Tailoring Policy Optimization to Every Token's Nature' by Bin Cui, posted on arXiv.
#Review #Reinforcement Learning #LLMs #Policy Optimization #Token Heterogeneity #Adaptive Sampling #Advantage Redistribution #Asymmetric Clipping #Entropy-based RL
September 23, 2025

[Paper Review] Inpainting-Guided Policy Optimization for Diffusion Large Language Models
A detailed review of 'Inpainting-Guided Policy Optimization for Diffusion Large Language Models' by Chenyu Wang, posted on arXiv.
#Review #Diffusion LLMs #Reinforcement Learning #Inpainting #Policy Optimization #Exploration #Mathematical Reasoning #GRPO
September 15, 2025

[Paper Review] A Survey of Reinforcement Learning for Large Reasoning Models
A detailed review of 'A Survey of Reinforcement Learning for Large Reasoning Models' by Runze Liu, posted on arXiv.
#Review #Reinforcement Learning #Large Reasoning Models #LLMs #Reward Design #Policy Optimization #Verifiable Rewards #Agentic AI #Multimodal AI
September 11, 2025

[Paper Review] Staying in the Sweet Spot: Responsive Reasoning Evolution via Capability-Adaptive Hint Scaffolding
A detailed review of 'Staying in the Sweet Spot: Responsive Reasoning Evolution via Capability-Adaptive Hint Scaffolding' by Yongcheng Zeng, posted on arXiv.
#Review #RLVR #LLM Reasoning #Adaptive Learning #Hint Scaffolding #Item Response Theory #Exploration Efficiency #Problem Difficulty #Policy Optimization
September 10, 2025

[Paper Review] Bootstrapping Task Spaces for Self-Improvement
A detailed review of 'Bootstrapping Task Spaces for Self-Improvement' by Yoram Bachrach, posted on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Self-Improvement #Autocurriculum #Task-Space Exploration #Inference-Time Iteration #Policy Optimization
September 8, 2025

[Paper Review] The Landscape of Agentic Reinforcement Learning for LLMs: A Survey
A detailed review of 'The Landscape of Agentic Reinforcement Learning for LLMs: A Survey' by Hejia Geng, posted on arXiv.
#Review #Agentic Reinforcement Learning #Large Language Models #LLM Agents #Sequential Decision Making #Policy Optimization #Tool Use #Dynamic Environments #Autonomous AI
September 3, 2025
[Paper Review] Implicit Actor Critic Coupling via a Supervised Learning Framework for RLVR
A detailed review of 'Implicit Actor Critic Coupling via a Supervised Learning Framework for RLVR' by Lu Wang, posted on arXiv.
#Review #RLVR #Large Language Models #Actor-Critic #Supervised Learning #Mathematical Reasoning #Policy Optimization #Cross-Entropy Loss
September 3, 2025

[Paper Review] DCPO: Dynamic Clipping Policy Optimization
A detailed review of 'DCPO: Dynamic Clipping Policy Optimization' by Kai Lu, posted on arXiv.
#Review #Reinforcement Learning #LLM #Policy Optimization #Dynamic Clipping #Advantage Standardization #RLVR #Reasoning
September 3, 2025

[Paper Review] PVPO: Pre-Estimated Value-Based Policy Optimization for Agentic Reasoning
A detailed review of 'PVPO: Pre-Estimated Value-Based Policy Optimization for Agentic Reasoning' by Yuewei Zhang, posted on arXiv.
#Review #Reinforcement Learning #Critic-Free RL #Agentic Reasoning #Policy Optimization #Large Language Models (LLMs) #Advantage Estimation #Group Sampling #Static Value Estimation
September 2, 2025

[Paper Review] TreePO: Bridging the Gap of Policy Optimization and Efficacy and Inference Efficiency with Heuristic Tree-based Modeling
A detailed review of 'TreePO: Bridging the Gap of Policy Optimization and Efficacy and Inference Efficiency with Heuristic Tree-based Modeling' by Zhoufutu Wen, posted on arXiv.
#Review #Reinforcement Learning #Policy Optimization #Large Language Models #Inference Efficiency #Tree Search #Segment-level Decoding #Advantage Estimation #Reasoning
August 27, 2025

[Paper Review] Breaking the Exploration Bottleneck: Rubric-Scaffolded Reinforcement Learning for General LLM Reasoning
A detailed review of 'Breaking the Exploration Bottleneck: Rubric-Scaffolded Reinforcement Learning for General LLM Reasoning' by Jiale Zhao, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Exploration Bottleneck #Instructional Scaffolding #Rubric-based Rewards #General Reasoning #RL with Verifiable Rewards #Policy Optimization
August 26, 2025

[Paper Review] Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models
A detailed review of 'Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models' by Qinghao Ye, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Exploration-Exploitation #Reward Design #Reasoning Tasks #Pass@k #Policy Optimization
August 15, 2025

[Paper Review] Cooper: Co-Optimizing Policy and Reward Models in Reinforcement Learning for Large Language Models
A detailed review of 'Cooper: Co-Optimizing Policy and Reward Models in Reinforcement Learning for Large Language Models' by Guiyang Hou, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #Reward Model #Policy Optimization #Reward Hacking #Hybrid Annotation #Mathematical Reasoning #Verifiable Rewards
August 14, 2025

[Paper Review] Reinforcement Learning in Vision: A Survey
A detailed review of 'Reinforcement Learning in Vision: A Survey' by Qingwei Meng, posted on arXiv.
#Review #Reinforcement Learning (RL) #Computer Vision (CV) #Multimodal Large Language Models (MLLMs) #Visual Generation #Vision-Language-Action (VLA) Models #Policy Optimization #Reward Modeling
August 12, 2025

[Paper Review] Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning
A detailed review of 'Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning' by Jiaheng Liu, posted on arXiv.
#Review #Reinforcement Learning #Large Language Models #LLM Reasoning #Policy Optimization #Normalization #Clipping #Loss Aggregation #Overlong Filtering
August 12, 2025

[Paper Review] Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization
A detailed review of 'Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization' by Guanting Dong, posted on arXiv.
#Review #Reasoning LLMs #Reinforcement Learning #PPO #Gradient Clipping #Supervised Fine-tuning #Math Reasoning #Code Generation #Policy Optimization
August 12, 2025

[Paper Review] InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy Optimization
A detailed review of 'InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy Optimization' by Pengxiang Li, posted on arXiv.
#Review #GUI Grounding #MLLMs #Reinforcement Learning #Policy Optimization #Exploration Strategy #Semantic Alignment #Adaptive Exploration Reward #Human-Computer Interaction
August 11, 2025