[Paper Review] A Subgoal-driven Framework for Improving Long-Horizon LLM Agents (March 22, 2026). A detailed review of this paper, posted on arXiv. Tags: Review, LLM Agents, Subgoals, Reinforcement Learning, Web Navigation, Long-Horizon Planning, Reward Shaping, Curriculum Learning.
[Paper Review] Tool-R0: Self-Evolving LLM Agents for Tool-Learning from Zero Data (March 2, 2026). A detailed review of this paper, posted on arXiv. Tags: Review, Large Language Models (LLMs), Self-Play Reinforcement Learning (RL), Tool-Learning, Zero-Data Learning, LLM Agents, Curriculum Learning, Reward Shaping, Co-evolution.
[Paper Review] The Art of Efficient Reasoning: Data, Reward, and Optimization (February 24, 2026). A detailed review of this paper, posted on arXiv. Tags: Review, Efficient Reasoning, Large Language Models, Reinforcement Learning, Reward Shaping, Chain-of-Thought, RL Optimization, Length Adaptation.
[Paper Review] Think Longer to Explore Deeper: Learn to Explore In-Context via Length-Incentivized Reinforcement Learning (February 12, 2026). A detailed review of this paper, posted on arXiv. Tags: Review, Large Language Models, In-Context Learning, Reinforcement Learning, Test-Time Scaling, Exploration-Exploitation, State Coverage, Reward Shaping, Chain-of-Thought.
[Paper Review] SSL: Sweet Spot Learning for Differentiated Guidance in Agentic Optimization (February 1, 2026). A detailed review of this paper, posted on arXiv by Bolin Ni. Tags: Review, Reinforcement Learning, Reward Shaping, Agent Optimization, GUI Automation, Complex Reasoning, Sample Efficiency, Tiered Rewards.
[Paper Review] Continual GUI Agents (February 1, 2026). A detailed review of this paper, posted on arXiv. Tags: Review, Continual Learning, GUI Agents, Reinforcement Learning, Grounding, Domain Adaptation, Resolution Adaptation, Reward Shaping, Human-Computer Interaction.
[Paper Review] Diversity or Precision? A Deep Dive into Next Token Prediction (January 4, 2026). A detailed review of this paper, posted on arXiv. Tags: Review, Next Token Prediction, Reinforcement Learning, Large Language Models, Reward Shaping, Pre-training Objective, Policy Gradient, Exploration-Exploitation.
[Paper Review] SmartSnap: Proactive Evidence Seeking for Self-Verifying Agents (December 29, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Agentic RL, Self-Verifying Agents, GUI Automation, Evidence Curation, LLM-as-a-Judge, Reward Shaping, AndroidLab.
[Paper Review] Can LLMs Guide Their Own Exploration? Gradient-Guided Reinforcement Learning for LLM Reasoning (December 17, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Reinforcement Learning, Large Language Models, Exploration Strategy, Gradient-Guided, Reward Shaping, Reasoning, PPO.
[Paper Review] MOA: Multi-Objective Alignment for Role-Playing Agents (December 11, 2025). A detailed review of this paper, posted on arXiv by Yongbin Li. Tags: Review, Role-Playing Agents, Multi-Objective Reinforcement Learning, LLM Alignment, Persona Consistency, Dialogue Generation, Reward Shaping, Off-Policy Guidance.
[Paper Review] SRPO: Self-Referential Policy Optimization for Vision-Language-Action Models (November 20, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Reinforcement Learning, Vision-Language-Action Models, Reward Shaping, World Models, Self-Referential Learning, Robotics, Trajectory Optimization.
[Paper Review] Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning (November 18, 2025). A detailed review of this paper, posted on arXiv by Yucong Luo. Tags: Review, LLM Agents, Reinforcement Learning, Markov Decision Process, Tool Use, Multi-turn Interaction, Policy Optimization, Reward Shaping, Agent Framework.
[Paper Review] MarsRL: Advancing Multi-Agent Reasoning System via Reinforcement Learning with Agentic Pipeline Parallelism (November 16, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Multi-Agent Systems, Reinforcement Learning, LLMs, Pipeline Parallelism, Reasoning, Reward Shaping, Agentic AI.
[Paper Review] Rubric-Based Benchmarking and Reinforcement Learning for Advancing LLM Instruction Following (November 13, 2025). A detailed review of this paper, posted on arXiv by Karishma Mandyam. Tags: Review, LLM, Instruction Following, Reinforcement Learning, Rubric-based Evaluation, Benchmarking, Reward Shaping, Rubric Verifier, AdvancedIF.
[Paper Review] OpenSIR: Open-Ended Self-Improving Reasoner (November 9, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Open-Ended Learning, Self-Play, Reinforcement Learning, Large Language Models, Mathematical Reasoning, Problem Generation, Curriculum Learning, Reward Shaping.
[Paper Review] Rank-GRPO: Training LLM-based Conversational Recommender Systems with Reinforcement Learning (November 9, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Conversational Recommender Systems, Large Language Models, Reinforcement Learning, Group Relative Policy Optimization, Rank-based Learning, Supervised Fine-tuning, Reward Shaping.
[Paper Review] Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning (October 31, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Supervised Reinforcement Learning, LLMs, Multi-step Reasoning, Reward Shaping, Expert Trajectories, Math Reasoning, Agentic AI.
[Paper Review] Repurposing Synthetic Data for Fine-grained Search Agent Supervision (October 29, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Search Agents, LLM, Reinforcement Learning, Synthetic Data, Reward Shaping, Entity-aware Reward, Policy Optimization, Knowledge-intensive Tasks.
[Paper Review] Every Question Has Its Own Value: Reinforcement Learning with Explicit Human Values (October 24, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Reinforcement Learning, LLM Alignment, Human Values, Reward Shaping, Value-Weighted Reward, Termination Policy, RLVR.
[Paper Review] Fathom-DeepResearch: Unlocking Long Horizon Information Retrieval and Synthesis for SLMs (October 8, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, DeepResearch Agents, Tool-integrated Reasoning, Reinforcement Learning, Information Retrieval, Information Synthesis, Multi-agent Self-play, Reward Shaping, LLM.
[Paper Review] A Practitioner's Guide to Multi-turn Agentic Reinforcement Learning (October 6, 2025). A detailed review of this paper, posted on arXiv. Tags: Review, Multi-turn Reinforcement Learning, LLM Agents, Text-based Environments, Reward Shaping, Policy Optimization, Supervised Fine-tuning (SFT), Generalization, Environment Complexity.
[Paper Review] TempSamp-R1: Effective Temporal Sampling with Reinforcement Fine-Tuning for Video LLMs (September 23, 2025). A detailed review of this paper, posted on arXiv by Shaohui Jiao. Tags: Review, Video LLMs, Temporal Grounding, Reinforcement Learning, Off-policy Learning, Reward Shaping, Chain-of-Thought, Multimodal LLMs.