[Paper Review] Learning from Trials and Errors: Reflective Test-Time Planning for Embodied LLMs
A detailed review of the paper, posted to arXiv by Jiajun Wu.
Tags: Review, Embodied LLMs, Test-Time Adaptation, Reflection-in-Action, Reflection-on-Action, Robotics, Long-Horizon Planning, Policy Gradient, Self-Supervised Learning
February 24, 2026
[Paper Review] VESPO: Variational Sequence-Level Soft Policy Optimization for Stable Off-Policy LLM Training
A detailed review of the paper, posted on arXiv.
Tags: Review, Off-Policy RL, LLM Training, Importance Sampling, Variance Reduction, Variational Optimization, Policy Gradient, Sequence-Level Optimization, Reinforcement Learning
February 22, 2026
[Paper Review] LaViDa-R1: Advancing Reasoning for Unified Multimodal Diffusion Language Models
A detailed review of the paper, posted on arXiv.
Tags: Review, Multimodal Diffusion Models, Reasoning, Reinforcement Learning, Supervised Finetuning, Visual Question Answering, Image Editing, Object Grounding, Policy Gradient
February 16, 2026
[Paper Review] Reinforced Attention Learning
A detailed review of the paper, posted on arXiv.
Tags: Review, Reinforcement Learning, Multimodal LLMs, Attention Mechanisms, Policy Gradient, Knowledge Distillation, Visual Grounding, Post-training
February 5, 2026
[Paper Review] Diversity or Precision? A Deep Dive into Next Token Prediction
A detailed review of the paper, posted on arXiv.
Tags: Review, Next Token Prediction, Reinforcement Learning, Large Language Models, Reward Shaping, Pre-training Objective, Policy Gradient, Exploration-Exploitation
January 4, 2026
[Paper Review] Beyond Token-level Supervision: Unlocking the Potential of Decoding-based Regression via Reinforcement Learning
A detailed review of the paper, posted to arXiv by Jiacheng Chen.
Tags: Review, Decoding-based Regression, Reinforcement Learning, Numerical Prediction, Large Language Models, Policy Gradient, Tokenization, Sequence Generation
December 8, 2025
[Paper Review] Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
A detailed review of the paper, posted on arXiv.
Tags: Review, Reinforcement Learning (RL), Large Language Models (LLMs), Policy Gradient, REINFORCE, Mixture-of-Experts (MoE), Training Stability, Importance Sampling, Routing Replay, Off-policy Learning
December 1, 2025
[Paper Review] Learning to Route LLMs from Bandit Feedback: One Policy, Many Trade-offs
A detailed review of the paper, posted to arXiv by Franck Dernoncourt.
Tags: Review, LLM Routing, Contextual Bandits, Bandit Feedback, Multi-objective Optimization, Preference-tuning, Policy Gradient, Cost-efficiency
October 10, 2025
[Paper Review] Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training
A detailed review of the paper, posted on arXiv.
Tags: Review, Reinforcement Learning (RL), Large Language Models (LLMs), Adaptive Sampling, Policy Gradient, Reward Optimization, Signal Collapse, Variance Reduction
October 7, 2025
[Paper Review] Benefits and Pitfalls of Reinforcement Learning for Language Model Planning: A Theoretical Perspective
A detailed review of the paper, posted on arXiv.
Tags: Review, Reinforcement Learning, Large Language Models, Planning, Policy Gradient, Q-learning, Supervised Fine-Tuning, Diversity Collapse, Reward Hacking
October 1, 2025
[Paper Review] Single-stream Policy Optimization
A detailed review of the paper, posted to arXiv by Zihan Ding.
Tags: Review, Reinforcement Learning, LLM Optimization, Policy Gradient, Variance Reduction, Adaptive Sampling, Scalability, Agentic Systems, RLVR
September 17, 2025
[Paper Review] ΔL Normalization: Rethink Loss Aggregation in RLVR
A detailed review of the paper, posted to arXiv by Lili Qiu.
Tags: Review, Reinforcement Learning, LLMs, Gradient Variance, Loss Aggregation, Unbiased Estimator, RLVR, Policy Gradient, Normalization
September 10, 2025
[Paper Review] Towards a Unified View of Large Language Model Post-Training
A detailed review of the paper, posted to arXiv by Hongyi Liu.
Tags: Review, Large Language Models (LLMs), Post-Training, Reinforcement Learning (RL), Supervised Fine-Tuning (SFT), Policy Gradient, Unified Framework, Hybrid Algorithms, Bias-Variance Tradeoff
September 5, 2025
[Paper Review] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification
A detailed review of the paper, posted to arXiv by Xinyu Ye.
Tags: Review, Supervised Fine-Tuning (SFT), Reinforcement Learning (RL), Generalization, Reward Rectification, Dynamic Fine-Tuning (DFT), LLM, Policy Gradient, Mathematical Reasoning
August 8, 2025