[Paper Review] Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation
A detailed review of the paper 'Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation', posted on arXiv.
#Review #On-Policy Distillation #Reward Extrapolation #Large Language Models (LLMs) #Knowledge Distillation #Reinforcement Learning #Math Reasoning #Code Generation #Multi-teacher Distillation
February 12, 2026
[Paper Review] Online Causal Kalman Filtering for Stable and Effective Policy Optimization
A detailed review of the paper 'Online Causal Kalman Filtering for Stable and Effective Policy Optimization', posted on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Policy Optimization #Importance Sampling (IS) Ratio #Kalman Filter #Variance Reduction #Math Reasoning
February 11, 2026
[Paper Review] DiRL: An Efficient Post-Training Framework for Diffusion Language Models
A detailed review of the paper 'DiRL: An Efficient Post-Training Framework for Diffusion Language Models', posted on arXiv.
#Review #Diffusion Language Models #Post-Training #Reinforcement Learning #GRPO #FlexAttention #LMDeploy #Math Reasoning #SFT
December 29, 2025
[Paper Review] Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning
A detailed review of the paper 'Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning', posted on arXiv.
#Review #Supervised Reinforcement Learning #LLMs #Multi-step Reasoning #Reward Shaping #Expert Trajectories #Math Reasoning #Agentic AI
October 31, 2025
[Paper Review] Hard2Verify: A Step-Level Verification Benchmark for Open-Ended Frontier Math
A detailed review of the paper 'Hard2Verify: A Step-Level Verification Benchmark for Open-Ended Frontier Math', posted on arXiv.
#Review #LLM Verification #Math Reasoning #Step-Level Verification #Benchmark #Open-Ended Problems #Process Reward Models #Generative Critics
October 16, 2025
[Paper Review] GCPO: When Contrast Fails, Go Gold
A detailed review of the paper 'GCPO: When Contrast Fails, Go Gold', posted on arXiv.
#Review #Reinforcement Learning #LLMs Reasoning #Policy Optimization #Contrastive Learning #Chain of Thought #Reference Answers #Math Reasoning #Gold-Standard Answer
October 10, 2025
[Paper Review] Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards
A detailed review of the paper 'Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards', posted on arXiv by Binxing Jiao.
#Review #Reinforcement Learning #LLM Reasoning #Policy Valuation #Markov Decision Process #Diversity #Math Reasoning #Verifiable Rewards
September 30, 2025
[Paper Review] No Prompt Left Behind: Exploiting Zero-Variance Prompts in LLM Reinforcement Learning via Entropy-Guided Advantage Shaping
A detailed review of the paper 'No Prompt Left Behind: Exploiting Zero-Variance Prompts in LLM Reinforcement Learning via Entropy-Guided Advantage Shaping', posted on arXiv.
#Review #LLM Reinforcement Learning #Zero-Variance Prompts #Advantage Shaping #Entropy-Guided #Math Reasoning #RLVR #Group Relative Policy Optimization
September 29, 2025
[Paper Review] rStar2-Agent: Agentic Reasoning Technical Report
A detailed review of the paper 'rStar2-Agent: Agentic Reasoning Technical Report', posted on arXiv by Weijiang Xu.
#Review #Agentic Reinforcement Learning #Math Reasoning #Code Interpreter #Tool Use #GRPO-RoC #LLM Training Efficiency #Self-Reflection
August 29, 2025
[Paper Review] MathReal: We Keep It Real! A Real Scene Benchmark for Evaluating Math Reasoning in Multimodal Large Language Models
A detailed review of the paper 'MathReal: We Keep It Real! A Real Scene Benchmark for Evaluating Math Reasoning in Multimodal Large Language Models', posted on arXiv by Zhihan Zhou.
#Review #Multimodal Large Language Models (MLLMs) #Math Reasoning #Real-World Benchmark #Visual Perception #Robustness #K-12 Education #Dataset
August 14, 2025
[Paper Review] Aryabhata: An exam-focused language model for JEE Math
A detailed review of the paper 'Aryabhata: An exam-focused language model for JEE Math', posted on arXiv by Sandeep Varma.
#Review #Language Model #Math Reasoning #JEE #Supervised Fine-Tuning #Reinforcement Learning #Model Merging #Chain-of-Thought #Curriculum Learning
August 13, 2025
[Paper Review] Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization
A detailed review of the paper 'Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization', posted on arXiv by Guanting Dong.
#Review #Reasoning LLMs #Reinforcement Learning #PPO #Gradient Clipping #Supervised Fine-tuning #Math Reasoning #Code Generation #Policy Optimization
August 12, 2025
[Paper Review] RL-PLUS: Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization
A detailed review of the paper 'RL-PLUS: Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization', posted on arXiv by Kechi Zhang.
#Review #Large Language Models #Reinforcement Learning #Capability Collapse #Hybrid Policy Optimization #Multiple Importance Sampling #Exploration #Math Reasoning #Out-of-Distribution
August 7, 2025