[Paper Review] CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR
A detailed review of the paper 'CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR', posted on arXiv by Jiajun Song.
Tags: Review, Reinforcement Learning, Verifiable Rewards (RLVR), Contrastive Learning (CL), Policy Optimization, Large Language Models (LLMs), Generalization, Robustness, Reasoning Tasks
March 11, 2026
[Paper Review] Surgical Post-Training: Cutting Errors, Keeping Knowledge
A detailed review of the paper 'Surgical Post-Training: Cutting Errors, Keeping Knowledge', posted on arXiv.
Tags: Review, LLM Post-Training, Catastrophic Forgetting, Direct Preference Optimization (DPO), Reward-based Learning, Data Rectification, Binary Cross-Entropy, Reasoning Tasks, Knowledge Preservation
March 3, 2026
[Paper Review] Universal Reasoning Model
A detailed review of the paper 'Universal Reasoning Model', posted on arXiv.
Tags: Review, Universal Transformer, Recurrent Neural Networks, ARC-AGI, Reasoning Tasks, Nonlinearity, Convolutional Gating, Truncated Backpropagation, Model Efficiency
December 17, 2025
[Paper Review] From Imitation to Discrimination: Toward A Generalized Curriculum Advantage Mechanism Enhancing Cross-Domain Reasoning Tasks
A detailed review of the paper 'From Imitation to Discrimination: Toward A Generalized Curriculum Advantage Mechanism Enhancing Cross-Domain Reasoning Tasks', posted on arXiv by Yang Li.
Tags: Review, Reinforcement Learning, Large Language Models, Curriculum Learning, Advantage Function, Reasoning Tasks, Multimodal AI, Policy Optimization, Generalization
December 7, 2025
[Paper Review] Loopholing Discrete Diffusion: Deterministic Bypass of the Sampling Wall
A detailed review of the paper 'Loopholing Discrete Diffusion: Deterministic Bypass of the Sampling Wall', posted on arXiv by Sungjin Ahn.
Tags: Review, Discrete Diffusion Models, Sampling Wall, Loopholing, Self-Conditioning, Non-Autoregressive Generation, Text Generation, Language Modeling, Reasoning Tasks
October 24, 2025
[Paper Review] Don't Waste Mistakes: Leveraging Negative RL-Groups via Confidence Reweighting
A detailed review of the paper 'Don't Waste Mistakes: Leveraging Negative RL-Groups via Confidence Reweighting', posted on arXiv by Julia Kempe.
Tags: Review, Reinforcement Learning, Large Language Models, Reasoning Tasks, GRPO, Negative Samples, Reward Modeling, Confidence Reweighting, Mathematical Reasoning
October 13, 2025
[Paper Review] Self-Reflective Generation at Test Time
A detailed review of the paper 'Self-Reflective Generation at Test Time', posted on arXiv by Shuang Qiu.
Tags: Review, Large Language Models, Self-Reflection, Test-Time Optimization, Uncertainty Monitoring, Proactive Error Prevention, Reasoning Tasks, Chain-of-Thought
October 7, 2025
[Paper Review] In-Place Feedback: A New Paradigm for Guiding LLMs in Multi-Turn Reasoning
A detailed review of the paper 'In-Place Feedback: A New Paradigm for Guiding LLMs in Multi-Turn Reasoning', posted on arXiv by Chaehyeon Chung.
Tags: Review, LLM Feedback, Multi-turn Reasoning, In-place Editing, Token Efficiency, Error Correction, Human-AI Interaction, Reasoning Tasks
October 2, 2025
[Paper Review] Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models
A detailed review of the paper 'Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models', posted on arXiv by Ke Shen.
Tags: Review, Diffusion Language Models, Reinforcement Learning, Trajectory-aware RL, Value Model, Masked Diffusion Models, Large Language Models, Reasoning Tasks, Code Generation
September 9, 2025
[Paper Review] Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks
A detailed review of the paper 'Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks', posted on arXiv by Daisuke Nohara.
Tags: Review, Mixture-of-Experts (MoE), Sparsity, Scaling Laws, Reasoning Tasks, Memorization, Large Language Models, Generalization Gap, Top-k Routing
August 27, 2025
[Paper Review] MEENA (PersianMMMU): Multimodal-Multilingual Educational Exams for N-level Assessment
A detailed review of the paper 'MEENA (PersianMMMU): Multimodal-Multilingual Educational Exams for N-level Assessment', posted on arXiv by Doratossadat Dastgheib.
Tags: Review, Multimodal Language Models, Multilingual Benchmarking, Persian Language, Educational Assessment, Vision-Language Models, Cultural Nuance, Reasoning Tasks
August 26, 2025
[Paper Review] Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models
A detailed review of the paper 'Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models', posted on arXiv by Qinghao Ye.
Tags: Review, Reinforcement Learning, Large Language Models, Exploration-Exploitation, Reward Design, Reasoning Tasks, Pass@k, Policy Optimization
August 15, 2025
[Paper Review] Less Is More: Training-Free Sparse Attention with Global Locality for Efficient Reasoning
A detailed review of the paper 'Less Is More: Training-Free Sparse Attention with Global Locality for Efficient Reasoning', posted on arXiv by Baihong Yuan.
Tags: Review, Sparse Attention, LLMs, Reasoning Tasks, Efficiency, Training-Free, Global Locality, KV Cache Optimization
August 12, 2025