[Paper Review] On the Direction of RLVR Updates for LLM Reasoning: Identification and Exploitation
A detailed review of the paper 'On the Direction of RLVR Updates for LLM Reasoning: Identification and Exploitation', published on arXiv.
#Review #RLVR #LLM Reasoning #Log Probability Difference #Directional Updates #Test-Time Extrapolation #Advantage Reweighting #Sparse Updates
March 23, 2026

[Paper Review] CHIMERA: Compact Synthetic Data for Generalizable LLM Reasoning
A detailed review of the paper 'CHIMERA: Compact Synthetic Data for Generalizable LLM Reasoning', published on arXiv.
#Review #Synthetic Data #LLM Reasoning #Chain-of-Thought #Data Efficiency #Post-training #Generalization #Quality Control #Domain Coverage
March 2, 2026

[Paper Review] Unveiling Implicit Advantage Symmetry: Why GRPO Struggles with Exploration and Difficulty Adaptation
A detailed review of the paper 'Unveiling Implicit Advantage Symmetry: Why GRPO Struggles with Exploration and Difficulty Adaptation', published on arXiv.
#Review #Reinforcement Learning #LLM Reasoning #Group Relative Policy Optimization #Advantage Estimation #Exploration-Exploitation #Curriculum Learning #Multi-modal LLMs
February 12, 2026

[Paper Review] Fundamental Reasoning Paradigms Induce Out-of-Domain Generalization in Language Models
A detailed review of the paper 'Fundamental Reasoning Paradigms Induce Out-of-Domain Generalization in Language Models' by Maria Liakata, published on arXiv.
#Review #LLM Reasoning #Deduction #Induction #Abduction #Out-of-Domain Generalization #Symbolic Reasoning #Fine-tuning #Upcycling
February 9, 2026

[Paper Review] Back to Basics: Revisiting Exploration in Reinforcement Learning for LLM Reasoning via Generative Probabilities
A detailed review of the paper 'Back to Basics: Revisiting Exploration in Reinforcement Learning for LLM Reasoning via Generative Probabilities' by Ivan Oseledets, published on arXiv.
#Review #Reinforcement Learning #LLM Reasoning #Exploration-Exploitation #Group Relative Policy Optimization #Entropy Collapse #Generative Models #Confidence-Aware Rewards
February 8, 2026

[Paper Review] Parallel-Probe: Towards Efficient Parallel Thinking via 2D Probing
A detailed review of the paper 'Parallel-Probe: Towards Efficient Parallel Thinking via 2D Probing', published on arXiv.
#Review #LLM Reasoning #Parallel Thinking #Efficiency Optimization #2D Probing #Consensus-based Early Stopping #Deviation-based Branch Pruning #Test-Time Scaling
February 3, 2026
[Paper Review] Less Noise, More Voice: Reinforcement Learning for Reasoning via Instruction Purification
A detailed review of the paper 'Less Noise, More Voice: Reinforcement Learning for Reasoning via Instruction Purification', published on arXiv.
#Review #Reinforcement Learning #LLM Reasoning #Instruction Purification #Interference Tokens #Sample Efficiency #Policy Optimization #Verifiable Rewards
February 3, 2026

[Paper Review] Pushing the Boundaries of Natural Reasoning: Interleaved Bonus from Formal-Logic Verification
A detailed review of the paper 'Pushing the Boundaries of Natural Reasoning: Interleaved Bonus from Formal-Logic Verification', published on arXiv.
#Review #LLM Reasoning #Formal Verification #Neuro-Symbolic AI #Reinforcement Learning #Supervised Fine-tuning #Logic Consistency #Mathematical Reasoning
February 1, 2026

[Paper Review] Scalable Power Sampling: Unlocking Efficient, Training-Free Reasoning for LLMs via Distribution Sharpening
A detailed review of the paper 'Scalable Power Sampling: Unlocking Efficient, Training-Free Reasoning for LLMs via Distribution Sharpening' by Haitham Bou Ammar, published on arXiv.
#Review #LLM Reasoning #Distribution Sharpening #Power Sampling #Training-Free #Monte Carlo Estimation #Jackknife Correction #Autoregressive Generation #Inference Efficiency
January 29, 2026

[Paper Review] Teaching Models to Teach Themselves: Reasoning at the Edge of Learnability
A detailed review of the paper 'Teaching Models to Teach Themselves: Reasoning at the Edge of Learnability', published on arXiv.
#Review #Meta-RL #Curriculum Learning #Self-Play #LLM Reasoning #Sparse Rewards #Question Generation #Bilevel Optimization
January 26, 2026

[Paper Review] ExpSeek: Self-Triggered Experience Seeking for Web Agents
A detailed review of the paper 'ExpSeek: Self-Triggered Experience Seeking for Web Agents', published on arXiv.
#Review #Web Agents #Experience Seeking #Self-Triggered #LLM Reasoning #Entropy #Proactive Guidance #Reinforcement Learning #Foundation Models
January 14, 2026

[Paper Review] EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs
A detailed review of the paper 'EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs', published on arXiv.
#Review #LLM Reasoning #Model Calibration #Epistemic Uncertainty #Self-Training #Supervised Fine-tuning #Confidence-Informed Self-Consistency #Model Collapse
January 13, 2026
[Paper Review] Fantastic Reasoning Behaviors and Where to Find Them: Unsupervised Discovery of the Reasoning Process
A detailed review of the paper 'Fantastic Reasoning Behaviors and Where to Find Them: Unsupervised Discovery of the Reasoning Process', published on arXiv.
#Review #LLM Reasoning #Mechanistic Interpretability #Sparse Autoencoders (SAEs) #Activation Steering #Unsupervised Learning #Reasoning Behaviors #Chain-of-Thought #Feature Disentanglement
December 31, 2025

[Paper Review] Schoenfeld's Anatomy of Mathematical Reasoning by Language Models
A detailed review of the paper 'Schoenfeld's Anatomy of Mathematical Reasoning by Language Models' by Tianyi Zhou, published on arXiv.
#Review #LLM Reasoning #Cognitive Science #Schoenfeld's Episode Theory #Mathematical Problem Solving #Reasoning Dynamics #Interpretable AI #Behavioral Analysis
December 25, 2025

[Paper Review] SCALE: Selective Resource Allocation for Overcoming Performance Bottlenecks in Mathematical Test-time Scaling
A detailed review of the paper 'SCALE: Selective Resource Allocation for Overcoming Performance Bottlenecks in Mathematical Test-time Scaling', published on arXiv.
#Review #LLM Reasoning #Test-time Scaling #Resource Allocation #Dual-process Theory #Mathematical Reasoning #Adaptive Computation #Performance Optimization
December 1, 2025

[Paper Review] Rectifying LLM Thought from Lens of Optimization
A detailed review of the paper 'Rectifying LLM Thought from Lens of Optimization' by Kai Chen, published on arXiv.
#Review #LLM Reasoning #Chain-of-Thought #RLVR #Optimization Framework #Process-level Reward #Gradient Descent #Reasoning Efficiency #Suboptimal Reasoning
December 1, 2025

[Paper Review] Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information
A detailed review of the paper 'Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information' by Kristian Kersting, published on arXiv.
#Review #LLM Reasoning #Chain-of-Thought #Prompt Engineering #Efficiency #Structured Input #Information Extraction #Cognitive Psychology #Token Reduction
November 30, 2025

[Paper Review] The Sequential Edge: Inverse-Entropy Voting Beats Parallel Self-Consistency at Matched Compute
A detailed review of the paper 'The Sequential Edge: Inverse-Entropy Voting Beats Parallel Self-Consistency at Matched Compute', published on arXiv.
#Review #Sequential Reasoning #Parallel Self-Consistency #Inverse-Entropy Voting #LLM Reasoning #Test-Time Scaling #Inference Optimization #Iterative Refinement #Error Correction
November 9, 2025
[Paper Review] RiddleBench: A New Generative Reasoning Benchmark for LLMs
A detailed review of the paper 'RiddleBench: A New Generative Reasoning Benchmark for LLMs', published on arXiv.
#Review #LLM Reasoning #Generative AI #Benchmark #Logical Deduction #Spatial Reasoning #Constraint Satisfaction #Hallucination Cascade #Self-Correction
November 9, 2025

[Paper Review] Attention Illuminates LLM Reasoning: The Preplan-and-Anchor Rhythm Enables Fine-Grained Policy Optimization
A detailed review of the paper 'Attention Illuminates LLM Reasoning: The Preplan-and-Anchor Rhythm Enables Fine-Grained Policy Optimization', published on arXiv.
#Review #LLM Reasoning #Attention Mechanisms #Reinforcement Learning #Credit Assignment #Policy Optimization #Interpretability #Preplan-and-Anchor Rhythm #Generative Models
October 16, 2025

[Paper Review] Meta-Awareness Enhances Reasoning Models: Self-Alignment Reinforcement Learning
A detailed review of the paper 'Meta-Awareness Enhances Reasoning Models: Self-Alignment Reinforcement Learning', published on arXiv.
#Review #Meta-Awareness #Reinforcement Learning #Self-Alignment #LLM Reasoning #Training Efficiency #Generalization #Predictive Gating
October 10, 2025

[Paper Review] DeepPrune: Parallel Scaling without Inter-trace Redundancy
A detailed review of the paper 'DeepPrune: Parallel Scaling without Inter-trace Redundancy', published on arXiv.
#Review #Parallel Scaling #Chain-of-Thought #LLM Reasoning #Dynamic Pruning #Inter-trace Redundancy #Judge Model #Resource Efficiency #Answer Diversity
October 10, 2025

[Paper Review] Revisiting the Uniform Information Density Hypothesis in LLM Reasoning Traces
A detailed review of the paper 'Revisiting the Uniform Information Density Hypothesis in LLM Reasoning Traces', published on arXiv.
#Review #LLM Reasoning #Chain-of-Thought #Uniform Information Density #Information Theory #Reasoning Trace Analysis #Entropy #Mathematical Reasoning #Model Evaluation
October 9, 2025

[Paper Review] MixReasoning: Switching Modes to Think
A detailed review of the paper 'MixReasoning: Switching Modes to Think', published on arXiv.
#Review #LLM Reasoning #Chain-of-Thought #Efficiency #LoRA #Adaptive Reasoning #Token Uncertainty #Dynamic Switching #Reasoning Compression
October 8, 2025
[Paper Review] SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs
A detailed review of the paper 'SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs', published on arXiv.
#Review #LLM Reasoning #Latent Thinking #Explicit Thinking #Training-Free #Token Efficiency #Accuracy Improvement #Dynamic Switching #Entropy-based Control
October 7, 2025

[Paper Review] MITS: Enhanced Tree Search Reasoning for LLMs via Pointwise Mutual Information
A detailed review of the paper 'MITS: Enhanced Tree Search Reasoning for LLMs via Pointwise Mutual Information', published on arXiv.
#Review #LLM Reasoning #Tree Search #Pointwise Mutual Information (PMI) #Dynamic Sampling #Beam Search #Weighted Voting #Information Theory #Computational Efficiency
October 7, 2025

[Paper Review] LSPO: Length-aware Dynamic Sampling for Policy Optimization in LLM Reasoning
A detailed review of the paper 'LSPO: Length-aware Dynamic Sampling for Policy Optimization in LLM Reasoning', published on arXiv.
#Review #LLM Reasoning #RLVR #Dynamic Sampling #Policy Optimization #Response Length #Meta-RL #Overthinking
October 6, 2025

[Paper Review] Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards
A detailed review of the paper 'Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards' by Binxing Jiao, published on arXiv.
#Review #Reinforcement Learning #LLM Reasoning #Policy Valuation #Markov Decision Process #Diversity #Math Reasoning #Verifiable Rewards
September 30, 2025

[Paper Review] Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval
A detailed review of the paper 'Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval', published on arXiv.
#Review #RAG #LLM Reasoning #Knowledge Graphs #Multi-Agent Systems #Context Retrieval #Heterogeneous Graphs #Adaptive Learning #Dual-Evolution
September 29, 2025

[Paper Review] Quantile Advantage Estimation for Entropy-Safe Reasoning
A detailed review of the paper 'Quantile Advantage Estimation for Entropy-Safe Reasoning' by An Zhang, published on arXiv.
#Review #Reinforcement Learning #LLM Reasoning #Entropy Control #Advantage Estimation #Quantile Baseline #Exploration-Exploitation #RLVR
September 29, 2025
[Paper Review] Reasoning Core: A Scalable RL Environment for LLM Symbolic Reasoning
A detailed review of the paper 'Reasoning Core: A Scalable RL Environment for LLM Symbolic Reasoning' by Damien Sileo, published on arXiv.
#Review #LLM Reasoning #Symbolic AI #Reinforcement Learning #Procedural Content Generation #Verifiable Rewards #Adaptive Curricula #First-Order Logic #PDDL Planning
September 23, 2025

[Paper Review] Staying in the Sweet Spot: Responsive Reasoning Evolution via Capability-Adaptive Hint Scaffolding
A detailed review of the paper 'Staying in the Sweet Spot: Responsive Reasoning Evolution via Capability-Adaptive Hint Scaffolding' by Yongcheng Zeng, published on arXiv.
#Review #RLVR #LLM Reasoning #Adaptive Learning #Hint Scaffolding #Item Response Theory #Exploration Efficiency #Problem Difficulty #Policy Optimization
September 10, 2025

[Paper Review] StepWiser: Stepwise Generative Judges for Wiser Reasoning
A detailed review of the paper 'StepWiser: Stepwise Generative Judges for Wiser Reasoning' by Olga Golovneva, published on arXiv.
#Review #LLM Reasoning #Process Reward Models #Reinforcement Learning #Generative Judges #Stepwise Feedback #Chain-of-Thought #Meta-Reasoning
August 28, 2025

[Paper Review] CARFT: Boosting LLM Reasoning via Contrastive Learning with Annotated Chain-of-Thought-based Reinforced Fine-Tuning
A detailed review of the paper 'CARFT: Boosting LLM Reasoning via Contrastive Learning with Annotated Chain-of-Thought-based Reinforced Fine-Tuning' by Yulun Zhang, published on arXiv.
#Review #LLM Reasoning #Contrastive Learning #Reinforcement Learning #Fine-tuning #Chain-of-Thought (CoT) #Annotated Data #Model Stability
August 25, 2025

[Paper Review] Deep Think with Confidence
A detailed review of the paper 'Deep Think with Confidence' by Xuewei Wang, published on arXiv.
#Review #LLM Reasoning #Confidence Filtering #Self-Consistency #Test-Time Optimization #Computational Efficiency #Adaptive Sampling #Early Stopping #Majority Voting
August 22, 2025

[Paper Review] Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning
A detailed review of the paper 'Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning' by Jiaheng Liu, published on arXiv.
#Review #Reinforcement Learning #Large Language Models #LLM Reasoning #Policy Optimization #Normalization #Clipping #Loss Aggregation #Overlong Filtering
August 12, 2025