[Paper Review] Does Your Reasoning Model Implicitly Know When to Stop Thinking?
A detailed review of the paper 'Does Your Reasoning Model Implicitly Know When to Stop Thinking?', posted on arXiv.
#Review#Large Reasoning Models#Chain of Thought#Efficient Inference#Self-Aware Sampling#Reinforcement Learning#Reasoning Termination#Mathematical Benchmarks
February 22, 2026

[Paper Review] THINKSAFE: Self-Generated Safety Alignment for Reasoning Models
A detailed review of the paper 'THINKSAFE: Self-Generated Safety Alignment for Reasoning Models', posted on arXiv by Minki Kang.
#Review#Large Reasoning Models#Safety Alignment#Self-Distillation#Refusal Steering#Distributional Shift#Chain-of-Thought#Reinforcement Learning
February 1, 2026

[Paper Review] User-Oriented Multi-Turn Dialogue Generation with Tool Use at scale
A detailed review of the paper 'User-Oriented Multi-Turn Dialogue Generation with Tool Use at scale', posted on arXiv.
#Review#Multi-Turn Dialogue Generation#Tool Use#Autonomous Agents#Large Reasoning Models#User Simulation#Synthetic Data Generation#SQL-based Tools#Agentic Benchmarks
January 13, 2026

[Paper Review] When Reasoning Meets Its Laws
A detailed review of the paper 'When Reasoning Meets Its Laws', posted on arXiv by Liu Ziyin.
#Review#Large Reasoning Models#Reasoning Behaviors#Compute Law#Accuracy Law#Monotonicity#Compositionality#Fine-tuning#LORE-BENCH
December 21, 2025

[Paper Review] MR-Align: Meta-Reasoning Informed Factuality Alignment for Large Reasoning Models
A detailed review of the paper 'MR-Align: Meta-Reasoning Informed Factuality Alignment for Large Reasoning Models', posted on arXiv by Bin Yu.
#Review#Large Reasoning Models#Factuality Alignment#Meta-Reasoning#Kahneman-Tversky Optimization#Chain-of-Thought#Hallucination#Process-Level Alignment
November 9, 2025

[Paper Review] Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost
A detailed review of the paper 'Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost', posted on arXiv by Min Yang.
#Review#Machine Translation Evaluation#Large Reasoning Models#LLM-as-a-judge#MQM#Fine-tuning#Thinking Calibration#Computational Efficiency#Meta-evaluation
October 27, 2025

[Paper Review] R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?
A detailed review of the paper 'R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?', posted on arXiv.
#Review#Long-Horizon Reasoning#Query Composition#Large Reasoning Models#Reinforcement Learning#Benchmark Evaluation#Thinking Budget#Performance Degradation#Chain-of-Thought
October 13, 2025

[Paper Review] CALM Before the STORM: Unlocking Native Reasoning for Optimization Modeling
A detailed review of the paper 'CALM Before the STORM: Unlocking Native Reasoning for Optimization Modeling', posted on arXiv by Chengpeng Li.
#Review#Large Reasoning Models#Optimization Modeling#Reflective Generation#Supervised Fine-tuning#Reinforcement Learning#Human-in-the-Loop#Code Generation#Domain Adaptation
October 9, 2025

[Paper Review] Refusal Falls off a Cliff: How Safety Alignment Fails in Reasoning?
A detailed review of the paper 'Refusal Falls off a Cliff: How Safety Alignment Fails in Reasoning?', posted on arXiv.
#Review#Safety Alignment#Large Reasoning Models#Mechanistic Interpretability#Refusal Cliff#Attention Heads#Data Selection#Linear Probing
October 8, 2025

[Paper Review] Understanding the Thinking Process of Reasoning Models: A Perspective from Schoenfeld's Episode Theory
A detailed review of the paper 'Understanding the Thinking Process of Reasoning Models: A Perspective from Schoenfeld's Episode Theory', posted on arXiv by Yanbin Fu.
#Review#Large Reasoning Models#Cognitive Science#Schoenfeld's Episode Theory#Math Problem Solving#Chain-of-Thought#Behavioral Analysis#Dataset Annotation
September 26, 2025

[Paper Review] What Characterizes Effective Reasoning? Revisiting Length, Review, and Structure of CoT
A detailed review of the paper 'What Characterizes Effective Reasoning? Revisiting Length, Review, and Structure of CoT', posted on arXiv by Anthony Hartshorn.
#Review#Chain-of-Thought#Reasoning Effectiveness#Large Reasoning Models#Failed-Step Fraction#Test-time Scaling#Reasoning Graph#Model Evaluation
September 24, 2025

[Paper Review] FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions
A detailed review of the paper 'FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions', posted on arXiv by tengdai722.
#Review#Large Reasoning Models#LLM Evaluation#Multimodal AI#Reasoning Behaviors#Hallucination#Contamination-Free#AI Safety#Instruction Following
September 23, 2025

[Paper Review] A Survey of Reinforcement Learning for Large Reasoning Models
A detailed review of the paper 'A Survey of Reinforcement Learning for Large Reasoning Models', posted on arXiv by Runze Liu.
#Review#Reinforcement Learning#Large Reasoning Models#LLMs#Reward Design#Policy Optimization#Verifiable Rewards#Agentic AI#Multimodal AI
September 11, 2025

[Paper Review] HierSearch: A Hierarchical Enterprise Deep Search Framework Integrating Local and Web Searches
A detailed review of the paper 'HierSearch: A Hierarchical Enterprise Deep Search Framework Integrating Local and Web Searches', posted on arXiv by Qiang Ju.
#Review#Hierarchical Reinforcement Learning#Deep Search#Multi-source RAG#Agentic AI#Knowledge Integration#Enterprise Search#Large Reasoning Models
August 13, 2025

[Paper Review] Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal
A detailed review of the paper 'Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal', posted on arXiv by Chengcheng Wan.
#Review#Code Reasoning#CoT Compression#LLMs#Efficiency#Surprisal#Pruning#Fine-tuning#Large Reasoning Models
August 11, 2025

[Paper Review] Don't Overthink It: A Survey of Efficient R1-style Large Reasoning Models
A detailed review of the paper 'Don't Overthink It: A Survey of Efficient R1-style Large Reasoning Models', posted on arXiv by Fangzhou Yao.
#Review#Large Reasoning Models#Efficient Reasoning#Chain-of-Thought#Model Optimization#Model Collaboration#Overthinking Problem#LLM Efficiency
August 8, 2025