[Paper Review] Believe Your Model: Distribution-Guided Confidence Calibration
A detailed review of 'Believe Your Model: Distribution-Guided Confidence Calibration', posted on arXiv by Mofei Song.
#Review #Confidence Calibration #Test-Time Scaling #Large Reasoning Models (LRMs) #Gaussian Mixture Models (GMM) #Hierarchical Voting #Self-Reflection #Distributional Priors
March 9, 2026
[Paper Review] GlimpRouter: Efficient Collaborative Inference by Glimpsing One Token of Thoughts
A detailed review of 'GlimpRouter: Efficient Collaborative Inference by Glimpsing One Token of Thoughts', posted on arXiv.
#Review #Collaborative Inference #Large Reasoning Models (LRMs) #Inference Latency #Step-wise Routing #Initial Token Entropy #Dynamic Routing #Computational Efficiency
January 12, 2026
[Paper Review] Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense
A detailed review of 'Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense', posted on arXiv.
#Review #Large Reasoning Models (LRMs) #Prompt Injection #Adversarial Attack #Reasoning Distraction #Chain-of-Thought #Robustness #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL)
October 21, 2025
[Paper Review] ReFIne: A Framework for Trustworthy Large Reasoning Models with Reliability, Faithfulness, and Interpretability
A detailed review of 'ReFIne: A Framework for Trustworthy Large Reasoning Models with Reliability, Faithfulness, and Interpretability', posted on arXiv by Tsui-Wei Weng.
#Review #Trustworthy AI #Large Reasoning Models (LRMs) #Interpretability #Faithfulness #Reliability #Chain-of-Thought (CoT) #Supervised Fine-tuning (SFT) #GRPO
October 15, 2025
[Paper Review] Mitigating Overthinking through Reasoning Shaping
A detailed review of 'Mitigating Overthinking through Reasoning Shaping', posted on arXiv by Wen Luo.
#Review #Large Reasoning Models (LRMs) #RLVR #Overthinking Mitigation #Reasoning Shaping #Segment-level Penalization #Computational Efficiency #Training Stability #Length-aware Weighting
October 13, 2025
[Paper Review] ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning
A detailed review of 'ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning', posted on arXiv by Yu Li.
#Review #Mathematical Reasoning #Large Reasoning Models (LRMs) #Difficulty Scaling #Data Augmentation #Supervised Fine-Tuning (SFT) #Problem Generation #Solution Distillation
September 26, 2025
[Paper Review] Beyond Solving Math Quiz: Evaluating the Ability of Large Reasoning Models to Ask for Information
A detailed review of 'Beyond Solving Math Quiz: Evaluating the Ability of Large Reasoning Models to Ask for Information', posted on arXiv by Xi Yang.
#Review #Large Reasoning Models (LRMs) #Information Seeking #Incomplete Problems #Mathematical Reasoning #Supervised Fine-tuning (SFT) #Overthinking #Hallucination #CRITIC-math
August 19, 2025