[Paper Review] Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards
A detailed review of the paper 'Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards', posted on arXiv.
#Review #Reinforcement Learning #LLM Calibration #Over-confidence #Decoupled Optimization #Verifiable Rewards #Policy Optimization #Expected Calibration Error
March 10, 2026
[Paper Review] QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation
A detailed review of the paper 'QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation', posted on arXiv by Lu Cheng.
#Review #Dynamic RAG #Hallucination Detection #Corpus Statistics #Uncertainty Quantification #Pre-training Data #LLM Calibration #Infini-gram #Multi-hop QA
December 22, 2025
[Paper Review] CritiCal: Can Critique Help LLM Uncertainty or Confidence Calibration?
A detailed review of the paper 'CritiCal: Can Critique Help LLM Uncertainty or Confidence Calibration?', posted on arXiv by Baixuan Xu.
#Review #LLM Calibration #Confidence Calibration #Uncertainty Estimation #Critique Learning #Supervised Fine-Tuning #Natural Language Processing #Self-Critique
November 9, 2025