[Paper Review] EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs
A detailed review of the paper "EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs," posted on arXiv.
#Review #LLM Reasoning #Model Calibration #Epistemic Uncertainty #Self-Training #Supervised Fine-tuning #Confidence-Informed Self-Consistency #Model Collapse
January 13, 2026
[Paper Review] RefusalBench: Generative Evaluation of Selective Refusal in Grounded Language Models
A detailed review of the paper "RefusalBench: Generative Evaluation of Selective Refusal in Grounded Language Models," posted on arXiv.
#Review #RAG Systems #Selective Refusal #Generative Evaluation #Linguistic Perturbations #LLM Evaluation #Informational Uncertainty #Model Calibration #AI Safety
October 17, 2025
[Paper Review] How Confident are Video Models? Empowering Video Models to Express their Uncertainty
A detailed review of the paper "How Confident are Video Models? Empowering Video Models to Express their Uncertainty," posted on arXiv by Anirudha Majumdar.
#Review #Video Generation #Uncertainty Quantification #Aleatoric Uncertainty #Epistemic Uncertainty #Model Calibration #Text-to-Video #Generative AI #VMF Distribution
October 6, 2025