[Paper Review] THINKSAFE: Self-Generated Safety Alignment for Reasoning Models

A detailed review of 'THINKSAFE: Self-Generated Safety Alignment for Reasoning Models' by Minki Kang, published on arXiv.

#Review #Large Reasoning Models #Safety Alignment #Self-Distillation #Refusal Steering #Distributional Shift #Chain-of-Thought #Reinforcement Learning

February 1, 2026