[Paper Review] χ_{0}: Resource-Aware Robust Manipulation via Taming Distributional Inconsistencies
A detailed review of the paper 'χ_{0}: Resource-Aware Robust Manipulation via Taming Distributional Inconsistencies', posted on arXiv.
#Review #Robotic Manipulation #Distributional Shift #Imitation Learning #Model Arithmetic #Stage Advantage #Train-Deploy Alignment #Resource-Efficient AI #Long-Horizon Tasks
February 12, 2026
[Paper Review] THINKSAFE: Self-Generated Safety Alignment for Reasoning Models
A detailed review of the paper 'THINKSAFE: Self-Generated Safety Alignment for Reasoning Models' by Minki Kang, posted on arXiv.
#Review #Large Reasoning Models #Safety Alignment #Self-Distillation #Refusal Steering #Distributional Shift #Chain-of-Thought #Reinforcement Learning
February 1, 2026
[Paper Review] Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning
A detailed review of the paper 'Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning' by Zijia Lin, posted on arXiv.
#Review #Reinforcement Learning #Policy Optimization #Trust Region #Entropy Clipping #Large Language Models #Training Stability #Distributional Shift
December 7, 2025