[Paper Review] SAHOO: Safeguarded Alignment for High-Order Optimization Objectives in Recursive Self-Improvement
A detailed review of the paper 'SAHOO: Safeguarded Alignment for High-Order Optimization Objectives in Recursive Self-Improvement', posted on arXiv by Divya Chaudhary.
Tags: #Review #Recursive Self-Improvement #Alignment Drift #AI Safety #Goal Drift Index (GDI) #Constraint Preservation #Regression Risk #Capability Alignment Ratio (CAR)
March 10, 2026
[Paper Review] The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies
A detailed review of the paper 'The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies', posted on arXiv by Jinyu Hou.
Tags: #Review #Multi-agent Systems #Self-evolution #AI Safety #Alignment Drift #Information Theory #Thermodynamics #Entropy Accumulation #Moltbook
February 12, 2026
[Paper Review] TRACEALIGN -- Tracing the Drift: Attributing Alignment Failures to Training-Time Belief Sources in LLMs
A detailed review of the paper 'TRACEALIGN -- Tracing the Drift: Attributing Alignment Failures to Training-Time Belief Sources in LLMs', posted on arXiv by Aman Chadha.
Tags: #Review #LLM Alignment #Alignment Drift #Training Data Provenance #Belief Conflict Index (BCI) #Suffix Array #Safety Interventions #Reinforcement Learning from Human Feedback #Explainable AI
August 6, 2025