[Paper Review] MonitorBench: A Comprehensive Benchmark for Chain-of-Thought Monitorability in Large Language Models
A detailed review of the paper 'MonitorBench: A Comprehensive Benchmark for Chain-of-Thought Monitorability in Large Language Models', posted on arXiv.
#Review #Large Language Models #Chain-of-Thought #Monitorability #Benchmark #AI Safety #Stress-Test #Faithfulness
March 31, 2026

[Paper Review] The Reasoning Trap -- Logical Reasoning as a Mechanistic Pathway to Situational Awareness
A detailed review of the paper 'The Reasoning Trap -- Logical Reasoning as a Mechanistic Pathway to Situational Awareness', posted to arXiv by Divya Chaudhary.
#Review #Logical Reasoning #Situational Awareness #LLMs #Deceptive Alignment #AI Safety #RAISE Framework #Self-Modeling #Deduction #Induction #Abduction
March 10, 2026

[Paper Review] SAHOO: Safeguarded Alignment for High-Order Optimization Objectives in Recursive Self-Improvement
A detailed review of the paper 'SAHOO: Safeguarded Alignment for High-Order Optimization Objectives in Recursive Self-Improvement', posted to arXiv by Divya Chaudhary.
#Review #Recursive Self-Improvement #Alignment Drift #AI Safety #Goal Drift Index (GDI) #Constraint Preservation #Regression Risk #Capability Alignment Ratio (CAR)
March 10, 2026

[Paper Review] Reasoning Models Struggle to Control their Chains of Thought
A detailed review of the paper 'Reasoning Models Struggle to Control their Chains of Thought', posted on arXiv.
#Review #Chain-of-Thought (CoT) #Model Controllability #AI Safety #Monitorability #Large Language Models (LLMs) #Reinforcement Learning (RL) #Evaluation Suite
March 8, 2026

[Paper Review] Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use
A detailed review of the paper 'Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use', posted on arXiv.
#Review #Agentic LLM #AI Safety #Multi-Step Tool Use #Reinforcement Learning #Preference-Based Learning #Safety Guardrails #Refusal Mechanism #Structured Reasoning
March 3, 2026

[Paper Review] Exposing the Systematic Vulnerability of Open-Weight Models to Prefill Attacks
A detailed review of the paper 'Exposing the Systematic Vulnerability of Open-Weight Models to Prefill Attacks', posted on arXiv.
#Review #Large Language Models #Prefill Attacks #AI Safety #Red Teaming #Vulnerability #Open-Weight Models #Jailbreaking #Generative AI
February 16, 2026

[Paper Review] The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies
A detailed review of the paper 'The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies', posted to arXiv by Jinyu Hou.
#Review #Multi-agent Systems #Self-evolution #AI Safety #Alignment Drift #Information Theory #Thermodynamics #Entropy Accumulation #Moltbook
February 12, 2026

[Paper Review] A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Doubao 1.8, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5
A detailed review of the paper 'A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Doubao 1.8, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5', posted to arXiv by Yutao Wu.
#Review #AI Safety #Large Language Models #Multimodal LLMs #Benchmark Evaluation #Adversarial Robustness #Multilingual Evaluation #Regulatory Compliance #Image Generation Safety
January 15, 2026

[Paper Review] Steerability of Instrumental-Convergence Tendencies in LLMs
A detailed review of the paper 'Steerability of Instrumental-Convergence Tendencies in LLMs', posted to arXiv by j-hoscilowic.
#Review #LLM Steerability #Instrumental Convergence #AI Safety #AI Security #Open-Weight Models #Prompt Engineering #Model Control #Behavioral Alignment
January 6, 2026

[Paper Review] K-EXAONE Technical Report
A detailed review of the paper 'K-EXAONE Technical Report', posted on arXiv.
#Review #Multilingual Language Model #Mixture-of-Experts (MoE) #Long Context #AI Safety #Korean AI #Foundation Model #Reinforcement Learning (RL)
January 5, 2026

[Paper Review] COMPASS: A Framework for Evaluating Organization-Specific Policy Alignment in LLMs
A detailed review of the paper 'COMPASS: A Framework for Evaluating Organization-Specific Policy Alignment in LLMs', posted on arXiv.
#Review #LLM Evaluation #Policy Alignment #Organizational Policies #AI Safety #Adversarial Robustness #Refusal Behavior #Prompt Engineering #Fine-tuning
January 5, 2026

[Paper Review] Reinventing Clinical Dialogue: Agentic Paradigms for LLM Enabled Healthcare Communication
A detailed review of the paper 'Reinventing Clinical Dialogue: Agentic Paradigms for LLM Enabled Healthcare Communication', posted to arXiv by Hengshu Zhu.
#Review #Clinical Dialogue #LLM Agents #Healthcare AI #Agentic Paradigm #Medical Decision Support #Knowledge Grounding #AI Safety #Workflow Automation
December 10, 2025

[Paper Review] AI & Human Co-Improvement for Safer Co-Superintelligence
A detailed review of the paper 'AI & Human Co-Improvement for Safer Co-Superintelligence', posted on arXiv.
#Review #AI Safety #Superintelligence #Human-AI Collaboration #Self-Improving AI #Co-Improvement #Alignment #AI Research Agents
December 7, 2025

[Paper Review] Machine Text Detectors are Membership Inference Attacks
A detailed review of the paper 'Machine Text Detectors are Membership Inference Attacks', posted to arXiv by Naoaki Okazaki.
#Review #Membership Inference Attacks #Machine-Generated Text Detection #Transferability #Likelihood Ratio Test #Large Language Models #Zero-Shot Detection #Model Security #AI Safety
October 23, 2025

[Paper Review] RefusalBench: Generative Evaluation of Selective Refusal in Grounded Language Models
A detailed review of the paper 'RefusalBench: Generative Evaluation of Selective Refusal in Grounded Language Models', posted on arXiv.
#Review #RAG Systems #Selective Refusal #Generative Evaluation #Linguistic Perturbations #LLM Evaluation #Informational Uncertainty #Model Calibration #AI Safety
October 17, 2025

[Paper Review] Blueprints of Trust: AI System Cards for End to End Transparency and Governance
A detailed review of the paper 'Blueprints of Trust: AI System Cards for End to End Transparency and Governance', posted to arXiv by Roman Zhukov.
#Review #AI Governance #Transparency #AI System Card #Hazard-Aware System Card #Data Provenance #AI Safety #AI Risk Management #ISO/IEC 42001
September 26, 2025

[Paper Review] FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions
A detailed review of the paper 'FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions', posted to arXiv by tengdai722.
#Review #Large Reasoning Models #LLM Evaluation #Multimodal AI #Reasoning Behaviors #Hallucination #Contamination-Free #AI Safety #Instruction Following
September 23, 2025

[Paper Review] R²AI: Towards Resistant and Resilient AI in an Evolving World
A detailed review of the paper 'R²AI: Towards Resistant and Resilient AI in an Evolving World', posted to arXiv by Bowen Zhou.
#Review #AI Safety #Resistant AI #Resilient AI #Coevolution #Fast-Slow Models #Adversarial Training #Continual Learning #AGI Alignment
September 9, 2025

[Paper Review] False Sense of Security: Why Probing-based Malicious Input Detection Fails to Generalize
A detailed review of the paper 'False Sense of Security: Why Probing-based Malicious Input Detection Fails to Generalize', posted to arXiv by Muhao Chen.
#Review #LLM Safety #Malicious Input Detection #Probing Classifiers #Out-of-Distribution Generalization #Superficial Patterns #Instructional Patterns #Trigger Words #AI Safety
September 5, 2025

[Paper Review] CorrSteer: Steering Improves Task Performance and Safety in LLMs through Correlation-based Sparse Autoencoder Feature Selection
A detailed review of the paper 'CorrSteer: Steering Improves Task Performance and Safety in LLMs through Correlation-based Sparse Autoencoder Feature Selection', posted to arXiv by Adriano Koshiyama.
#Review #Sparse Autoencoders #LLM Steering #Feature Selection #Correlation Analysis #AI Safety #Bias Mitigation #Mechanistic Interpretability
August 20, 2025

[Paper Review] A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems
A detailed review of the paper 'A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems', posted to arXiv by Xinhao Yi.
#Review #Self-Evolving AI Agents #Lifelong Learning #Foundation Models #Multi-Agent Systems #Agent Optimization #Prompt Engineering #Tool Use #AI Safety #Survey
August 12, 2025