[Paper Review] AgentDoG: A Diagnostic Guardrail Framework for AI Agent Safety and Security — A detailed review of the paper "AgentDoG: A Diagnostic Guardrail Framework for AI Agent Safety and Security," posted on arXiv. #Review #AI Agents #Safety Guardrails #Explainable AI (XAI) #Risk Taxonomy #Benchmarking #LLM Safety #Tool Use #Agent Alignment — January 27, 2026
[Paper Review] OmniSafeBench-MM: A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack-Defense Evaluation — A detailed review of the paper "OmniSafeBench-MM: A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack-Defense Evaluation," posted on arXiv by Simeng Qin. #Review #Multimodal LLMs #Jailbreak Attack #Attack-Defense Evaluation #Benchmark #Safety Alignment #Vulnerability Analysis #Risk Taxonomy #Evaluation Metrics — December 8, 2025