[Paper Review] Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use
A detailed review of the paper "Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use," posted on arXiv.
#Review #Agentic LLM #AI Safety #Multi-Step Tool Use #Reinforcement Learning #Preference-Based Learning #Safety Guardrails #Refusal Mechanism #Structured Reasoning
March 3, 2026
[Paper Review] AgentDoG: A Diagnostic Guardrail Framework for AI Agent Safety and Security
A detailed review of the paper "AgentDoG: A Diagnostic Guardrail Framework for AI Agent Safety and Security," posted on arXiv.
#Review #AI Agents #Safety Guardrails #Explainable AI (XAI) #Risk Taxonomy #Benchmarking #LLM Safety #Tool Use #Agent Alignment
January 27, 2026