[Paper Review] ToolSafe: Enhancing Tool Invocation Safety of LLM-based Agents via Proactive Step-level Guardrail and Feedback

A detailed review of the paper 'ToolSafe: Enhancing Tool Invocation Safety of LLM-based agents via Proactive Step-level Guardrail and Feedback', posted on arXiv by Shikun Zhang.

#Review #LLM Agents #Tool Use Safety #Guardrail #Step-level Safety Detection #Prompt Injection #Reinforcement Learning #Feedback Framework

January 15, 2026