[Paper Review] AdvEvo-MARL: Shaping Internalized Safety through Adversarial Co-Evolution in Multi-Agent Reinforcement Learning — A detailed review of the paper 'AdvEvo-MARL: Shaping Internalized Safety through Adversarial Co-Evolution in Multi-Agent Reinforcement Learning' by Zeliang Zhang, posted on arXiv. #Review #Multi-Agent Reinforcement Learning #Adversarial Co-evolution #LLM Safety #Jailbreak Attacks #Internalized Safety #Public Baseline #System Robustness — October 7, 2025
[Paper Review] OffTopicEval: When Large Language Models Enter the Wrong Chat, Almost Always! — A detailed review of the paper 'OffTopicEval: When Large Language Models Enter the Wrong Chat, Almost Always!' posted on arXiv. #Review #Large Language Models (LLMs) #Operational Safety #Out-of-Domain (OOD) #Prompt Steering #Jailbreak Attacks #Evaluation Benchmark #Refusal Rate — October 1, 2025