[Paper Review] OmniSafeBench-MM: A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack-Defense Evaluation
A detailed review of the paper 'OmniSafeBench-MM: A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack-Defense Evaluation', posted on arXiv by Simeng Qin.
#Review #Multimodal LLMs #Jailbreak Attack #Attack-Defense Evaluation #Benchmark #Safety Alignment #Vulnerability Analysis #Risk Taxonomy #Evaluation Metrics
December 8, 2025
[Paper Review] When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs
A detailed review of the paper 'When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs', posted on arXiv by Dasol Choi.
#Review #Audio-Language Models #Jailbreak Attack #Adversarial Audio #Reinforcement Learning #Projected Gradient Descent #Native Payload Discovery #Multimodal AI Safety
August 12, 2025