[Paper Review] When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs

A detailed review of the paper "When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs," posted on arXiv by Dasol Choi.

#Review #Audio-Language Models #Jailbreak Attack #Adversarial Audio #Reinforcement Learning #Projected Gradient Descent #Native Payload Discovery #Multimodal AI Safety

August 12, 2025