[Paper Review] Adversarial Confusion Attack: Disrupting Multimodal Large Language Models

A detailed review of the paper "Adversarial Confusion Attack: Disrupting Multimodal Large Language Models" by Artur Janicki, published on arXiv.

#Review #Adversarial Attack #Multimodal Large Language Models (MLLMs) #Entropy Maximization #Confusion Attack #Black-box Transfer #PGD #AI Agent Safety

December 3, 2025