[Paper Review] MUSE: A Run-Centric Platform for Multimodal Unified Safety Evaluation of Large Language Models
A detailed review of the paper 'MUSE: A Run-Centric Platform for Multimodal Unified Safety Evaluation of Large Language Models', posted on arXiv by Yiran Chen.
#Review #Multimodal LLMs #Safety Evaluation #Red Teaming #Adversarial Attacks #Modality Switching #LLM Alignment #Compliance #ASR
March 4, 2026
[Paper Review] Visual Memory Injection Attacks for Multi-Turn Conversations
A detailed review of the paper 'Visual Memory Injection Attacks for Multi-Turn Conversations', posted on arXiv by Matthias Hein.
#Review #LVLM #Adversarial Attacks #Multi-Turn Conversations #Visual Memory Injection #Stealthy Attacks #Benign Anchoring #Context-Cycling
February 18, 2026
[Paper Review] Few Tokens Matter: Entropy Guided Attacks on Vision-Language Models
A detailed review of the paper 'Few Tokens Matter: Entropy Guided Attacks on Vision-Language Models', posted on arXiv.
#Review #Vision-Language Models #Adversarial Attacks #Entropy-Guided Attacks #Token Vulnerability #Harmful Content #Cross-Model Transferability #Autoregressive Generation
January 8, 2026
[Paper Review] M-ErasureBench: A Comprehensive Multimodal Evaluation Benchmark for Concept Erasure in Diffusion Models
A detailed review of the paper 'M-ErasureBench: A Comprehensive Multimodal Evaluation Benchmark for Concept Erasure in Diffusion Models', posted on arXiv by Jun-Cheng Chen.
#Review #Diffusion Models #Concept Erasure #Multimodal Evaluation #Adversarial Attacks #Robustness #Textual Inversion #Latent Inversion #Cross-Attention
January 5, 2026
[Paper Review] Pay Less Attention to Function Words for Free Robustness of Vision-Language Models
A detailed review of the paper 'Pay Less Attention to Function Words for Free Robustness of Vision-Language Models', posted on arXiv.
#Review #Vision-Language Models #Adversarial Robustness #Function Words #Cross-Attention #Adversarial Attacks #Differential Attention #Vision-Language Alignment
December 10, 2025
[Paper Review] Hail to the Thief: Exploring Attacks and Defenses in Decentralised GRPO
A detailed review of the paper 'Hail to the Thief: Exploring Attacks and Defenses in Decentralised GRPO', posted on arXiv.
#Review #Decentralized RL #GRPO #LLM Post-training #Adversarial Attacks #Data Poisoning #Defense Mechanisms #In-context Attack #Out-of-context Attack
November 13, 2025
[Paper Review] The Alignment Waltz: Jointly Training Agents to Collaborate for Safety
A detailed review of the paper 'The Alignment Waltz: Jointly Training Agents to Collaborate for Safety', posted on arXiv.
#Review #LLM Safety #Multi-agent Reinforcement Learning #Safety Alignment #Overrefusal #Adversarial Attacks #Feedback Agent #Conversation Agent #Dynamic Improvement Reward
October 10, 2025
[Paper Review] WAInjectBench: Benchmarking Prompt Injection Detections for Web Agents
A detailed review of the paper 'WAInjectBench: Benchmarking Prompt Injection Detections for Web Agents', posted on arXiv by Neil Zhenqiang Gong.
#Review #Prompt Injection #Web Agents #Multimodal AI #Adversarial Attacks #Detection Benchmarking #Large Language Models #Image-based Detection #Text-based Detection
October 6, 2025
[Paper Review] Jailbreaking Commercial Black-Box LLMs with Explicitly Harmful Prompts
A detailed review of the paper 'Jailbreaking Commercial Black-Box LLMs with Explicitly Harmful Prompts', posted on arXiv by Liming Fang.
#Review #LLM Jailbreaking #Red Teaming #Malicious Content Detection #Developer Messages #D-Attack #DH-CoT #Adversarial Attacks #Dataset Cleaning
August 25, 2025