[Paper Review] A Systematic Study of Cross-Modal Typographic Attacks on Audio-Visual Reasoning
A detailed review of the paper 'A Systematic Study of Cross-Modal Typographic Attacks on Audio-Visual Reasoning', posted on arXiv by Deepti Ghadiyaram.
#Review #Multi-modal Large Language Models #Audio Typography #Adversarial Attack #Cross-modal Robustness #Semantic Steering #Safety Application #Content Moderation
April 8, 2026
[Paper Review] When the Prompt Becomes Visual: Vision-Centric Jailbreak Attacks for Large Image Editing Models
A detailed review of the paper 'When the Prompt Becomes Visual: Vision-Centric Jailbreak Attacks for Large Image Editing Models', posted on arXiv.
#Review #Vision-Centric Jailbreak Attack #Image Editing Models #Safety Benchmark #IESBench #Multimodal Reasoning #Adversarial Attack #Defense Mechanism
February 11, 2026
[Paper Review] Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models
A detailed review of the paper 'Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models', posted on arXiv by Guanhong Tao.
#Review #LVLM Security #Token Compression #Adversarial Attack #Robustness Degradation #Compression-Aware Attack #Efficiency-Security Trade-off #Black-box Attack
January 26, 2026
[Paper Review] GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs
A detailed review of the paper 'GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs', posted on arXiv.
#Review #MoE LLM #Safety Alignment #Adversarial Attack #Neuron Pruning #Gate-level Profiling #Transfer Attack #Vision Language Model
December 30, 2025
[Paper Review] DeContext as Defense: Safe Image Editing in Diffusion Transformers
A detailed review of the paper 'DeContext as Defense: Safe Image Editing in Diffusion Transformers', posted on arXiv.
#Review #Diffusion Transformers #Image Editing #Privacy Protection #Adversarial Attack #Attention Mechanism #Identity Preservation #Deepfake Defense #In-context Learning
December 18, 2025
[Paper Review] In-Context Representation Hijacking
A detailed review of the paper 'In-Context Representation Hijacking', posted on arXiv by yossig.
#Review #LLM Jailbreak #In-Context Learning #Representation Hijacking #Mechanistic Interpretability #LLM Safety #Adversarial Attack #Semantic Shift
December 3, 2025
[Paper Review] Adversarial Confusion Attack: Disrupting Multimodal Large Language Models
A detailed review of the paper 'Adversarial Confusion Attack: Disrupting Multimodal Large Language Models', posted on arXiv by Artur Janicki.
#Review #Adversarial Attack #Multimodal Large Language Models (MLLMs) #Entropy Maximization #Confusion Attack #Black-box Transfer #PGD #AI Agent Safety
December 3, 2025
[Paper Review] Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models
A detailed review of the paper 'Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models', posted on arXiv.
#Review #Vision-Language Models (VLMs) #Adversarial Attack #Jailbreaking #Reward Hacking #Content Moderation Bypass #Cross-Model Transferability #Safety Vulnerabilities
November 23, 2025
[Paper Review] Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense
A detailed review of the paper 'Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense', posted on arXiv.
#Review #Large Reasoning Models (LRMs) #Prompt Injection #Adversarial Attack #Reasoning Distraction #Chain-of-Thought #Robustness #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL)
October 21, 2025
[Paper Review] IAG: Input-aware Backdoor Attack on VLMs for Visual Grounding
A detailed review of the paper 'IAG: Input-aware Backdoor Attack on VLMs for Visual Grounding', posted on arXiv by Di Zhang.
#Review #Backdoor Attack #Vision-Language Models (VLMs) #Visual Grounding #Input-aware Trigger #Adversarial Attack #Security #U-Net #Open-vocabulary
August 14, 2025
[Paper Review] Adversarial Video Promotion Against Text-to-Video Retrieval
A detailed review of the paper 'Adversarial Video Promotion Against Text-to-Video Retrieval', posted on arXiv by Shuai Liu.
#Review #Adversarial Attack #Video Promotion #Text-to-Video Retrieval #Modality Refinement #Black-box Attack #Video Manipulation #Transferability
August 13, 2025
[Paper Review] Fact2Fiction: Targeted Poisoning Attack to Agentic Fact-checking System
A detailed review of the paper 'Fact2Fiction: Targeted Poisoning Attack to Agentic Fact-checking System', posted on arXiv by Reynold Cheng.
#Review #Adversarial Attack #Poisoning Attack #Fact-checking #LLM Agent #Retrieval Augmented Generation #Misinformation #System Security
August 12, 2025