[Paper Review] Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense

A detailed review of the paper "Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense," published on arXiv.

#Review #Large Reasoning Models (LRMs) #Prompt Injection #Adversarial Attack #Reasoning Distraction #Chain-of-Thought #Robustness #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL)

October 21, 2025