[Paper Review] Are Audio-Language Models Listening? Audio-Specialist Heads for Adaptive Audio Steering
A detailed review of the paper 'Are Audio-Language Models Listening? Audio-Specialist Heads for Adaptive Audio Steering', posted on arXiv.
#Review #Audio-Language Models (LALMs) #Text Dominance #Mechanistic Interpretability #Attention Heads #Activation Steering #Multimodal Grounding #Inference-time Intervention
March 10, 2026
[Paper Review] Query-focused and Memory-aware Reranker for Long Context Processing
A detailed review of the paper 'Query-focused and Memory-aware Reranker for Long Context Processing', posted on arXiv.
#Review #Reranking #Large Language Models #Long Context #Attention Heads #Retrieval Augmented Generation (RAG) #Listwise Reranking #Query-focused Retrieval #Memory-aware
February 24, 2026
[Paper Review] Which Heads Matter for Reasoning? RL-Guided KV Cache Compression
A detailed review of the paper 'Which Heads Matter for Reasoning? RL-Guided KV Cache Compression', posted on arXiv by Huan Wang.
#Review #KV Cache Compression #Large Language Models (LLMs) #Reinforcement Learning (RL) #Reasoning Models #Attention Heads #Chain-of-Thought (CoT) #Memory Efficiency
October 13, 2025
[Paper Review] Refusal Falls off a Cliff: How Safety Alignment Fails in Reasoning?
A detailed review of the paper 'Refusal Falls off a Cliff: How Safety Alignment Fails in Reasoning?', posted on arXiv.
#Review #Safety Alignment #Large Reasoning Models #Mechanistic Interpretability #Refusal Cliff #Attention Heads #Data Selection #Linear Probing
October 8, 2025
[Paper Review] Thinking Sparks!: Emergent Attention Heads in Reasoning Models During Post Training
A detailed review of the paper 'Thinking Sparks!: Emergent Attention Heads in Reasoning Models During Post Training', posted on arXiv.
#Review #Mechanistic Interpretability #Attention Heads #Post-Training #Supervised Fine-Tuning (SFT) #Reinforcement Learning (RL) #Circuit Analysis #Reasoning Models #Transformer Architecture
October 1, 2025