[Paper Review] Layer by layer, module by module: Choose both for optimal OOD probing of ViT
A detailed review of the paper 'Layer by layer, module by module: Choose both for optimal OOD probing of ViT', posted to arXiv by Ievgen Redko.
Tags: Review, Vision Transformer, Out-of-Distribution, Linear Probing, Distribution Shift, Foundation Models, Intermediate Layers, Module Analysis
March 8, 2026
[Paper Review] Test-Time Spectrum-Aware Latent Steering for Zero-Shot Generalization in Vision-Language Models
A detailed review of the paper 'Test-Time Spectrum-Aware Latent Steering for Zero-Shot Generalization in Vision-Language Models', posted to arXiv.
Tags: Review, Vision-Language Models, Test-Time Adaptation, Zero-Shot Generalization, Spectral Decomposition, Latent Space Steering, SVD, Out-of-Distribution
November 17, 2025
[Paper Review] Inverse IFEval: Can LLMs Unlearn Stubborn Training Conventions to Follow Real Instructions?
A detailed review of the paper 'Inverse IFEval: Can LLMs Unlearn Stubborn Training Conventions to Follow Real Instructions?', posted to arXiv by Yu Fu.
Tags: Review, LLMs, Instruction Following, Benchmark, Cognitive Inertia, Out-of-Distribution, Supervised Fine-Tuning, Evaluation, Robustness
September 5, 2025
[Paper Review] RL-PLUS: Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization
A detailed review of the paper 'RL-PLUS: Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization', posted to arXiv by Kechi Zhang.
Tags: Review, Large Language Models, Reinforcement Learning, Capability Collapse, Hybrid Policy Optimization, Multiple Importance Sampling, Exploration, Math Reasoning, Out-of-Distribution
August 7, 2025