[Paper Review] LinguDistill: Recovering Linguistic Ability in Vision-Language Models via Selective Cross-Modal Distillation
A detailed review of the paper 'LinguDistill: Recovering Linguistic Ability in Vision-Language Models via Selective Cross-Modal Distillation', published on arXiv.
Tags: Review, Vision-Language Models, Knowledge Distillation, Linguistic Ability, KV-cache Sharing, Multimodal Adaptation, Catastrophic Forgetting
April 2, 2026

[Paper Review] On Token's Dilemma: Dynamic MoE with Drift-Aware Token Assignment for Continual Learning of Large Vision Language Models
A detailed review of the paper 'On Token's Dilemma: Dynamic MoE with Drift-Aware Token Assignment for Continual Learning of Large Vision Language Models', posted on arXiv by Haodong Lu.
Tags: Review, Multimodal Continual Learning, Large Vision Language Models, Mixture of Experts, Routing-drift, Catastrophic Forgetting
March 30, 2026

[Paper Review] CurveStream: Boosting Streaming Video Understanding in MLLMs via Curvature-Aware Hierarchical Visual Memory Management
A detailed review of the paper 'CurveStream: Boosting Streaming Video Understanding in MLLMs via Curvature-Aware Hierarchical Visual Memory Management', posted on arXiv by Tao Chen.
Tags: Review, Streaming Video Understanding, MLLMs, Memory Management, Curvature Score, Hierarchical Visual Memory, Catastrophic Forgetting
March 22, 2026

[Paper Review] Online Experiential Learning for Language Models
A detailed review of the paper 'Online Experiential Learning for Language Models', published on arXiv.
Tags: Review, Online Experiential Learning (OEL), Context Distillation, Language Models, Reward-Free Learning, Catastrophic Forgetting, Token Efficiency, On-Policy Learning
March 17, 2026

[Paper Review] Surgical Post-Training: Cutting Errors, Keeping Knowledge
A detailed review of the paper 'Surgical Post-Training: Cutting Errors, Keeping Knowledge', published on arXiv.
Tags: Review, LLM Post-Training, Catastrophic Forgetting, Direct Preference Optimization (DPO), Reward-based Learning, Data Rectification, Binary Cross-Entropy, Reasoning Tasks, Knowledge Preservation
March 3, 2026

[Paper Review] Efficient Continual Learning in Language Models via Thalamically Routed Cortical Columns
A detailed review of the paper 'Efficient Continual Learning in Language Models via Thalamically Routed Cortical Columns', posted on arXiv by Afshin Khadangi.
Tags: Review, Continual Learning, Language Models, Sparse Routing, Cortical Columns, Thalamic Routing, Catastrophic Forgetting, Stability-Plasticity
February 26, 2026

[Paper Review] Xiaomi-Robotics-0: An Open-Sourced Vision-Language-Action Model with Real-Time Execution
A detailed review of the paper 'Xiaomi-Robotics-0: An Open-Sourced Vision-Language-Action Model with Real-Time Execution', published on arXiv.
Tags: Review, Vision-Language-Action (VLA), Real-Time Robotics, Diffusion Transformer, Flow Matching, Asynchronous Execution, Robot Manipulation, Pre-training, Catastrophic Forgetting
February 15, 2026

[Paper Review] RLinf-Co: Reinforcement Learning-Based Sim-Real Co-Training for VLA Models
A detailed review of the paper 'RLinf-Co: Reinforcement Learning-Based Sim-Real Co-Training for VLA Models', published on arXiv.
Tags: Review, Reinforcement Learning, Sim-to-Real, Co-training, VLA Models, Robotic Manipulation, Supervised Fine-tuning, Catastrophic Forgetting
February 15, 2026

[Paper Review] TwinBrainVLA: Unleashing the Potential of Generalist VLMs for Embodied Tasks via Asymmetric Mixture-of-Transformers
A detailed review of the paper 'TwinBrainVLA: Unleashing the Potential of Generalist VLMs for Embodied Tasks via Asymmetric Mixture-of-Transformers', published on arXiv.
Tags: Review, Vision-Language-Action (VLA), Embodied AI, Robotics, Catastrophic Forgetting, Asymmetric Mixture-of-Transformers (AsyMoT), Generalist VLM, Specialist VLM, Flow-Matching
January 25, 2026

[Paper Review] CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion
A detailed review of the paper 'CLARE: Continual Learning for Vision-Language-Action Models via Autonomous Adapter Routing and Expansion', published on arXiv.
Tags: Review, Continual Learning, Vision-Language-Action Models, Adapter Learning, Catastrophic Forgetting, Autonomous Routing, Parameter-Efficient Learning, Robotics
January 19, 2026

[Paper Review] Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting
A detailed review of the paper 'Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting', published on arXiv.
Tags: Review, Supervised Fine-Tuning (SFT), Catastrophic Forgetting, Entropy-Adaptive Fine-Tuning (EAFT), Large Language Models (LLMs), Domain Adaptation, Reinforcement Learning (RL), Confident Conflicts
January 7, 2026

[Paper Review] EtCon: Edit-then-Consolidate for Reliable Knowledge Editing
A detailed review of the paper 'EtCon: Edit-then-Consolidate for Reliable Knowledge Editing', posted on arXiv by Chenglin Li.
Tags: Review, Knowledge Editing, Large Language Models, Lifelong Learning, Reinforcement Learning, Trust Region Policy Optimization, Chain-of-Thought, Catastrophic Forgetting
December 10, 2025

[Paper Review] Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates
A detailed review of the paper 'Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates', posted on arXiv by Nikolaos Aletras.
Tags: Review, Large Language Models (LLMs), Catastrophic Forgetting, Language Adaptation, Continual Pre-training, Parameter Freezing, Low-Resource Languages, Source Knowledge Preservation
December 4, 2025

[Paper Review] RedOne 2.0: Rethinking Domain-specific LLM Post-Training in Social Networking Services
A detailed review of the paper 'RedOne 2.0: Rethinking Domain-specific LLM Post-Training in Social Networking Services', posted on arXiv by Zijie Meng.
Tags: Review, LLM Post-Training, Domain Adaptation, Social Networking Services, Reinforcement Learning, Supervised Fine-Tuning, Catastrophic Forgetting, Data Efficiency
November 10, 2025

[Paper Review] RLoop: An Self-Improving Framework for Reinforcement Learning with Iterative Policy Initialization
A detailed review of the paper 'RLoop: An Self-Improving Framework for Reinforcement Learning with Iterative Policy Initialization', posted on arXiv by Wenhao Huang.
Tags: Review, Reinforcement Learning, LLMs, Generalization, Overfitting, Catastrophic Forgetting, Iterative Policy Optimization, Policy Diversity
November 10, 2025

[Paper Review] RECALL: REpresentation-aligned Catastrophic-forgetting ALLeviation via Hierarchical Model Merging
A detailed review of the paper 'RECALL: REpresentation-aligned Catastrophic-forgetting ALLeviation via Hierarchical Model Merging', published on arXiv.
Tags: Review, Catastrophic Forgetting, Continual Learning, Model Merging, LLMs, Representation Learning, Data-free Learning, Hierarchical Parameter Fusion
October 27, 2025

[Paper Review] KORE: Enhancing Knowledge Injection for Large Multimodal Models via Knowledge-Oriented Augmentations and Constraints
A detailed review of the paper 'KORE: Enhancing Knowledge Injection for Large Multimodal Models via Knowledge-Oriented Augmentations and Constraints', posted on arXiv by Jinhe Bi.
Tags: Review, Knowledge Injection, Large Multimodal Models, Catastrophic Forgetting, Data Augmentation, Parameter-Efficient Fine-Tuning, Null Space, Continual Learning
October 23, 2025

[Paper Review] Fine-tuning Done Right in Model Editing
A detailed review of the paper 'Fine-tuning Done Right in Model Editing', posted on arXiv by Du Su.
Tags: Review, Model Editing, Fine-tuning, Large Language Models, Catastrophic Forgetting, Breadth-First Pipeline, Depth-First Pipeline, Localized Tuning, Lifelong Learning
September 29, 2025

[Paper Review] The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward
A detailed review of the paper 'The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward', posted on arXiv by Xiaoyu Tan.
Tags: Review, Reinforcement Learning, Large Language Models (LLMs), Diversity Collapse, f-divergence, Forward-KL, JS-divergence, Pass@k, Catastrophic Forgetting
September 12, 2025

[Paper Review] Provable Benefits of In-Tool Learning for Large Language Models
A detailed review of the paper 'Provable Benefits of In-Tool Learning for Large Language Models', posted on arXiv by Vivien Cabannes.
Tags: Review, Large Language Models, In-Tool Learning, In-Weight Learning, Factual Recall, Retrieval-Augmented Generation, Scaling Laws, Parameter Efficiency, Catastrophic Forgetting
August 29, 2025

[Paper Review] GeRe: Towards Efficient Anti-Forgetting in Continual Learning of LLM via General Samples Replay
A detailed review of the paper 'GeRe: Towards Efficient Anti-Forgetting in Continual Learning of LLM via General Samples Replay', posted on arXiv by Yang Fan.
Tags: Review, Continual Learning, Large Language Models (LLMs), Catastrophic Forgetting, Replay, Knowledge Distillation, Activation States, Anti-forgetting, Threshold-based Margin Loss
August 13, 2025

[Paper Review] AlignGuard-LoRA: Alignment-Preserving Fine-Tuning via Fisher-Guided Decomposition and Riemannian-Geodesic Collision Regularization
A detailed review of the paper 'AlignGuard-LoRA: Alignment-Preserving Fine-Tuning via Fisher-Guided Decomposition and Riemannian-Geodesic Collision Regularization', posted on arXiv by Aman Chadha.
Tags: Review, Alignment Preservation, Fine-Tuning, LoRA, Fisher Information Matrix, Catastrophic Forgetting, LLM Safety, Riemannian Geometry, Parameter-Efficient Learning
August 6, 2025

[Paper Review] InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation
A detailed review of the paper 'InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation', posted on arXiv by Yang Tian.
Tags: Review, Vision-Language-Action (VLA), Instruction Tuning, Multimodal Reasoning, Robotic Manipulation, Catastrophic Forgetting, Mixture-of-Experts (MoE), Flow Matching
August 5, 2025