[Paper Review] CARE-Edit: Condition-Aware Routing of Experts for Contextual Image Editing
A detailed review of the paper 'CARE-Edit: Condition-Aware Routing of Experts for Contextual Image Editing', posted on arXiv by Dan Xu.
#Review #Image Editing #Diffusion Models #Mixture-of-Experts (MoE) #Condition-Aware Routing #Contextual Image Editing #Mask Repaint #Latent Mixture #Diffusion Transformer
March 9, 2026
[Paper Review] Timer-S1: A Billion-Scale Time Series Foundation Model with Serial Scaling
A detailed review of the paper 'Timer-S1: A Billion-Scale Time Series Foundation Model with Serial Scaling', published on arXiv.
#Review #Time Series Forecasting #Foundation Model #Mixture-of-Experts (MoE) #Serial Scaling #Transformer #Pre-training #Probabilistic Forecasting #Data Augmentation
March 5, 2026
[Paper Review] Qwen3-Coder-Next Technical Report
A detailed review of the 'Qwen3-Coder-Next Technical Report', published on arXiv.
#Review #Coding Agents #Large Language Models (LLMs) #Mixture-of-Experts (MoE) #Agentic Training #Software Engineering #Reinforcement Learning #Code Generation #Tool Usage
March 3, 2026
[Paper Review] Beyond Language Modeling: An Exploration of Multimodal Pretraining
A detailed review of the paper 'Beyond Language Modeling: An Exploration of Multimodal Pretraining', published on arXiv.
#Review #Multimodal Pretraining #Vision-Language Models #Mixture-of-Experts (MoE) #Representation Autoencoders (RAE) #World Modeling #Scaling Laws #Diffusion Models #Unified Architectures
March 3, 2026
[Paper Review] Pretraining A Large Language Model using Distributed GPUs: A Memory-Efficient Decentralized Paradigm
A detailed review of the paper 'Pretraining A Large Language Model using Distributed GPUs: A Memory-Efficient Decentralized Paradigm', published on arXiv.
#Review #Decentralized Training #Mixture-of-Experts (MoE) #Large Language Models (LLMs) #Memory Efficiency #Sparse Expert Synchronization #Federated Learning #Distributed GPUs
February 12, 2026
[Paper Review] Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters
A detailed review of the paper 'Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters', published on arXiv.
#Review #Mixture-of-Experts (MoE) #Sparse Models #Inference Efficiency #Hybrid Attention #Multi-Token Prediction (MTP) #Reinforcement Learning (RL) #Agentic AI #Long-Context Understanding
February 11, 2026
[Paper Review] OmniMoE: An Efficient MoE by Orchestrating Atomic Experts at Scale
A detailed review of the paper 'OmniMoE: An Efficient MoE by Orchestrating Atomic Experts at Scale', published on arXiv.
#Review #Mixture-of-Experts (MoE) #Fine-Grained Experts #Efficient Architectures #Transformer #Routing Algorithms #Hardware Acceleration #Sparse Models
February 8, 2026
[Paper Review] Scaling Embeddings Outperforms Scaling Experts in Language Models
A detailed review of the paper 'Scaling Embeddings Outperforms Scaling Experts in Language Models', published on arXiv.
#Review #Embedding Scaling #N-gram Embedding #Mixture-of-Experts (MoE) #Large Language Models (LLMs) #Parameter Efficiency #Inference Optimization #Speculative Decoding
January 29, 2026
[Paper Review] LongCat-Flash-Thinking-2601 Technical Report
A detailed review of the 'LongCat-Flash-Thinking-2601 Technical Report', published on arXiv.
#Review #Agentic AI #Large Language Models (LLMs) #Mixture-of-Experts (MoE) #Reinforcement Learning (RL) #Context Management #Scalable Training #Test-Time Reasoning #Open-Source Model
January 25, 2026
[Paper Review] The Illusion of Specialization: Unveiling the Domain-Invariant 'Standing Committee' in Mixture-of-Experts Models
A detailed review of the paper 'The Illusion of Specialization: Unveiling the Domain-Invariant 'Standing Committee' in Mixture-of-Experts Models', published on arXiv.
#Review #Mixture-of-Experts (MoE) #Sparse Routing #Domain Specialization #Load Balancing #Interpretability #Standing Committee #LLM
January 8, 2026
[Paper Review] K-EXAONE Technical Report
A detailed review of the 'K-EXAONE Technical Report', published on arXiv.
#Review #Multilingual Language Model #Mixture-of-Experts (MoE) #Long Context #AI Safety #Korean AI #Foundation Model #Reinforcement Learning (RL)
January 5, 2026
[Paper Review] Coupling Experts and Routers in Mixture-of-Experts via an Auxiliary Loss
A detailed review of the paper 'Coupling Experts and Routers in Mixture-of-Experts via an Auxiliary Loss', published on arXiv.
#Review #Mixture-of-Experts (MoE) #Router-Expert Coupling #Auxiliary Loss #Expert Specialization #Large Language Models (LLMs) #Computational Efficiency
December 29, 2025
[Paper Review] Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
A detailed review of the paper 'Stabilizing Reinforcement Learning with LLMs: Formulation and Practices', published on arXiv.
#Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Policy Gradient #REINFORCE #Mixture-of-Experts (MoE) #Training Stability #Importance Sampling #Routing Replay #Off-policy Learning
December 1, 2025
[Paper Review] Uni-MoE-2.0-Omni: Scaling Language-Centric Omnimodal Large Model with Advanced MoE, Training and Data
A detailed review of the paper 'Uni-MoE-2.0-Omni: Scaling Language-Centric Omnimodal Large Model with Advanced MoE, Training and Data', published on arXiv.
#Review #Omnimodal Large Models #Mixture-of-Experts (MoE) #Language-Centric AI #Multimodal Understanding #Multimodal Generation #Progressive Training #Omni-Modality 3D RoPE
November 17, 2025
[Paper Review] Virtual Width Networks
A detailed review of the paper 'Virtual Width Networks', published on arXiv.
#Review #Virtual Width Networks #Transformer #Mixture-of-Experts (MoE) #Scaling Laws #Representation Learning #Model Efficiency #Multi-Token Prediction #Hyper-Connections
November 16, 2025
[Paper Review] Routing Manifold Alignment Improves Generalization of Mixture-of-Experts LLMs
A detailed review of the paper 'Routing Manifold Alignment Improves Generalization of Mixture-of-Experts LLMs', posted on arXiv by Ziyue Li.
#Review #Mixture-of-Experts (MoE) #Large Language Models (LLMs) #Router Optimization #Manifold Regularization #Generalization #Post-training Fine-tuning #Task Embedding Alignment
November 10, 2025
[Paper Review] LongCat-Flash-Omni Technical Report
A detailed review of the 'LongCat-Flash-Omni Technical Report', posted on arXiv by Bin Xiao.
#Review #Omni-modal AI #Multimodal LLM #Real-time Interaction #Mixture-of-Experts (MoE) #Streaming Inference #Distributed Training #Curriculum Learning #Audio-Visual Perception
November 9, 2025
[Paper Review] Routing Matters in MoE: Scaling Diffusion Transformers with Explicit Routing Guidance
A detailed review of the paper 'Routing Matters in MoE: Scaling Diffusion Transformers with Explicit Routing Guidance', published on arXiv.
#Review #Mixture-of-Experts (MoE) #Diffusion Transformers (DiTs) #Routing Guidance #Semantic Specialization #Contrastive Learning #Image Generation #Flow Matching
October 29, 2025
[Paper Review] Rewiring Experts on the Fly: Continuous Rerouting for Better Online Adaptation in Mixture-of-Expert models
A detailed review of the paper 'Rewiring Experts on the Fly: Continuous Rerouting for Better Online Adaptation in Mixture-of-Expert models', posted on arXiv by Shiwei Liu.
#Review #Mixture-of-Experts (MoE) #Online Adaptation #Test-Time Adaptation (TTA) #Expert Routing #Large Language Models (LLMs) #Self-Supervision #Computational Efficiency #Context Shift Robustness
October 20, 2025
[Paper Review] EchoVLM: Dynamic Mixture-of-Experts Vision-Language Model for Universal Ultrasound Intelligence
A detailed review of the paper 'EchoVLM: Dynamic Mixture-of-Experts Vision-Language Model for Universal Ultrasound Intelligence', posted on arXiv by Qinghua Huang.
#Review #Vision-Language Models #Ultrasound Imaging #Medical Diagnosis #Mixture-of-Experts (MoE) #Instruction Tuning #Multimodal AI #Report Generation #VQA
September 19, 2025
[Paper Review] Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks
A detailed review of the paper 'Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks', posted on arXiv by Daisuke Nohara.
#Review #Mixture-of-Experts (MoE) #Sparsity #Scaling Laws #Reasoning Tasks #Memorization #Large Language Models #Generalization Gap #Top-k Routing
August 27, 2025
[Paper Review] Intern-S1: A Scientific Multimodal Foundation Model
A detailed review of the paper 'Intern-S1: A Scientific Multimodal Foundation Model', posted on arXiv by xuhuang87.
#Review #Multimodal Foundation Model #Scientific AI #Reinforcement Learning #Mixture-of-Experts (MoE) #Dynamic Tokenizer #Data Curation #Low-Resource Learning
August 22, 2025
[Paper Review] MoBE: Mixture-of-Basis-Experts for Compressing MoE-based LLMs
A detailed review of the paper 'MoBE: Mixture-of-Basis-Experts for Compressing MoE-based LLMs', posted on arXiv by Jianguo Li.
#Review #Mixture-of-Experts (MoE) #LLM Compression #Matrix Decomposition #Parameter Efficiency #Deep Learning #Memory Optimization
August 12, 2025
[Paper Review] InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation
A detailed review of the paper 'InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation', posted on arXiv by Yang Tian.
#Review #Vision-Language-Action (VLA) #Instruction Tuning #Multimodal Reasoning #Robotic Manipulation #Catastrophic Forgetting #Mixture-of-Experts (MoE) #Flow Matching
August 5, 2025