[Paper Review] MoVE: Translating Laughter and Tears via Mixture of Vocalization Experts in Speech-to-Speech Translation
A detailed review of the paper 'MoVE: Translating Laughter and Tears via Mixture of Vocalization Experts in Speech-to-Speech Translation', published on arXiv by Hung-yi Lee.
#Review #Speech-to-Speech Translation #Non-verbal Vocalizations #Mixture of Experts #AudioLLMs #Expressive Speech #Data Efficiency
April 21, 2026
[Paper Review] On Token's Dilemma: Dynamic MoE with Drift-Aware Token Assignment for Continual Learning of Large Vision Language Models
A detailed review of the paper 'On Token's Dilemma: Dynamic MoE with Drift-Aware Token Assignment for Continual Learning of Large Vision Language Models', published on arXiv by Haodong Lu.
#Review #Multimodal Continual Learning #Large Vision Language Models #Mixture of Experts #Routing-drift #Catastrophic Forgetting
March 30, 2026
[Paper Review] SLER-IR: Spherical Layer-wise Expert Routing for All-in-One Image Restoration
A detailed review of the paper 'SLER-IR: Spherical Layer-wise Expert Routing for All-in-One Image Restoration', published on arXiv by Dizhe Zhang.
#Review #Image Restoration #Mixture of Experts #Degradation Representation #Spherical Embedding #Contrastive Learning #Adaptive Routing #All-in-One Model #Global-Local Fusion
March 8, 2026
[Paper Review] Parallel Latent Reasoning for Sequential Recommendation
A detailed review of the paper 'Parallel Latent Reasoning for Sequential Recommendation', published on arXiv by Yuning Jiang.
#Review #Sequential Recommendation #Latent Reasoning #Parallel Processing #Computational Scaling #Mixture of Experts #Contrastive Learning #Transformer Architecture
January 6, 2026
[Paper Review] UniMoE-Audio: Unified Speech and Music Generation with Dynamic-Capacity MoE
A detailed review of the paper 'UniMoE-Audio: Unified Speech and Music Generation with Dynamic-Capacity MoE', published on arXiv.
#Review #Mixture of Experts #Speech Generation #Music Generation #Multimodal AI #Dynamic Routing #Training Curriculum #Data Imbalance #Audio Synthesis
October 16, 2025
[Paper Review] MoME: Mixture of Matryoshka Experts for Audio-Visual Speech Recognition
A detailed review of the paper 'MoME: Mixture of Matryoshka Experts for Audio-Visual Speech Recognition', published on arXiv.
#Review #Audio-Visual Speech Recognition #Mixture of Experts #Matryoshka Representation Learning #Large Language Models #Elastic Inference #Token Compression #Multimodal AI
October 7, 2025
[Paper Review] Mixture of Global and Local Experts with Diffusion Transformer for Controllable Face Generation
A detailed review of the paper 'Mixture of Global and Local Experts with Diffusion Transformer for Controllable Face Generation', published on arXiv by Kai Li.
#Review #Diffusion Transformer #Mixture of Experts #Controllable Generation #Face Generation #Multimodal Synthesis #Semantic Control #Image Generation
September 4, 2025
[Paper Review] Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation
A detailed review of the paper 'Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation', published on arXiv by Xiaokun Feng.
#Review #Visual Effects #Video Generation #LoRA #Mixture of Experts #Spatial Control #Diffusion Models #Multi-VFX
August 12, 2025
[Paper Review] Grove MoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts
A detailed review of the paper 'Grove MoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts', published on arXiv by Tieyuan Chen.
#Review #Mixture of Experts #LLMs #MoE Architecture #Dynamic Activation #Adjugate Experts #Upcycling Strategy #Load Balancing
August 12, 2025