[Paper Review] Janus: Disaggregating Attention and Experts for Scalable MoE Inference
A detailed review of the paper 'Janus: Disaggregating Attention and Experts for Scalable MoE Inference', published on arXiv.
#Review #MoE Inference #Disaggregated Architecture #Resource Management #Scalability #Load Balancing #GPU Utilization #Communication Optimization
December 16, 2025
[Paper Review] Taming the Chaos: Coordinated Autoscaling for Heterogeneous and Disaggregated LLM Inference
A detailed review of the paper 'Taming the Chaos: Coordinated Autoscaling for Heterogeneous and Disaggregated LLM Inference', posted on arXiv by Chunlei Han.
#Review #LLM Inference #Autoscaling #Disaggregated Architecture #Heterogeneous Hardware #Resource Management #Topology-aware Scheduling #GPU Utilization
August 28, 2025