[Paper Review] Pretraining A Large Language Model using Distributed GPUs: A Memory-Efficient Decentralized Paradigm
A detailed review of the paper 'Pretraining A Large Language Model using Distributed GPUs: A Memory-Efficient Decentralized Paradigm', published on arXiv.
#Review #Decentralized Training #Mixture-of-Experts (MoE) #Large Language Models (LLMs) #Memory Efficiency #Sparse Expert Synchronization #Federated Learning #Distributed GPUs
February 12, 2026
[Paper Review] FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning
A detailed review of the paper 'FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning', published on arXiv by Simin Chen.
#Review #Federated Learning #Model Heterogeneity #Representation Learning #Privacy Preservation #Communication Efficiency #Entangled Representation #Knowledge Transfer
November 30, 2025