[Paper Review] Nemotron-Cascade 2: Post-Training LLMs with Cascade RL and Multi-Domain On-Policy Distillation
A detailed review of the paper 'Nemotron-Cascade 2: Post-Training LLMs with Cascade RL and Multi-Domain On-Policy Distillation', published on arXiv.
#Review #LLM Post-Training #Cascade RL #Multi-Domain On-Policy Distillation #Mixture-of-Experts #Reasoning #Agentic Capabilities #Competitive Programming #Mathematical Olympiad
March 19, 2026
[Paper Review] Surgical Post-Training: Cutting Errors, Keeping Knowledge
A detailed review of the paper 'Surgical Post-Training: Cutting Errors, Keeping Knowledge', published on arXiv.
#Review #LLM Post-Training #Catastrophic Forgetting #Direct Preference Optimization (DPO) #Reward-based Learning #Data Rectification #Binary Cross-Entropy #Reasoning Tasks #Knowledge Preservation
March 3, 2026
[Paper Review] Revisiting Parameter Server in LLM Post-Training
A detailed review of the paper 'Revisiting Parameter Server in LLM Post-Training', published on arXiv.
#Review #LLM Post-Training #Parameter Server #Distributed Training #FSDP #On-Demand Communication #Workload Imbalance #Communication Optimization #Deep Learning
January 27, 2026
[Paper Review] RedOne 2.0: Rethinking Domain-specific LLM Post-Training in Social Networking Services
A detailed review of the paper 'RedOne 2.0: Rethinking Domain-specific LLM Post-Training in Social Networking Services' by Zijie Meng, published on arXiv.
#Review #LLM Post-Training #Domain Adaptation #Social Networking Services #Reinforcement Learning #Supervised Fine-Tuning #Catastrophic Forgetting #Data Efficiency
November 10, 2025