[Paper Review] Every Activation Boosted: Scaling General Reasoner to 1 Trillion Open Language Foundation
A detailed review of the paper 'Every Activation Boosted: Scaling General Reasoner to 1 Trillion Open Language Foundation', published on arXiv.
#Review #Large Language Models #Mixture-of-Experts #Reasoning Capability #Sparse Activation #Scaling Laws #FP8 Training #Efficient Training #Instruction Tuning
November 9, 2025
[Paper Review] Every Attention Matters: An Efficient Hybrid Architecture for Long-Context Reasoning
A detailed review of the paper 'Every Attention Matters: An Efficient Hybrid Architecture for Long-Context Reasoning', published on arXiv.
#Review #Long-Context LLM #Hybrid Attention #Linear Attention #Mixture-of-Experts #FP8 Training #GPU Optimization #Training-Inference Alignment #Reinforcement Learning
October 23, 2025
[Paper Review] Metis: Training Large Language Models with Advanced Low-Bit Quantization
A detailed review of the paper 'Metis: Training Large Language Models with Advanced Low-Bit Quantization', posted on arXiv by Hengjie Cao.
#Review #Low-Bit Quantization #LLMs #Spectral Decomposition #Anisotropy #Adaptive Learning Rate #Regularization #FP8 Training #FP4 Training
September 3, 2025
[Paper Review] NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model
A detailed review of the paper 'NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model', posted on arXiv by abercovich.
#Review #Hybrid Architecture #Mamba-Transformer #Reasoning LLM #Model Compression #Knowledge Distillation #Long Context #High Throughput #FP8 Training #Instruction Following
August 21, 2025