[Paper Review] LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding
A detailed review of the paper "LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding," posted on arXiv.
Tags: #Review #Speculative Decoding #LLM Inference #Acceptance Rate #KL Divergence #Total Variation Distance #Loss Functions #Draft Model Training #Adaptive Learning
March 1, 2026
[Paper Review] KLASS: KL-Guided Fast Inference in Masked Diffusion Models
A detailed review of the paper "KLASS: KL-Guided Fast Inference in Masked Diffusion Models," posted on arXiv.
Tags: #Review #Masked Diffusion Models #Fast Inference #Adaptive Sampling #KL Divergence #Confidence Score #Generative AI #Efficient Sampling
November 11, 2025
[Paper Review] Low-probability Tokens Sustain Exploration in Reinforcement Learning with Verifiable Reward
A detailed review of the paper "Low-probability Tokens Sustain Exploration in Reinforcement Learning with Verifiable Reward," posted on arXiv.
Tags: #Review #Reinforcement Learning #LLM Exploration #Verifiable Reward #Low-Probability Regularization #Reasoning Sparks #Policy Entropy #KL Divergence #Mathematical Reasoning
October 10, 2025