[Paper Review] Embarrassingly Simple Self-Distillation Improves Code Generation
A detailed review of the arXiv paper by Ronan Collobert.
#Review #Self-Distillation #Code Generation #Large Language Models #Precision-Exploration Conflict #Supervised Fine-Tuning #Temperature Scaling #Truncation
April 1, 2026
[Paper Review] Density-aware Soft Context Compression with Semi-Dynamic Compression Ratio
A detailed review of the arXiv paper by Ji Pei.
#Review #Soft Context Compression #Large Language Models #Density-aware #Discrete Ratio Selector #Supervised Fine-Tuning #Mean-Pooling
March 30, 2026
[Paper Review] DLLM-Searcher: Adapting Diffusion Large Language Model for Search Agents
A detailed review of the arXiv paper.
#Review #Diffusion Large Language Models #Search Agents #Latency Reduction #P-ReAct #Agentic Post-training #Supervised Fine-Tuning #Preference Optimization #Parallel Decoding
February 10, 2026
[Paper Review] ProAct: Agentic Lookahead in Interactive Environments
A detailed review of the arXiv paper.
#Review #Agentic AI #Large Language Models #Reinforcement Learning #Lookahead Reasoning #Monte-Carlo Tree Search #Supervised Fine-Tuning #Value Estimation #Simulation Drift
February 5, 2026
[Paper Review] SWE-Master: Unleashing the Potential of Software Engineering Agents via Post-Training
A detailed review of the arXiv paper.
#Review #Software Engineering Agents #Post-Training #Supervised Fine-Tuning #Reinforcement Learning #Language Server Protocol #SWE-bench #Code Navigation #LLM
February 3, 2026
[Paper Review] Llama-3.1-FoundationAI-SecurityLLM-Reasoning-8B Technical Report
A detailed review of the arXiv paper.
#Review #Cybersecurity LLM #Reasoning Model #Supervised Fine-Tuning #Reinforcement Learning #Verifiable Rewards #8B Parameters #Open-Source AI
January 29, 2026
[Paper Review] Language-based Trial and Error Falls Behind in the Era of Experience
A detailed review of the arXiv paper.
#Review #Large Language Models #Reinforcement Learning #Exploration Efficiency #Sub-Scale Collaboration #Out-of-Distribution Tasks #Agentic AI #Supervised Fine-Tuning
January 29, 2026
[Paper Review] Knowledge is Not Enough: Injecting RL Skills for Continual Adaptation
A detailed review of the arXiv paper.
#Review #LLMs #Continual Adaptation #Reinforcement Learning #Supervised Fine-Tuning #Skill Transfer #Task Arithmetic #Tool Use
January 25, 2026
[Paper Review] Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning
A detailed review of the arXiv paper.
#Review #Mixture-of-Experts #Mamba-Transformer #Agentic Reasoning #Long Context LLM #FP8 Quantization #Supervised Fine-Tuning #Reinforcement Learning
December 24, 2025
[Paper Review] SkillFactory: Self-Distillation For Learning Cognitive Behaviors
A detailed review of the arXiv paper by Manya Wadhwa.
#Review #Self-Distillation #Cognitive Skills #Reinforcement Learning #Supervised Fine-Tuning #Language Models #Reasoning #Verification #Retrying
December 3, 2025
[Paper Review] C^2DLM: Causal Concept-Guided Diffusion Large Language Models
A detailed review of the arXiv paper by Xinpeng Dong.
#Review #Diffusion Models #Large Language Models #Causality #Attention Mechanism #Reasoning #Natural Language Generation #Supervised Fine-Tuning #Concept-Guided
December 2, 2025
[Paper Review] Revisiting Generalization Across Difficulty Levels: It's Not So Easy
A detailed review of the arXiv paper.
#Review #LLM Generalization #Task Difficulty #Item Response Theory #Cross-Difficulty #Data Curation #Model Evaluation #Supervised Fine-Tuning
November 26, 2025
[Paper Review] WebVIA: A Web-based Vision-Language Agentic Framework for Interactive and Verifiable UI-to-Code Generation
A detailed review of the arXiv paper.
#Review #UI-to-Code #Vision-Language Models #Agentic Framework #Interactive UI #Web Automation #Code Generation #UI Verification #Supervised Fine-Tuning
November 12, 2025
[Paper Review] RedOne 2.0: Rethinking Domain-specific LLM Post-Training in Social Networking Services
A detailed review of the arXiv paper by Zijie Meng.
#Review #LLM Post-Training #Domain Adaptation #Social Networking Services #Reinforcement Learning #Supervised Fine-Tuning #Catastrophic Forgetting #Data Efficiency
November 10, 2025
[Paper Review] CritiCal: Can Critique Help LLM Uncertainty or Confidence Calibration?
A detailed review of the arXiv paper by Baixuan Xu.
#Review #LLM Calibration #Confidence Calibration #Uncertainty Estimation #Critique Learning #Supervised Fine-Tuning #Natural Language Processing #Self-Critique
November 9, 2025
[Paper Review] Information-Preserving Reformulation of Reasoning Traces for Antidistillation
A detailed review of the arXiv paper.
#Review #Antidistillation #Reasoning Traces #Large Language Models #Knowledge Distillation #Information Preservation #Trace Reformulation #Supervised Fine-Tuning
October 15, 2025
[Paper Review] Watch and Learn: Learning to Use Computers from Online Videos
A detailed review of the arXiv paper by Oriana Riva.
#Review #Computer Use Agents #Inverse Dynamics Model #UI Trajectories #Web Videos #In-Context Learning #Supervised Fine-Tuning #Large Language Models #OSWorld Benchmark
October 7, 2025
[Paper Review] Benefits and Pitfalls of Reinforcement Learning for Language Model Planning: A Theoretical Perspective
A detailed review of the arXiv paper.
#Review #Reinforcement Learning #Large Language Models #Planning #Policy Gradient #Q-learning #Supervised Fine-Tuning #Diversity Collapse #Reward Hacking
October 1, 2025
[Paper Review] PromptCoT 2.0: Scaling Prompt Synthesis for Large Language Model Reasoning
A detailed review of the arXiv paper by Lingpeng Kong.
#Review #Prompt Synthesis #Large Language Models #Reasoning #Expectation-Maximization #Self-Play #Supervised Fine-Tuning #Task Generation #Rationale Generation
September 29, 2025
[Paper Review] VaseVQA: Multimodal Agent and Benchmark for Ancient Greek Pottery
A detailed review of the arXiv paper by Shiya Huang.
#Review #Multimodal Large Language Models #Visual Question Answering #Reinforcement Learning #Cultural Heritage #Ancient Greek Pottery #Supervised Fine-Tuning #Benchmark
September 23, 2025
[Paper Review] WebWeaver: Structuring Web-Scale Evidence with Dynamic Outlines for Open-Ended Deep Research
A detailed review of the arXiv paper by Houquan Zhou.
#Review #Open-Ended Deep Research #LLM Agents #Dynamic Outline #Evidence Acquisition #Hierarchical Writing #Memory Bank #State-of-the-Art #Supervised Fine-Tuning
September 17, 2025
[Paper Review] WebSailor-V2: Bridging the Chasm to Proprietary Agents via Synthetic Data and Scalable Reinforcement Learning
A detailed review of the arXiv paper by Huifeng Yin.
#Review #Web Agents #Reinforcement Learning #Synthetic Data #Knowledge Graphs #LLMs #Supervised Fine-Tuning #Sim-to-Real Transfer #Agentic AI
September 17, 2025
[Paper Review] SearchInstruct: Enhancing Domain Adaptation via Retrieval-Based Instruction Dataset Creation
A detailed review of the arXiv paper by Heshaam Faili.
#Review #LLM #Instruction Tuning #Domain Adaptation #Retrieval-Augmented Generation #Dataset Creation #Model Editing #Supervised Fine-Tuning
September 16, 2025
[Paper Review] Inverse IFEval: Can LLMs Unlearn Stubborn Training Conventions to Follow Real Instructions?
A detailed review of the arXiv paper by Yu Fu.
#Review #LLMs #Instruction Following #Benchmark #Cognitive Inertia #Out-of-Distribution #Supervised Fine-Tuning #Evaluation #Robustness
September 5, 2025
[Paper Review] On-Policy RL Meets Off-Policy Experts: Harmonizing Supervised Fine-Tuning and Reinforcement Learning via Dynamic Weighting
A detailed review of the arXiv paper by Guoyin Wang.
#Review #Large Language Models #Reinforcement Learning #Supervised Fine-Tuning #On-Policy RL #Off-Policy Experts #Dynamic Weighting #LLM Alignment #Reasoning
August 21, 2025
[Paper Review] Thyme: Think Beyond Images
A detailed review of the arXiv paper by Wei Chen.
#Review #Multimodal LLMs #Code Generation #Image Processing #Reinforcement Learning #Supervised Fine-Tuning #Visual Reasoning #Sandbox
August 18, 2025
[Paper Review] Aryabhata: An exam-focused language model for JEE Math
A detailed review of the arXiv paper by Sandeep Varma.
#Review #Language Model #Math Reasoning #JEE #Supervised Fine-Tuning #Reinforcement Learning #Model Merging #Chain-of-Thought #Curriculum Learning
August 13, 2025
[Paper Review] Reasoning Language Models for Root Cause Analysis in 5G Wireless Networks
A detailed review of the arXiv paper by Haozhe Zhang.
#Review #Root Cause Analysis #Large Language Models #5G Wireless Networks #Supervised Fine-Tuning #Reinforcement Learning #Chain-of-Thought #TeleLogs Dataset
August 7, 2025