[Paper Review] RoboAlign: Learning Test-Time Reasoning for Language-Action Alignment in Vision-Language-Action Models. A detailed review of the paper 'RoboAlign: Learning Test-Time Reasoning for Language-Action Alignment in Vision-Language-Action Models', posted on arXiv. #Review #Vision-Language-Action Models (VLAs) #Multimodal Large Language Models (MLLMs) #Reinforcement Learning (RL) #Supervised Fine-tuning (SFT) #Embodied Reasoning #Low-level Actions #FAST tokenization #Robotics · March 23, 2026
[Paper Review] RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation. A detailed review of the paper 'RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation', posted on arXiv. #Review #Peer Review #Rebuttal #Actionable Feedback #Large Language Models (LLMs) #Supervised Fine-tuning (SFT) #Direct Preference Optimization (DPO) #RMR-75K Dataset #Review Feedback Generation · March 11, 2026
[Paper Review] On Data Engineering for Scaling LLM Terminal Capabilities. A detailed review of the paper 'On Data Engineering for Scaling LLM Terminal Capabilities', posted on arXiv. #Review #LLM #Terminal Agents #Data Engineering #Synthetic Data Generation #Supervised Fine-tuning (SFT) #Terminal-Bench #Nemotron-Terminal #Dataset Adapters · February 24, 2026
[Paper Review] Data Repetition Beats Data Scaling in Long-CoT Supervised Fine-Tuning. A detailed review of the paper 'Data Repetition Beats Data Scaling in Long-CoT Supervised Fine-Tuning', posted on arXiv by Yuki M. Asano. #Review #Supervised Fine-tuning (SFT) #Chain-of-Thought (CoT) #Data Repetition #Data Scaling #LLM Training #Generalization #Overfitting #Reasoning Models · February 11, 2026
[Paper Review] X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests. A detailed review of the paper 'X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests', posted on arXiv by Jane Luo. #Review #Competitive Programming #Code LLMs #Synthetic Data Generation #Supervised Fine-tuning (SFT) #Reinforcement Learning (RL) #Dual Verification #Scaling Laws #SynthSmith · January 12, 2026
[Paper Review] SWE-Lego: Pushing the Limits of Supervised Fine-tuning for Software Issue Resolving. A detailed review of the paper 'SWE-Lego: Pushing the Limits of Supervised Fine-tuning for Software Issue Resolving', posted on arXiv. #Review #Software Engineering #Issue Resolution #Supervised Fine-tuning (SFT) #Large Language Models (LLMs) #Hybrid Dataset #Error Masking #Curriculum Learning #Test-Time Scaling (TTS) #Generative Verifiers · January 5, 2026
[Paper Review] Skywork-R1V4: Toward Agentic Multimodal Intelligence through Interleaved Thinking with Images and DeepResearch. A detailed review of the paper 'Skywork-R1V4: Toward Agentic Multimodal Intelligence through Interleaved Thinking with Images and DeepResearch', posted on arXiv. #Review #Multimodal AI #Agentic Models #Interleaved Reasoning #Image Manipulation #DeepSearch #Supervised Fine-tuning (SFT) #Tool-Augmented LLM · December 2, 2025
[Paper Review] Monet: Reasoning in Latent Visual Space Beyond Images and Language. A detailed review of the paper 'Monet: Reasoning in Latent Visual Space Beyond Images and Language', posted on arXiv by Pengfei Wan. #Review #Latent Visual Reasoning #Multimodal Large Language Models (MLLMs) #Supervised Fine-tuning (SFT) #Reinforcement Learning (RL) #Visual-latent Policy Optimization (VLPO) #Chain-of-Thought (CoT) #Abstract Visual Thinking · November 26, 2025
[Paper Review] Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation. A detailed review of the paper 'Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation', posted on arXiv by Xinyan Chen. #Review #Visual Generation #Textual Reasoning #Interleaving #Large Multimodal Models (LMMs) #Chain-of-Thought (CoT) #Zero-shot Learning #Supervised Fine-tuning (SFT) #Reinforcement Learning (RL) · November 20, 2025
[Paper Review] ReFIne: A Framework for Trustworthy Large Reasoning Models with Reliability, Faithfulness, and Interpretability. A detailed review of the paper 'ReFIne: A Framework for Trustworthy Large Reasoning Models with Reliability, Faithfulness, and Interpretability', posted on arXiv by Tsui-Wei Weng. #Review #Trustworthy AI #Large Reasoning Models (LRMs) #Interpretability #Faithfulness #Reliability #Chain-of-Thought (CoT) #Supervised Fine-tuning (SFT) #GRPO · October 15, 2025
[Paper Review] ERA: Transforming VLMs into Embodied Agents via Embodied Prior Learning and Online Reinforcement Learning. A detailed review of the paper 'ERA: Transforming VLMs into Embodied Agents via Embodied Prior Learning and Online Reinforcement Learning', posted on arXiv. #Review #Embodied AI #Vision Language Models (VLMs) #Reinforcement Learning (RL) #Prior Learning #Supervised Fine-tuning (SFT) #Embodied Agents · October 15, 2025
[Paper Review] First Try Matters: Revisiting the Role of Reflection in Reasoning Models. A detailed review of the paper 'First Try Matters: Revisiting the Role of Reflection in Reasoning Models', posted on arXiv by Wee Sun Lee. #Review #Large Language Models (LLMs) #Reasoning #Chain-of-Thought (CoT) #Reflection #Early Stopping #Supervised Fine-tuning (SFT) #Token Efficiency #Mathematical Reasoning · October 10, 2025
[Paper Review] Pushing on Multilingual Reasoning Models with Language-Mixed Chain-of-Thought. A detailed review of the paper 'Pushing on Multilingual Reasoning Models with Language-Mixed Chain-of-Thought', posted on arXiv. #Review #Multilingual Reasoning #Chain-of-Thought (CoT) #Language-Mixed CoT #Instruction Tuning #Korean LLMs #Data Curation #Supervised Fine-tuning (SFT) · October 9, 2025
[Paper Review] Video-LMM Post-Training: A Deep Dive into Video Reasoning with Large Multimodal Models. A detailed review of the paper 'Video-LMM Post-Training: A Deep Dive into Video Reasoning with Large Multimodal Models', posted on arXiv by zeliang0426. #Review #Video Reasoning #Large Multimodal Models (LMMs) #Post-training #Supervised Fine-tuning (SFT) #Reinforcement Learning (RL) #Test-Time Scaling (TTS) #Chain-of-Thought (CoT) · October 7, 2025
[Paper Review] A Practitioner's Guide to Multi-turn Agentic Reinforcement Learning. A detailed review of the paper 'A Practitioner's Guide to Multi-turn Agentic Reinforcement Learning', posted on arXiv. #Review #Multi-turn Reinforcement Learning #LLM Agents #Text-based Environments #Reward Shaping #Policy Optimization #Supervised Fine-tuning (SFT) #Generalization #Environment Complexity · October 6, 2025
[Paper Review] Beyond Log Likelihood: Probability-Based Objectives for Supervised Fine-Tuning across the Model Capability Continuum. A detailed review of the paper 'Beyond Log Likelihood: Probability-Based Objectives for Supervised Fine-Tuning across the Model Capability Continuum', posted on arXiv by Hanghang Tong. #Review #Supervised Fine-tuning (SFT) #Large Language Models (LLMs) #Training Objectives #Negative Log Likelihood (NLL) #Model Capability Continuum #Generalization #Probability-based Loss Functions · October 2, 2025
[Paper Review] WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents. A detailed review of the paper 'WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents', posted on arXiv by Aili Chen. #Review #Web Agents #Long-Horizon Reasoning #Large Language Models (LLMs) #Data Generation #Reinforcement Learning (RL) #Supervised Fine-tuning (SFT) #Web Navigation #Information Retrieval · September 9, 2025
[Paper Review] Beyond Solving Math Quiz: Evaluating the Ability of Large Reasoning Models to Ask for Information. A detailed review of the paper 'Beyond Solving Math Quiz: Evaluating the Ability of Large Reasoning Models to Ask for Information', posted on arXiv by Xi Yang. #Review #Large Reasoning Models (LRMs) #Information Seeking #Incomplete Problems #Mathematical Reasoning #Supervised Fine-tuning (SFT) #Overthinking #Hallucination #CRITIC-math · August 19, 2025
[Paper Review] InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities. A detailed review of the paper 'InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities', posted on arXiv by Zhijie Sang. #Review #LLM Alignment #Reasoning #Data Curation #Supervised Fine-tuning (SFT) #Direct Preference Optimization (DPO) #Sample Efficiency #Scalability #Multi-dimensional Filtering · August 8, 2025
[Paper Review] Tool-integrated Reinforcement Learning for Repo Deep Search. A detailed review of the paper 'Tool-integrated Reinforcement Learning for Repo Deep Search', posted on arXiv by Yanzhen Zou. #Review #Issue Localization #Large Language Models (LLMs) #Reinforcement Learning (RL) #Supervised Fine-tuning (SFT) #Tool-integrated Agents #Software Engineering #Code Search · August 6, 2025