[Paper Review] Video-CoE: Reinforcing Video Event Prediction via Chain of Events
A detailed review of the paper 'Video-CoE: Reinforcing Video Event Prediction via Chain of Events', posted on arXiv.
#Review#Video Event Prediction (VEP)#Multimodal Large Language Models (MLLMs)#Chain of Events (CoE)#Logical Reasoning#Visual Grounding#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)
March 18, 2026
[Paper Review] Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training
A detailed review of the paper 'Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training', posted on arXiv.
#Review#Financial LLMs#Data-Centric AI#Distillation#Chain-of-Thought (CoT)#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)#Difficulty-Aware Training#Data Quality
March 9, 2026
[Paper Review] Controllable Memory Usage: Balancing Anchoring and Innovation in Long-Term Human-Agent Interaction
A detailed review of the paper 'Controllable Memory Usage: Balancing Anchoring and Innovation in Long-Term Human-Agent Interaction', posted on arXiv by Zhengkang Guo.
#Review#Long-Term Human-Agent Interaction#Controllable Memory#Memory Anchoring#Large Language Models (LLMs)#Personalization#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)#Memory Dependence
January 12, 2026
[Paper Review] Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting
A detailed review of the paper 'Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting', posted on arXiv.
#Review#Supervised Fine-Tuning (SFT)#Catastrophic Forgetting#Entropy-Adaptive Fine-Tuning (EAFT)#Large Language Models (LLMs)#Domain Adaptation#Reinforcement Learning (RL)#Confident Conflicts
January 7, 2026
[Paper Review] Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling
A detailed review of the paper 'Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling', posted on arXiv.
#Review#Reasoning#Small Language Models (SLMs)#Hybrid Architecture#Test-Time Scaling (TTS)#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#DeepConf#Computational Efficiency
January 5, 2026
[Paper Review] DreaMontage: Arbitrary Frame-Guided One-Shot Video Generation
A detailed review of the paper 'DreaMontage: Arbitrary Frame-Guided One-Shot Video Generation', posted on arXiv.
#Review#Video Generation#One-Shot Video#Diffusion Transformer (DiT)#Frame-Guided Generation#Auto-Regressive Generation#Supervised Fine-Tuning (SFT)#Direct Preference Optimization (DPO)
December 24, 2025
[Paper Review] Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding
A detailed review of the paper 'Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding', posted on arXiv by Runtao Liu.
#Review#Multimodal Large Language Models (MLLMs)#Visual Degradation#Robustness#Reasoning Chains#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#Degradation-Aware Reasoning#Interpretability
December 21, 2025
[Paper Review] Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning
A detailed review of the paper 'Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning', posted on arXiv.
#Review#AI-Generated Video Detection#Multimodal Large Language Model (MLLM)#Artifact Reasoning#Explainable AI#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#Video Forensics
December 17, 2025
[Paper Review] Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection
A detailed review of the paper 'Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection', posted on arXiv.
#Review#Active Perception#Vision-Language Models (VLMs)#Embodied AI#View Selection#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)#Visual Question Answering (VQA)#3D Environments
December 15, 2025
[Paper Review] Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization
A detailed review of the paper 'Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization', posted on arXiv.
#Review#Chain-of-Thought (CoT)#Vision-Language Models (VLMs)#Visual Reasoning#Generalization#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#Grounding CoT#Maze Solving
December 2, 2025
[Paper Review] Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B
A detailed review of the paper 'Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B', posted on arXiv.
#Review#Small Language Models#Reasoning#Diversity Optimization#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#Spectrum-to-Signal Principle (SSP)#Mathematical Reasoning#Code Generation
November 11, 2025
[Paper Review] Value Drifts: Tracing Value Alignment During LLM Post-Training
A detailed review of the paper 'Value Drifts: Tracing Value Alignment During LLM Post-Training', posted on arXiv.
#Review#LLM Alignment#Value Drift#Supervised Fine-Tuning (SFT)#Preference Optimization#RLHF#Llama-3#Qwen-3#Human Values
November 9, 2025
[Paper Review] UI-Ins: Enhancing GUI Grounding with Multi-Perspective Instruction-as-Reasoning
A detailed review of the paper 'UI-Ins: Enhancing GUI Grounding with Multi-Perspective Instruction-as-Reasoning', posted on arXiv.
#Review#GUI Grounding#Natural Language Instructions#Multi-Perspective Reasoning#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#Policy Collapse Mitigation#GUI Agents
October 27, 2025
[Paper Review] Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense
A detailed review of the paper 'Distractor Injection Attacks on Large Reasoning Models: Characterization and Defense', posted on arXiv.
#Review#Large Reasoning Models (LRMs)#Prompt Injection#Adversarial Attack#Reasoning Distraction#Chain-of-Thought#Robustness#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)
October 21, 2025
[Paper Review] Apriel-1.5-15b-Thinker
A detailed review of the paper 'Apriel-1.5-15b-Thinker', posted on arXiv.
#Review#Multimodal Reasoning Model#Open-Weights Model#Continual Pretraining (CPT)#Supervised Fine-Tuning (SFT)#Training Design#Efficiency#Frontier Performance
October 6, 2025
[Paper Review] Thinking Sparks!: Emergent Attention Heads in Reasoning Models During Post Training
A detailed review of the paper 'Thinking Sparks!: Emergent Attention Heads in Reasoning Models During Post Training', posted on arXiv.
#Review#Mechanistic Interpretability#Attention Heads#Post-Training#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#Circuit Analysis#Reasoning Models#Transformer Architecture
October 1, 2025
[Paper Review] ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning
A detailed review of the paper 'ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning', posted on arXiv by Yu Li.
#Review#Mathematical Reasoning#Large Reasoning Models (LRMs)#Difficulty Scaling#Data Augmentation#Supervised Fine-Tuning (SFT)#Problem Generation#Solution Distillation
September 26, 2025
[Paper Review] Logics-Parsing Technical Report
A detailed review of the paper 'Logics-Parsing Technical Report', posted on arXiv by Fan Yang.
#Review#Document Parsing#Large Vision-Language Models (LVLM)#Reinforcement Learning (RL)#Layout Analysis#Reading Order#Supervised Fine-Tuning (SFT)#HTML Annotation#Benchmarking
September 25, 2025
[Paper Review] Analyzing the Effects of Supervised Fine-Tuning on Model Knowledge from Token and Parameter Levels
A detailed review of the paper 'Analyzing the Effects of Supervised Fine-Tuning on Model Knowledge from Token and Parameter Levels', posted on arXiv by Qi Zhang.
#Review#Supervised Fine-Tuning (SFT)#Large Language Models (LLMs)#Model Knowledge#Closed-Book Question Answering (CBQA)#Parameter Restoration#Kullback-Leibler Divergence#Knowledge Forgetting
September 23, 2025
[Paper Review] Improving Context Fidelity via Native Retrieval-Augmented Reasoning
A detailed review of the paper 'Improving Context Fidelity via Native Retrieval-Augmented Reasoning', posted on arXiv by Xiangru Tang.
#Review#Context Fidelity#Retrieval-Augmented Generation (RAG)#Large Language Models (LLMs)#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)#Hallucination#Question Answering#In-context Retrieval#Curriculum Learning
September 18, 2025
[Paper Review] Towards a Unified View of Large Language Model Post-Training
A detailed review of the paper 'Towards a Unified View of Large Language Model Post-Training', posted on arXiv by Hongyi Liu.
#Review#Large Language Models (LLMs)#Post-Training#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)#Policy Gradient#Unified Framework#Hybrid Algorithms#Bias-Variance Tradeoff
September 5, 2025
[Paper Review] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification
A detailed review of the paper 'On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification', posted on arXiv by Xinyu Ye.
#Review#Supervised Fine-Tuning (SFT)#Reinforcement Learning (RL)#Generalization#Reward Rectification#Dynamic Fine-Tuning (DFT)#LLM#Policy Gradient#Mathematical Reasoning
August 8, 2025
[Paper Review] Are Today's LLMs Ready to Explain Well-Being Concepts?
A detailed review of the paper 'Are Today's LLMs Ready to Explain Well-Being Concepts?', posted on arXiv by Huan Liu.
#Review#Large Language Models#Well-being Concepts#LLM Evaluation#Principle-Guided Evaluation#LLM-as-a-Judge#Supervised Fine-Tuning (SFT)#Direct Preference Optimization (DPO)#Explanation Generation
August 8, 2025