[Paper Review] Visual-ERM: Reward Modeling for Visual Equivalence. A detailed review of this paper, posted on arXiv by yuhangzang. Tags: #Review · #Reward Modeling · #Vision-to-Code · #Reinforcement Learning · #Multimodal Generative Model · #Visual Equivalence · #Fine-grained Feedback · #Test-Time Scaling. March 15, 2026.
[Paper Review] Video-Based Reward Modeling for Computer-Use Agents. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reward Modeling · #Computer-Use Agents · #Execution Video · #Spatiotemporal Token Pruning · #Dataset · #Task Success. March 12, 2026.
[Paper Review] Trust Your Critic: Robust Reward Modeling and Reinforcement Learning for Faithful Image Editing and Generation. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #Reward Modeling · #Image Editing · #Image Generation · #MLLM · #Data Curation · #Fidelity · #Instruction Following. March 12, 2026.
[Paper Review] Geometry-Guided Reinforcement Learning for Multi-view Consistent 3D Scene Editing. A detailed review of this paper, posted on arXiv. Tags: #Review · #3D Scene Editing · #Reinforcement Learning · #Multi-view Consistency · #Diffusion Models · #Reward Modeling · #3D Gaussian Splatting · #FLUX-Kontext · #VGGT. March 10, 2026.
[Paper Review] Beyond Length Scaling: Synergizing Breadth and Depth for Generative Reward Models. A detailed review of this paper, posted on arXiv. Tags: #Review · #Generative Reward Models · #Chain-of-Thought · #Breadth-CoT · #Depth-CoT · #Reinforcement Learning · #Reward Modeling · #Mechanism Alignment. March 3, 2026.
[Paper Review] CharacterFlywheel: Scaling Iterative Improvement of Engaging and Steerable LLMs in Production. A detailed review of this paper, posted on arXiv. Tags: #Review · #LLM · #Social Chat · #Engagement Optimization · #Steerability · #Reinforcement Learning · #Reward Modeling · #A/B Testing · #Iterative Development. March 2, 2026.
[Paper Review] Enhancing Spatial Understanding in Image Generation via Reward Modeling. A detailed review of this paper, posted on arXiv. Tags: #Review · #Image Generation · #Reward Modeling · #Spatial Understanding · #Reinforcement Learning · #Visual Language Models · #Text-to-Image · #Preference Learning. March 1, 2026.
[Paper Review] TextPecker: Rewarding Structural Anomaly Quantification for Enhancing Visual Text Rendering. A detailed review of this paper, posted on arXiv by Hao Feng. Tags: #Review · #Visual Text Rendering · #Reinforcement Learning · #Structural Anomaly Perception · #Reward Modeling · #Text-to-Image Generation · #OCR · #MLLMs · #Data Augmentation. February 24, 2026.
[Paper Review] TOPReward: Token Probabilities as Hidden Zero-Shot Rewards for Robotics. A detailed review of this paper, posted on arXiv. Tags: #Review · #Robotics · #Reward Modeling · #Vision-Language Models · #Zero-Shot Learning · #Token Probabilities · #Progress Estimation · #Behavior Cloning · #Manipulation. February 23, 2026.
[Paper Review] Learning Query-Specific Rubrics from Human Preferences for DeepResearch Report Generation. A detailed review of this paper, posted on arXiv. Tags: #Review · #DeepResearch · #Rubric Generation · #Human Preferences · #Reinforcement Learning · #Multi-agent Systems · #LLM Evaluation · #Reward Modeling. February 3, 2026.
[Paper Review] RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #Large Language Models · #Agentic AI · #Reward Modeling · #Environment Adaptation · #Closed-loop Optimization · #Multimodal Agents. February 2, 2026.
[Paper Review] PISCES: Annotation-free Text-to-Video Post-Training via Optimal Transport-Aligned Rewards. A detailed review of this paper, posted on arXiv. Tags: #Review · #Text-to-Video Generation · #Post-Training · #Optimal Transport · #Reward Modeling · #Annotation-free · #Vision-Language Models · #Diffusion Models. February 2, 2026.
[Paper Review] Exploring Reasoning Reward Model for Agents. A detailed review of this paper, posted on arXiv by Zhixun Li. Tags: #Review · #Agentic Reinforcement Learning · #Reward Modeling · #Reasoning-aware Feedback · #Large Language Models (LLMs) · #Multi-modal Agents · #Fine-tuning · #Critique Generation. January 29, 2026.
[Paper Review] LSRIF: Logic-Structured Reinforcement Learning for Instruction Following. A detailed review of this paper, posted on arXiv. Tags: #Review · #Instruction Following · #Reinforcement Learning · #Logical Structures · #LLMs · #Reward Modeling · #Dataset Construction · #Attention Mechanism. January 15, 2026.
[Paper Review] ArenaRL: Scaling RL for Open-Ended Agents via Tournament-based Relative Ranking. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #LLM Agents · #Open-Ended Tasks · #Relative Ranking · #Tournament-based Ranking · #Discriminative Collapse · #Reward Modeling · #Benchmarks. January 13, 2026.
[Paper Review] ThinkRL-Edit: Thinking in Reinforcement Learning for Reasoning-Centric Image Editing. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #Image Editing · #Reasoning · #Chain-of-Thought · #Multimodal Generative Models · #Reward Modeling · #VLM. January 7, 2026.
[Paper Review] T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground. A detailed review of this paper, posted on arXiv. Tags: #Review · #Russian LLM · #Hybrid Reasoning · #Speculative Decoding · #Cyrillic Tokenizer · #Instruction Tuning · #Reward Modeling · #T-Math Benchmark. December 11, 2025.
[Paper Review] Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #Text-to-3D Generation · #Autoregressive Models · #Reward Modeling · #Hierarchical RL · #3D Benchmarking · #ShapeLLM-Omni. December 11, 2025.
[Paper Review] Self-Improving VLM Judges Without Human Annotations. A detailed review of this paper, posted on arXiv. Tags: #Review · #Vision-Language Models · #Self-Improvement · #Judge Models · #Synthetic Data Generation · #Iterative Refinement · #Reward Modeling · #Human-free Alignment. December 7, 2025.
[Paper Review] PRInTS: Reward Modeling for Long-Horizon Information Seeking. A detailed review of this paper, posted on arXiv by Elias Stengel-Eskin. Tags: #Review · #Reward Modeling · #Long-Horizon Tasks · #Information Seeking · #Large Language Models · #Trajectory Summarization · #Reinforcement Learning · #Tool Use · #Process Reward Models. November 24, 2025.
[Paper Review] FAPO: Flawed-Aware Policy Optimization for Efficient and Reliable Reasoning. A detailed review of this paper, posted on arXiv by Xin Liu. Tags: #Review · #Reinforcement Learning · #Large Language Models · #Reasoning · #Policy Optimization · #Reward Modeling · #Flawed Reasoning · #Reliable AI · #Error Detection. October 30, 2025.
[Paper Review] Omni-Reward: Towards Generalist Omni-Modal Reward Modeling with Free-Form Preferences. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reward Modeling · #Multimodal AI · #Human Preferences · #RLHF · #Generalist AI · #Benchmark · #Dataset · #Free-Form Preferences. October 28, 2025.
[Paper Review] Uniworld-V2: Reinforce Image Editing with Diffusion Negative-aware Finetuning and MLLM Implicit Feedback. A detailed review of this paper, posted on arXiv. Tags: #Review · #Image Editing · #Diffusion Models · #Reinforcement Learning · #MLLM · #Policy Optimization · #Finetuning · #Reward Modeling · #Human Alignment. October 21, 2025.
[Paper Review] LaSeR: Reinforcement Learning with Last-Token Self-Rewarding. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #LLM · #Self-Verification · #Last-Token · #Reward Modeling · #Efficiency · #Reasoning · #RLVR. October 17, 2025.
[Paper Review] Don't Waste Mistakes: Leveraging Negative RL-Groups via Confidence Reweighting. A detailed review of this paper, posted on arXiv by Julia Kempe. Tags: #Review · #Reinforcement Learning · #Large Language Models · #Reasoning Tasks · #GRPO · #Negative Samples · #Reward Modeling · #Confidence Reweighting · #Mathematical Reasoning. October 13, 2025.
[Paper Review] Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #Reward Modeling · #Large Language Models (LLMs) · #Mathematical Reasoning · #Sparse Rewards · #Dense Rewards · #Hybrid Reinforcement · #Verifier-based Rewards. October 10, 2025.
[Paper Review] Learning Human-Perceived Fakeness in AI-Generated Videos via Multimodal LLMs. A detailed review of this paper, posted on arXiv. Tags: #Review · #AI-Generated Videos · #Deepfake Detection · #Multimodal LLMs · #Human Perception · #Video Generation Evaluation · #Spatiotemporal Annotation · #Reward Modeling. October 1, 2025.
[Paper Review] EditScore: Unlocking Online RL for Image Editing via High-Fidelity Reward Modeling. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #Image Editing · #Reward Modeling · #Instruction-Guided Editing · #Online RL · #Visual Language Models · #Benchmark · #Self-Ensembling. September 30, 2025.
[Paper Review] SPARK: Synergistic Policy And Reward Co-Evolving Framework. A detailed review of this paper, posted on arXiv. Tags: #Review · #Reinforcement Learning · #LLMs · #LVLMs · #Reward Modeling · #Policy Optimization · #Self-Reflection · #Verifiable Rewards · #Co-evolution. September 29, 2025.
[Paper Review] Chasing the Tail: Effective Rubric-based Reward Modeling for Large Language Model Post-Training. A detailed review of this paper, posted on arXiv. Tags: #Review · #LLM · #Reinforcement Fine-tuning · #Reward Modeling · #Reward Over-optimization · #Rubric-based Rewards · #High-reward Tail · #Off-policy Data · #LLM Alignment. September 29, 2025.
[Paper Review] Reinforcement Learning on Pre-Training Data. A detailed review of this paper, posted on arXiv by Evander Yang. Tags: #Review · #Reinforcement Learning · #Pre-training · #Large Language Models · #Self-supervised Learning · #Scaling Laws · #Next-segment Reasoning · #Reward Modeling. September 24, 2025.
[Paper Review] A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning. A detailed review of this paper, posted on arXiv by Jiangmiao. Tags: #Review · #Robotics · #Reinforcement Learning (RL) · #Vision-Language-Action (VLA) Models · #Reward Modeling · #Human-in-the-Loop · #Dense Rewards · #Generalization · #Autoregressive Models. September 22, 2025.
[Paper Review] Language Self-Play For Data-Free Training. A detailed review of this paper, posted on arXiv by Vijai Mohan. Tags: #Review · #Large Language Models · #Reinforcement Learning · #Self-Play · #Data-Free Training · #Instruction Following · #Adversarial Training · #Reward Modeling. September 10, 2025.
[Paper Review] Improving Large Vision and Language Models by Learning from a Panel of Peers. A detailed review of this paper, posted on arXiv by Simon Jenni. Tags: #Review · #Large Vision and Language Models (LVLMs) · #Self-Improvement · #Peer Learning · #Preference Alignment · #Reward Modeling · #Multimodal Learning · #Knowledge Transfer. September 3, 2025.
[Paper Review] SSRL: Self-Search Reinforcement Learning. A detailed review of this paper, posted on arXiv by Yanxu Chen. Tags: #Review · #Reinforcement Learning · #Large Language Models · #Self-Search · #Sim-to-Real Transfer · #Agentic AI · #Knowledge Retrieval · #Reward Modeling. August 18, 2025.
[Paper Review] FantasyTalking2: Timestep-Layer Adaptive Preference Optimization for Audio-Driven Portrait Animation. A detailed review of this paper, posted on arXiv by Mu Xu. Tags: #Review · #Audio-Driven Animation · #Preference Optimization · #Diffusion Models · #Reward Modeling · #Human Feedback · #Multi-Objective Optimization · #Timestep-Layer Adaptive. August 18, 2025.
[Paper Review] Reinforcement Learning in Vision: A Survey. A detailed review of this paper, posted on arXiv by Qingwei Meng. Tags: #Review · #Reinforcement Learning (RL) · #Computer Vision (CV) · #Multimodal Large Language Models (MLLMs) · #Visual Generation · #Vision-Language-Action (VLA) Models · #Policy Optimization · #Reward Modeling. August 12, 2025.
[Paper Review] Beyond the Trade-off: Self-Supervised Reinforcement Learning for Reasoning Models' Instruction Following. A detailed review of this paper, posted on arXiv by Jiaqing Liang. Tags: #Review · #Self-Supervised RL · #Instruction Following · #Reasoning Models · #Large Language Models · #Reward Modeling · #Curriculum Learning. August 5, 2025.