[Paper Review] UniGRPO: Unified Policy Optimization for Reasoning-Driven Visual Generation
A detailed review of the paper 'UniGRPO: Unified Policy Optimization for Reasoning-Driven Visual Generation', posted on arXiv.
Tags: Review, Unified Policy Optimization, Reinforcement Learning, Reasoning-Driven Generation, Interleaved Generation, Flow Matching, Markov Decision Process, Classifier-Free Guidance, Reward Hacking
March 24, 2026

[Paper Review] Cooperation and Exploitation in LLM Policy Synthesis for Sequential Social Dilemmas
A detailed review of the paper 'Cooperation and Exploitation in LLM Policy Synthesis for Sequential Social Dilemmas', posted on arXiv by vicgalle.
Tags: Review, LLM Policy Synthesis, Sequential Social Dilemmas (SSDs), Feedback Engineering, Multi-agent Environments, Cooperation, Reward Hacking, Programmatic Policies
March 22, 2026

[Paper Review] Astrolabe: Steering Forward-Process Reinforcement Learning for Distilled Autoregressive Video Models
A detailed review of the paper 'Astrolabe: Steering Forward-Process Reinforcement Learning for Distilled Autoregressive Video Models', posted on arXiv by Jie Huang.
Tags: Review, Video Generation, Distilled Autoregressive Models, Reinforcement Learning (RL), Human Preferences, Streaming Generation, Forward-Process RL, Reward Hacking, Temporal Consistency
March 22, 2026

[Paper Review] Dr. Kernel: Reinforcement Learning Done Right for Triton Kernel Generations
A detailed review of the paper 'Dr. Kernel: Reinforcement Learning Done Right for Triton Kernel Generations', posted on arXiv.
Tags: Review, Reinforcement Learning, Kernel Generation, Triton, GPU Optimization, LLMs, Reward Hacking, Multi-turn Interaction, Code Generation
February 5, 2026

[Paper Review] GARDO: Reinforcing Diffusion Models without Reward Hacking
A detailed review of the paper 'GARDO: Reinforcing Diffusion Models without Reward Hacking', posted on arXiv by Zhiyong Wang.
Tags: Review, Diffusion Models, Reinforcement Learning, Reward Hacking, KL Regularization, Adaptive Regularization, Diversity Optimization, Text-to-Image Generation
January 5, 2026

[Paper Review] Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models
A detailed review of the paper 'Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models', posted on arXiv.
Tags: Review, Vision-Language Models (VLMs), Adversarial Attack, Jailbreaking, Reward Hacking, Content Moderation Bypass, Cross-Model Transferability, Safety Vulnerabilities
November 23, 2025

[Paper Review] ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases
A detailed review of the paper 'ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases', posted on arXiv by Nicholas Carlini.
Tags: Review, LLM Evaluation, Reward Hacking, Benchmark Reliability, Test Exploitation, Prompt Engineering, LLM Safety, Code Generation
October 24, 2025

[Paper Review] Towards Faithful and Controllable Personalization via Critique-Post-Edit Reinforcement Learning
A detailed review of the paper 'Towards Faithful and Controllable Personalization via Critique-Post-Edit Reinforcement Learning', posted on arXiv by Yuchen Eleanor Jiang.
Tags: Review, LLM Personalization, Reinforcement Learning, Generative Reward Model, Critique-Post-Edit, Reward Hacking, Controllable AI
October 22, 2025

[Paper Review] Benefits and Pitfalls of Reinforcement Learning for Language Model Planning: A Theoretical Perspective
A detailed review of the paper 'Benefits and Pitfalls of Reinforcement Learning for Language Model Planning: A Theoretical Perspective', posted on arXiv.
Tags: Review, Reinforcement Learning, Large Language Models, Planning, Policy Gradient, Q-learning, Supervised Fine-Tuning, Diversity Collapse, Reward Hacking
October 1, 2025

[Paper Review] MOSS-ChatV: Reinforcement Learning with Process Reasoning Reward for Video Temporal Reasoning
A detailed review of the paper 'MOSS-ChatV: Reinforcement Learning with Process Reasoning Reward for Video Temporal Reasoning', posted on arXiv by Junyan Zhang.
Tags: Review, Video Temporal Reasoning, Reinforcement Learning, Process Supervision, Dynamic Time Warping, Multimodal Large Language Models, Video State Prediction, Reward Hacking
September 26, 2025

[Paper Review] RewardDance: Reward Scaling in Visual Generation
A detailed review of the paper 'RewardDance: Reward Scaling in Visual Generation', posted on arXiv by Liang Li.
Tags: Review, Reward Model, Visual Generation, RLHF, VLM, Reward Scaling, Reward Hacking, Generative Paradigm, Context Scaling, Text-to-Image, Text-to-Video
September 11, 2025

[Paper Review] Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference
A detailed review of the paper 'Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference', posted on arXiv by Yingfang Zhang.
Tags: Review, Diffusion Models, Reinforcement Learning, Human Preference, Text-to-Image Generation, Reward Hacking, Direct-Align, SRPO, Fine-Grained Control, Flow Matching Models
September 10, 2025

[Paper Review] Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning
A detailed review of the paper 'Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning', posted on arXiv by Jiazi Bu.
Tags: Review, Reinforcement Learning, Text-to-Image Generation, GRPO, Reward Hacking, Pairwise Preference, Reward Model, Stable Optimization, UniGenBench
August 29, 2025

[Paper Review] Cooper: Co-Optimizing Policy and Reward Models in Reinforcement Learning for Large Language Models
A detailed review of the paper 'Cooper: Co-Optimizing Policy and Reward Models in Reinforcement Learning for Large Language Models', posted on arXiv by Guiyang Hou.
Tags: Review, Reinforcement Learning, Large Language Models, Reward Model, Policy Optimization, Reward Hacking, Hybrid Annotation, Mathematical Reasoning, Verifiable Rewards
August 14, 2025

[Paper Review] IFDECORATOR: Wrapping Instruction Following Reinforcement Learning with Verifiable Rewards
A detailed review of the paper 'IFDECORATOR: Wrapping Instruction Following Reinforcement Learning with Verifiable Rewards', posted on arXiv by Ling-I Wu.
Tags: Review, Instruction Following, Reinforcement Learning, Reward Hacking, LLMs, Curriculum Learning, Data Flywheel, Verifiable Rewards
August 7, 2025