[Paper Review] From Sparse to Dense: Multi-View GRPO for Flow Models via Augmented Condition Space
A detailed review of 'From Sparse to Dense: Multi-View GRPO for Flow Models via Augmented Condition Space', posted on arXiv by lindahua.
Tags: Review, Reinforcement Learning, GRPO, Diffusion Models, Flow Models, Preference Alignment, Condition Enhancement, Multi-View Learning
March 15, 2026

[Paper Review] DenseGRPO: From Sparse to Dense Reward for Flow Matching Model Alignment
A detailed review of 'DenseGRPO: From Sparse to Dense Reward for Flow Matching Model Alignment', posted on arXiv.
Tags: Review, Reinforcement Learning, Flow Matching Models, Dense Reward, Sparse Reward Problem, Preference Alignment, SDE Sampler, GRPO, Text-to-Image Generation
February 1, 2026

[Paper Review] VIBE: Visual Instruction Based Editor
A detailed review of 'VIBE: Visual Instruction Based Editor', posted on arXiv by Bulat Suleimanov.
Tags: Review, Instruction-Based Image Editing, Diffusion Models, Vision-Language Models (VLM), Model Efficiency, Multi-stage Training, Preference Alignment, Source Consistency
January 15, 2026

[Paper Review] Are LLMs Vulnerable to Preference-Undermining Attacks (PUA)? A Factorial Analysis Methodology for Diagnosing the Trade-off between Preference Alignment and Real-World Validity
A detailed review of 'Are LLMs Vulnerable to Preference-Undermining Attacks (PUA)? A Factorial Analysis Methodology for Diagnosing the Trade-off between Preference Alignment and Real-World Validity', posted on arXiv by Chi Zhang.
Tags: Review, Large Language Models, Preference Alignment, Preference-Undermining Attacks, Factorial Analysis, Sycophancy, Prompt Engineering, Truth-Deference Trade-off
January 14, 2026

[Paper Review] Aligning Generative Music AI with Human Preferences: Methods and Challenges
A detailed review of 'Aligning Generative Music AI with Human Preferences: Methods and Challenges', posted on arXiv by Abhinaba Roy.
Tags: Review, Generative Music AI, Preference Alignment, Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), Inference-Time Optimization, Music Generation, Human-Computer Interaction
November 19, 2025

[Paper Review] Diffusion-SDPO: Safeguarded Direct Preference Optimization for Diffusion Models
A detailed review of 'Diffusion-SDPO: Safeguarded Direct Preference Optimization for Diffusion Models', posted on arXiv by Zhao Xu.
Tags: Review, Diffusion Models, Direct Preference Optimization (DPO), Safeguarded Learning, Text-to-Image Generation, Preference Alignment, Generative Models, Stable Diffusion
November 10, 2025

[Paper Review] RL makes MLLMs see better than SFT
A detailed review of 'RL makes MLLMs see better than SFT', posted on arXiv.
Tags: Review, Multimodal Language Models, Reinforcement Learning, Supervised Finetuning, Vision Encoder, Visual Representations, Direct Preference Optimization, Preference Alignment, PIVOT
October 21, 2025

[Paper Review] Margin Adaptive DPO: Leveraging Reward Model for Granular Control in Preference Optimization
A detailed review of 'Margin Adaptive DPO: Leveraging Reward Model for Granular Control in Preference Optimization', posted on arXiv by sirano1004.
Tags: Review, Direct Preference Optimization, Preference Alignment, Adaptive Regularization, Reward Model, Large Language Models, Sentiment Generation
October 8, 2025

[Paper Review] A Contextual Quality Reward Model for Reliable and Efficient Best-of-N Sampling
A detailed review of 'A Contextual Quality Reward Model for Reliable and Efficient Best-of-N Sampling', posted on arXiv by sirano1004.
Tags: Review, Reward Model, Best-of-N Sampling, Preference Alignment, Contextual Acceptability, Discrete Choice Model, Alignment Guardrail, Inference Accelerator
October 8, 2025

[Paper Review] Improving Large Vision and Language Models by Learning from a Panel of Peers
A detailed review of 'Improving Large Vision and Language Models by Learning from a Panel of Peers', posted on arXiv by Simon Jenni.
Tags: Review, Large Vision and Language Models (LVLMs), Self-Improvement, Peer Learning, Preference Alignment, Reward Modeling, Multimodal Learning, Knowledge Transfer
September 3, 2025

[Paper Review] MotionFlux: Efficient Text-Guided Motion Generation through Rectified Flow Matching and Preference Alignment
A detailed review of 'MotionFlux: Efficient Text-Guided Motion Generation through Rectified Flow Matching and Preference Alignment', posted on arXiv by An-An Liu.
Tags: Review, Text-Guided Motion Generation, Rectified Flow Matching, Preference Alignment, Human Motion Synthesis, Real-time AI, Transformer Architecture, Self-supervised Learning
August 28, 2025