[Paper Review] DenseGRPO: From Sparse to Dense Reward for Flow Matching Model Alignment
A detailed review of the paper "DenseGRPO: From Sparse to Dense Reward for Flow Matching Model Alignment," posted on arXiv.
#Review #Reinforcement Learning #Flow Matching Models #Dense Reward #Sparse Reward Problem #Preference Alignment #SDE Sampler #GRPO #Text-to-Image Generation
February 1, 2026
[Paper Review] OmniX: From Unified Panoramic Generation and Perception to Graphics-Ready 3D Scenes
A detailed review of the paper "OmniX: From Unified Panoramic Generation and Perception to Graphics-Ready 3D Scenes," posted on arXiv.
#Review #Panoramic Generation #Panoramic Perception #3D Scene Reconstruction #Graphics-Ready Scenes #Physically Based Rendering (PBR) #Flow Matching Models #Cross-Modal Adapters #Synthetic Dataset (PanoX)
October 31, 2025
[Paper Review] Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference
A detailed review of the paper "Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference," posted on arXiv by Yingfang Zhang.
#Review #Diffusion Models #Reinforcement Learning #Human Preference #Text-to-Image Generation #Reward Hacking #Direct-Align #SRPO #Fine-Grained Control #Flow Matching Models
September 10, 2025