[Paper Review] RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation. A detailed review of the paper 'RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation', published on arXiv. Tags: Review, Peer Review, Rebuttal, Actionable Feedback, Large Language Models (LLMs), Supervised Fine-tuning (SFT), Direct Preference Optimization (DPO), RMR-75K Dataset, Review Feedback Generation. March 11, 2026.
[Paper Review] Surgical Post-Training: Cutting Errors, Keeping Knowledge. A detailed review of the paper 'Surgical Post-Training: Cutting Errors, Keeping Knowledge', published on arXiv. Tags: Review, LLM Post-Training, Catastrophic Forgetting, Direct Preference Optimization (DPO), Reward-based Learning, Data Rectification, Binary Cross-Entropy, Reasoning Tasks, Knowledge Preservation. March 3, 2026.
[Paper Review] References Improve LLM Alignment in Non-Verifiable Domains. A detailed review of the paper 'References Improve LLM Alignment in Non-Verifiable Domains', published on arXiv. Tags: Review, LLM Alignment, Reference-Guided Evaluation, Self-Improvement, Non-Verifiable Domains, Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO). February 19, 2026.
[Paper Review] Unified Personalized Reward Model for Vision Generation. A detailed review of the paper 'Unified Personalized Reward Model for Vision Generation', published on arXiv. Tags: Review, Reward Model, Vision Generation, Personalized Learning, Context-Adaptive Reasoning, Direct Preference Optimization (DPO), Reinforcement Learning (RL), Multimodal Learning, Group Relative Policy Optimization (GRPO). February 3, 2026.
[Paper Review] DreaMontage: Arbitrary Frame-Guided One-Shot Video Generation. A detailed review of the paper 'DreaMontage: Arbitrary Frame-Guided One-Shot Video Generation', published on arXiv. Tags: Review, Video Generation, One-Shot Video, Diffusion Transformer (DiT), Frame-Guided Generation, Auto-Regressive Generation, Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO). December 24, 2025.
[Paper Review] Aligning Generative Music AI with Human Preferences: Methods and Challenges. A detailed review of the paper 'Aligning Generative Music AI with Human Preferences: Methods and Challenges', published on arXiv by Abhinaba Roy. Tags: Review, Generative Music AI, Preference Alignment, Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), Inference-Time Optimization, Music Generation, Human-Computer Interaction. November 19, 2025.
[Paper Review] Diffusion-SDPO: Safeguarded Direct Preference Optimization for Diffusion Models. A detailed review of the paper 'Diffusion-SDPO: Safeguarded Direct Preference Optimization for Diffusion Models', published on arXiv by Zhao Xu. Tags: Review, Diffusion Models, Direct Preference Optimization (DPO), Safeguarded Learning, Text-to-Image Generation, Preference Alignment, Generative Models, Stable Diffusion. November 10, 2025.
[Paper Review] Persuasion Dynamics in LLMs: Investigating Robustness and Adaptability in Knowledge and Safety with DuET-PD. A detailed review of the paper 'Persuasion Dynamics in LLMs: Investigating Robustness and Adaptability in Knowledge and Safety with DuET-PD', published on arXiv by Roy Ka-Wei Lee. Tags: Review, Persuasion Dynamics, Large Language Models (LLMs), Robustness, Gullibility, Receptiveness, Direct Preference Optimization (DPO), Safety Alignment, Multi-turn Dialogue. August 29, 2025.
[Paper Review] Temporal Self-Rewarding Language Models: Decoupling Chosen-Rejected via Past-Future. A detailed review of the paper 'Temporal Self-Rewarding Language Models: Decoupling Chosen-Rejected via Past-Future', published on arXiv by Qiufeng Wang. Tags: Review, Self-Rewarding LLMs, Direct Preference Optimization (DPO), Preference Learning, Generative AI, Gradient Collapse, LLM Alignment, Iterative Optimization. August 12, 2025.
[Paper Review] InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities. A detailed review of the paper 'InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities', published on arXiv by Zhijie Sang. Tags: Review, LLM Alignment, Reasoning, Data Curation, Supervised Fine-tuning (SFT), Direct Preference Optimization (DPO), Sample Efficiency, Scalability, Multi-dimensional Filtering. August 8, 2025.
[Paper Review] Are Today's LLMs Ready to Explain Well-Being Concepts? A detailed review of the paper 'Are Today's LLMs Ready to Explain Well-Being Concepts?', published on arXiv by Huan Liu. Tags: Review, Large Language Models, Well-being Concepts, LLM Evaluation, Principle-Guided Evaluation, LLM-as-a-Judge, Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), Explanation Generation. August 8, 2025.