[Paper Review] MUSE: A Run-Centric Platform for Multimodal Unified Safety Evaluation of Large Language Models
A detailed review of the paper 'MUSE: A Run-Centric Platform for Multimodal Unified Safety Evaluation of Large Language Models', posted on arXiv by Yiran Chen.
#Review #Multimodal LLMs #Safety Evaluation #Red Teaming #Adversarial Attacks #Modality Switching #LLM Alignment #Compliance #ASR
March 4, 2026
[Paper Review] References Improve LLM Alignment in Non-Verifiable Domains
A detailed review of the paper 'References Improve LLM Alignment in Non-Verifiable Domains', posted on arXiv.
#Review #LLM Alignment #Reference-Guided Evaluation #Self-Improvement #Non-Verifiable Domains #Reinforcement Learning from Human Feedback (RLHF) #Direct Preference Optimization (DPO)
February 19, 2026
[Paper Review] ClinAlign: Scaling Healthcare Alignment from Clinician Preference
A detailed review of the paper 'ClinAlign: Scaling Healthcare Alignment from Clinician Preference', posted on arXiv by Chaohe Zhang.
#Review #Healthcare AI #LLM Alignment #Clinician Preference #Rubric-based RLHF #Medical LLMs #Data Curation #HealthBench #Principle-based Supervision
February 17, 2026
[Paper Review] SLIME: Stabilized Likelihood Implicit Margin Enforcement for Preference Optimization
A detailed review of the paper 'SLIME: Stabilized Likelihood Implicit Margin Enforcement for Preference Optimization', posted on arXiv.
#Review #Preference Optimization #LLM Alignment #Direct Preference Optimization #Reference-Free #Likelihood Anchoring #Token Stabilization #Dual-Margin Loss #Unlearning
February 2, 2026
[Paper Review] MOA: Multi-Objective Alignment for Role-Playing Agents
A detailed review of the paper 'MOA: Multi-Objective Alignment for Role-Playing Agents', posted on arXiv by Yongbin Li.
#Review #Role-Playing Agents #Multi-Objective Reinforcement Learning #LLM Alignment #Persona Consistency #Dialogue Generation #Reward Shaping #Off-Policy Guidance
December 11, 2025
[Paper Review] SR-GRPO: Stable Rank as an Intrinsic Geometric Reward for Large Language Model Alignment
A detailed review of the paper 'SR-GRPO: Stable Rank as an Intrinsic Geometric Reward for Large Language Model Alignment', posted on arXiv by Yi Yang.
#Review #LLM Alignment #Stable Rank #Intrinsic Reward #Reinforcement Learning #Geometric Properties #Group Relative Policy Optimization #Annotation-Free Alignment
December 3, 2025
[Paper Review] Value Drifts: Tracing Value Alignment During LLM Post-Training
A detailed review of the paper 'Value Drifts: Tracing Value Alignment During LLM Post-Training', posted on arXiv.
#Review #LLM Alignment #Value Drift #Supervised Fine-Tuning (SFT) #Preference Optimization #RLHF #Llama-3 #Qwen-3 #Human Values
November 9, 2025
[Paper Review] Every Question Has Its Own Value: Reinforcement Learning with Explicit Human Values
A detailed review of the paper 'Every Question Has Its Own Value: Reinforcement Learning with Explicit Human Values', posted on arXiv.
#Review #Reinforcement Learning #LLM Alignment #Human Values #Reward Shaping #Value-Weighted Reward #Termination Policy #RLVR
October 24, 2025
[Paper Review] GTAlign: Game-Theoretic Alignment of LLM Assistants for Mutual Welfare
A detailed review of the paper 'GTAlign: Game-Theoretic Alignment of LLM Assistants for Mutual Welfare', posted on arXiv.
#Review #Large Language Models #LLM Alignment #Game Theory #Reinforcement Learning #Mutual Welfare #Payoff Matrix #Strategic Decision Making #Human-AI Interaction
October 13, 2025
[Paper Review] LongRM: Revealing and Unlocking the Context Boundary of Reward Modeling
A detailed review of the paper 'LongRM: Revealing and Unlocking the Context Boundary of Reward Modeling', posted on arXiv.
#Review #Reward Model #Long Context #LLM Alignment #Multi-stage Training #Context Window Scaling #Preference Learning #Long-RewardBench
October 10, 2025
[Paper Review] Humanline: Online Alignment as Perceptual Loss
A detailed review of the paper 'Humanline: Online Alignment as Perceptual Loss', posted on arXiv.
#Review #LLM Alignment #Online RLHF #Offline RLHF #Prospect Theory #Perceptual Loss #Human-Centric AI #Reinforcement Learning
October 1, 2025
[Paper Review] Multiplayer Nash Preference Optimization
A detailed review of the paper 'Multiplayer Nash Preference Optimization', posted on arXiv.
#Review #RLHF #LLM Alignment #Nash Equilibrium #Multiplayer Games #Preference Optimization #Non-transitive Preferences #Game Theory
September 30, 2025
[Paper Review] Chasing the Tail: Effective Rubric-based Reward Modeling for Large Language Model Post-Training
A detailed review of the paper 'Chasing the Tail: Effective Rubric-based Reward Modeling for Large Language Model Post-Training', posted on arXiv.
#Review #LLM #Reinforcement Fine-tuning #Reward Modeling #Reward Over-optimization #Rubric-based Rewards #High-reward Tail #Off-policy Data #LLM Alignment
September 29, 2025
[Paper Review] SteeringControl: Holistic Evaluation of Alignment Steering in LLMs
A detailed review of the paper 'SteeringControl: Holistic Evaluation of Alignment Steering in LLMs', posted on arXiv by Zhun Wang.
#Review #LLM Alignment #Representation Steering #Benchmark #Behavioral Entanglement #Bias Mitigation #Harmful Generation #Hallucination Control #Modular Framework
September 18, 2025
[Paper Review] Learning to Optimize Multi-Objective Alignment Through Dynamic Reward Weighting
A detailed review of the paper 'Learning to Optimize Multi-Objective Alignment Through Dynamic Reward Weighting', posted on arXiv by Changlong Yu.
#Review #Multi-objective Reinforcement Learning #LLM Alignment #Dynamic Reward Weighting #Pareto Front Optimization #Hypervolume Indicator #Gradient-based Optimization #Online RL
September 16, 2025
[Paper Review] IntrEx: A Dataset for Modeling Engagement in Educational Conversations
A detailed review of the paper 'IntrEx: A Dataset for Modeling Engagement in Educational Conversations', posted on arXiv by Gabriele Pergola.
#Review #Educational Dialogue #Engagement Modeling #Dataset Annotation #Second Language Learning #Human Feedback #LLM Alignment #Readability Metrics
September 15, 2025
[Paper Review] On-Policy RL Meets Off-Policy Experts: Harmonizing Supervised Fine-Tuning and Reinforcement Learning via Dynamic Weighting
A detailed review of the paper 'On-Policy RL Meets Off-Policy Experts: Harmonizing Supervised Fine-Tuning and Reinforcement Learning via Dynamic Weighting', posted on arXiv by Guoyin Wang.
#Review #Large Language Models #Reinforcement Learning #Supervised Fine-Tuning #On-Policy RL #Off-Policy Experts #Dynamic Weighting #LLM Alignment #Reasoning
August 21, 2025
[Paper Review] Learning to Align, Aligning to Learn: A Unified Approach for Self-Optimized Alignment
A detailed review of the paper 'Learning to Align, Aligning to Learn: A Unified Approach for Self-Optimized Alignment', posted on arXiv by Lei Fan.
#Review #LLM Alignment #Reinforcement Learning from Human Feedback #Preference Learning #Group Relative Alignment Optimization #Self-Optimization #Mixture-of-Experts #Imitation Learning
August 14, 2025
[Paper Review] Temporal Self-Rewarding Language Models: Decoupling Chosen-Rejected via Past-Future
A detailed review of the paper 'Temporal Self-Rewarding Language Models: Decoupling Chosen-Rejected via Past-Future', posted on arXiv by Qiufeng Wang.
#Review #Self-Rewarding LLMs #Direct Preference Optimization (DPO) #Preference Learning #Generative AI #Gradient Collapse #LLM Alignment #Iterative Optimization
August 12, 2025
[Paper Review] InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities
A detailed review of the paper 'InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities', posted on arXiv by Zhijie Sang.
#Review #LLM Alignment #Reasoning #Data Curation #Supervised Fine-tuning (SFT) #Direct Preference Optimization (DPO) #Sample Efficiency #Scalability #Multi-dimensional Filtering
August 8, 2025
[Paper Review] TRACEALIGN -- Tracing the Drift: Attributing Alignment Failures to Training-Time Belief Sources in LLMs
A detailed review of the paper 'TRACEALIGN -- Tracing the Drift: Attributing Alignment Failures to Training-Time Belief Sources in LLMs', posted on arXiv by Aman Chadha.
#Review #LLM Alignment #Alignment Drift #Training Data Provenance #Belief Conflict Index (BCI) #Suffix Array #Safety Interventions #Reinforcement Learning from Human Feedback #Explainable AI
August 6, 2025