[Paper Review] SViM3D: Stable Video Material Diffusion for Single Image 3D Generation
A detailed review of the paper 'SViM3D: Stable Video Material Diffusion for Single Image 3D Generation', published on arXiv.
Tags: Review, Single Image 3D Reconstruction, Material Prediction, Video Diffusion Models, Physically Based Rendering (PBR), Inverse Rendering, Novel View Synthesis, Camera Control, Latent Diffusion
October 10, 2025
[Paper Review] Reinforcing Diffusion Models by Direct Group Preference Optimization
A detailed review of the paper 'Reinforcing Diffusion Models by Direct Group Preference Optimization', posted to arXiv by Jing Tang.
Tags: Review, Diffusion Models, Reinforcement Learning, Preference Optimization, Group Preference, Direct Preference Optimization, ODE Samplers, Efficient Training
October 10, 2025
[Paper Review] Recycling Pretrained Checkpoints: Orthogonal Growth of Mixture-of-Experts for Efficient Large Language Model Pre-Training
A detailed review of the paper 'Recycling Pretrained Checkpoints: Orthogonal Growth of Mixture-of-Experts for Efficient Large Language Model Pre-Training', posted to arXiv by Peng Cheng.
Tags: Review, Mixture-of-Experts, Large Language Models, Checkpoint Recycling, Model Growth, Efficient Pretraining, Depth Growth, Width Growth, Sunk Cost
October 10, 2025
[Paper Review] R2RGEN: Real-to-Real 3D Data Generation for Spatially Generalized Manipulation
A detailed review of the paper 'R2RGEN: Real-to-Real 3D Data Generation for Spatially Generalized Manipulation', posted to arXiv by Zheng Zhu.
Tags: Review, Robotic Manipulation, Data Augmentation, Spatial Generalization, 3D Data Generation, Imitation Learning, Point Cloud, Real-to-Real, Mobile Manipulation
October 10, 2025
[Paper Review] NewtonBench: Benchmarking Generalizable Scientific Law Discovery in LLM Agents
A detailed review of the paper 'NewtonBench: Benchmarking Generalizable Scientific Law Discovery in LLM Agents', posted to arXiv by Baixuan Xu.
Tags: Review, LLM Agents, Scientific Law Discovery, Benchmarking, Metaphysical Shifts, Interactive Environments, Exploration-Exploitation, Tool Use
October 10, 2025
[Paper Review] NaViL: Rethinking Scaling Properties of Native Multimodal Large Language Models under Data Constraints
A detailed review of the paper 'NaViL: Rethinking Scaling Properties of Native Multimodal Large Language Models under Data Constraints', published on arXiv.
Tags: Review, Multimodal Large Language Models, Native MLLMs, Scaling Laws, Data Constraints, Visual Encoder, LLM Initialization, Mixture-of-Experts, End-to-end Training
October 10, 2025
[Paper Review] Meta-Awareness Enhances Reasoning Models: Self-Alignment Reinforcement Learning
A detailed review of the paper 'Meta-Awareness Enhances Reasoning Models: Self-Alignment Reinforcement Learning', published on arXiv.
Tags: Review, Meta-Awareness, Reinforcement Learning, Self-Alignment, LLM Reasoning, Training Efficiency, Generalization, Predictive Gating
October 10, 2025
[Paper Review] Memory Retrieval and Consolidation in Large Language Models through Function Tokens
A detailed review of the paper 'Memory Retrieval and Consolidation in Large Language Models through Function Tokens', published on arXiv.
Tags: Review, Large Language Models, LLM Interpretability, Function Tokens, Memory Retrieval, Memory Consolidation, Sparse Autoencoders, Pre-training
October 10, 2025
[Paper Review] MemMamba: Rethinking Memory Patterns in State Space Model
A detailed review of the paper 'MemMamba: Rethinking Memory Patterns in State Space Model', posted to arXiv by Xiao Sun.
Tags: Review, State Space Models, Mamba, Long-sequence modeling, Memory decay, State summarization, Cross-layer attention, Perplexity, Linear complexity
October 10, 2025
[Paper Review] MM-HELIX: Boosting Multimodal Long-Chain Reflective Reasoning with Holistic Platform and Adaptive Hybrid Policy Optimization
A detailed review of the paper 'MM-HELIX: Boosting Multimodal Long-Chain Reflective Reasoning with Holistic Platform and Adaptive Hybrid Policy Optimization', posted to arXiv by vanilla1116.
Tags: Review, Multimodal LLMs, Reflective Reasoning, Long-Chain Reasoning, Benchmark, Policy Optimization, Data Generation, Reinforcement Learning, Backtracking
October 10, 2025
[Paper Review] Low-probability Tokens Sustain Exploration in Reinforcement Learning with Verifiable Reward
A detailed review of the paper 'Low-probability Tokens Sustain Exploration in Reinforcement Learning with Verifiable Reward', published on arXiv.
Tags: Review, Reinforcement Learning, LLM Exploration, Verifiable Reward, Low-Probability Regularization, Reasoning Sparks, Policy Entropy, KL Divergence, Mathematical Reasoning
October 10, 2025
[Paper Review] LongRM: Revealing and Unlocking the Context Boundary of Reward Modeling
A detailed review of the paper 'LongRM: Revealing and Unlocking the Context Boundary of Reward Modeling', published on arXiv.
Tags: Review, Reward Model, Long Context, LLM Alignment, Multi-stage Training, Context Window Scaling, Preference Learning, Long-RewardBench
October 10, 2025
[Paper Review] Learning to Route LLMs from Bandit Feedback: One Policy, Many Trade-offs
A detailed review of the paper 'Learning to Route LLMs from Bandit Feedback: One Policy, Many Trade-offs', posted to arXiv by Franck Dernoncourt.
Tags: Review, LLM Routing, Contextual Bandits, Bandit Feedback, Multi-objective Optimization, Preference-tuning, Policy Gradient, Cost-efficiency
October 10, 2025
[Paper Review] Learning on the Job: An Experience-Driven Self-Evolving Agent for Long-Horizon Tasks
A detailed review of the paper 'Learning on the Job: An Experience-Driven Self-Evolving Agent for Long-Horizon Tasks', published on arXiv.
Tags: Review, LLM Agents, Continuous Learning, Self-Evolving, Memory Module, Long-Horizon Planning, Productivity Tasks, Test-Time Learning, Experience Replay
October 10, 2025
[Paper Review] Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency
A detailed review of the paper 'Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency', posted to arXiv by Jintao Zhang.
Tags: Review, Diffusion Distillation, Consistency Models, Score Regularization, Large-Scale Generative Models, Text-to-Image, Text-to-Video, Model Acceleration, JVP
October 10, 2025
[Paper Review] LLMs Learn to Deceive Unintentionally: Emergent Misalignment in Dishonesty from Misaligned Samples to Biased Human-AI Interactions
A detailed review of the paper 'LLMs Learn to Deceive Unintentionally: Emergent Misalignment in Dishonesty from Misaligned Samples to Biased Human-AI Interactions', published on arXiv.
Tags: Review, LLM Misalignment, Dishonesty, Deception, Finetuning, Human-AI Interaction, Biased Feedback, Emergent Behavior
October 10, 2025
[Paper Review] InstructX: Towards Unified Visual Editing with MLLM Guidance
A detailed review of the paper 'InstructX: Towards Unified Visual Editing with MLLM Guidance', posted to arXiv by Xinghui Li.
Tags: Review, Visual Editing, MLLM Guidance, Diffusion Models, Image Editing, Video Editing, Unified Framework, Multimodal AI, Instruction-based Editing
October 10, 2025
[Paper Review] Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense
A detailed review of the paper 'Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense', published on arXiv.
Tags: Review, Reinforcement Learning, Reward Modeling, Large Language Models (LLMs), Mathematical Reasoning, Sparse Rewards, Dense Rewards, Hybrid Reinforcement, Verifier-based Rewards
October 10, 2025
[Paper Review] GCPO: When Contrast Fails, Go Gold
A detailed review of the paper 'GCPO: When Contrast Fails, Go Gold', published on arXiv.
Tags: Review, Reinforcement Learning, LLMs Reasoning, Policy Optimization, Contrastive Learning, Chain of Thought, Reference Answers, Math Reasoning, Gold-Standard Answer
October 10, 2025
[Paper Review] From What to Why: A Multi-Agent System for Evidence-based Chemical Reaction Condition Reasoning
A detailed review of the paper 'From What to Why: A Multi-Agent System for Evidence-based Chemical Reaction Condition Reasoning', posted to arXiv by Feiwei Qin.
Tags: Review, Multi-Agent System, Chemical Reaction Prediction, Explainable AI, Evidence-Based Reasoning, Large Language Models, Tool-Augmented LLMs, Scientific Discovery
October 10, 2025