[Paper Review] AR-Omni: A Unified Autoregressive Model for Any-to-Any Generation
A detailed review of the paper 'AR-Omni: A Unified Autoregressive Model for Any-to-Any Generation', posted on arXiv.
#Review #Autoregressive Models #Multimodal AI #Any-to-Any Generation #Unified Model #Speech Generation #Image Generation #Transformer Decoder #Real-time Streaming
January 26, 2026

[Paper Review] VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents
A detailed review of the paper 'VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents', posted on arXiv.
#Review #Multimodal Agents #Vision-Language Models (VLMs) #Interactive AI #Reinforcement Learning Environments #Benchmark #Decision-Making #Diagnostic Tools #Supervised Fine-tuning
January 25, 2026

[Paper Review] TwinBrainVLA: Unleashing the Potential of Generalist VLMs for Embodied Tasks via Asymmetric Mixture-of-Transformers
A detailed review of the paper 'TwinBrainVLA: Unleashing the Potential of Generalist VLMs for Embodied Tasks via Asymmetric Mixture-of-Transformers', posted on arXiv.
#Review #Vision-Language-Action (VLA) #Embodied AI #Robotics #Catastrophic Forgetting #Asymmetric Mixture-of-Transformers (AsyMoT) #Generalist VLM #Specialist VLM #Flow-Matching
January 25, 2026

[Paper Review] SWE-Pruner: Self-Adaptive Context Pruning for Coding Agents
A detailed review of the paper 'SWE-Pruner: Self-Adaptive Context Pruning for Coding Agents', posted on arXiv.
#Review #Context Pruning #Coding Agents #Large Language Models (LLMs) #Software Development #Code Comprehension #Efficiency Optimization #Task-Aware Pruning #CRF
January 25, 2026

[Paper Review] SALAD: Achieve High-Sparsity Attention via Efficient Linear Attention Tuning for Video Diffusion Transformer
A detailed review of the paper 'SALAD: Achieve High-Sparsity Attention via Efficient Linear Attention Tuning for Video Diffusion Transformer', posted on arXiv.
#Review #Video Diffusion Models #Sparse Attention #Linear Attention #Computational Efficiency #Transformer Tuning #Video Generation #LoRA #Gating Mechanism
January 25, 2026

[Paper Review] Memory-V2V: Augmenting Video-to-Video Diffusion Models with Memory
A detailed review of the paper 'Memory-V2V: Augmenting Video-to-Video Diffusion Models with Memory', posted on arXiv.
#Review #Video-to-Video Diffusion #Explicit Memory #Multi-turn Video Editing #Cross-consistency #Dynamic Tokenization #Adaptive Token Merging #Video Novel View Synthesis #Text-guided Video Editing
January 25, 2026

[Paper Review] MeepleLM: A Virtual Playtester Simulating Diverse Subjective Experiences
A detailed review of the paper 'MeepleLM: A Virtual Playtester Simulating Diverse Subjective Experiences', posted on arXiv by Jianwen Sun.
#Review #Large Language Models #Board Games #Virtual Playtester #User Simulation #Persona Modeling #MDA Framework #Human-AI Collaboration #Critique Generation
January 25, 2026

[Paper Review] Mecellem Models: Turkish Models Trained from Scratch and Continually Pre-trained for the Legal Domain
A detailed review of the paper 'Mecellem Models: Turkish Models Trained from Scratch and Continually Pre-trained for the Legal Domain', posted on arXiv.
#Review #Turkish Legal NLP #Domain Adaptation #ModernBERT #Continual Pre-training (CPT) #Embedding Models #Legal LLMs #Retrieval-Augmented Generation (RAG) #Curriculum Learning
January 25, 2026

[Paper Review] LongCat-Flash-Thinking-2601 Technical Report
A detailed review of the paper 'LongCat-Flash-Thinking-2601 Technical Report', posted on arXiv.
#Review #Agentic AI #Large Language Models (LLMs) #Mixture-of-Experts (MoE) #Reinforcement Learning (RL) #Context Management #Scalable Training #Test-Time Reasoning #Open-Source Model
January 25, 2026

[Paper Review] Knowledge is Not Enough: Injecting RL Skills for Continual Adaptation
A detailed review of the paper 'Knowledge is Not Enough: Injecting RL Skills for Continual Adaptation', posted on arXiv.
#Review #LLMs #Continual Adaptation #Reinforcement Learning #Supervised Fine-Tuning #Skill Transfer #Task Arithmetic #Tool Use
January 25, 2026

[Paper Review] Jet-RL: Enabling On-Policy FP8 Reinforcement Learning with Unified Training and Rollout Precision Flow
A detailed review of the paper 'Jet-RL: Enabling On-Policy FP8 Reinforcement Learning with Unified Training and Rollout Precision Flow', posted on arXiv.
#Review #Reinforcement Learning #FP8 Quantization #LLM Training #On-Policy RL #Unified Precision Flow #Training Efficiency #Rollout Acceleration
January 25, 2026

[Paper Review] Inference-Time Scaling of Verification: Self-Evolving Deep Research Agents via Test-Time Rubric-Guided Verification
A detailed review of the paper 'Inference-Time Scaling of Verification: Self-Evolving Deep Research Agents via Test-Time Rubric-Guided Verification', posted on arXiv.
#Review #Deep Research Agents #Inference-Time Verification #Self-Evolving LLM Agents #Rubric-Guided Feedback #Failure Taxonomy #Test-Time Scaling #Supervised Fine-tuning
January 25, 2026

[Paper Review] Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization
A detailed review of the paper 'Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization', posted on arXiv by Gabriele Bavota.
#Review #Large Language Models #Code Generation #Prompt Engineering #Prompt Optimization #Empirical Study #Software Engineering #Guidelines
January 25, 2026

[Paper Review] Endless Terminals: Scaling RL Environments for Terminal Agents
A detailed review of the paper 'Endless Terminals: Scaling RL Environments for Terminal Agents', posted on arXiv.
#Review #Reinforcement Learning #Procedural Generation #Terminal Agents #Environment Scaling #Language Models (LLMs) #PPO #Task Generation #Automated Verification
January 25, 2026

[Paper Review] Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind
A detailed review of the paper 'Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind', posted on arXiv by Yi R Fung.
#Review #Academic Rebuttal #Theory of Mind #Large Language Models #Strategic Persuasion #Reinforcement Learning #Self-Reward #Dataset Synthesis #Automated Evaluation
January 25, 2026

[Paper Review] DSGym: A Holistic Framework for Evaluating and Training Data Science Agents
A detailed review of the paper 'DSGym: A Holistic Framework for Evaluating and Training Data Science Agents', posted on arXiv by Yongchan Kwon.
#Review #Data Science Agents #LLM Evaluation #Benchmark Framework #Execution-Grounded Training #Bioinformatics #Kaggle #Shortcut Filtering #Synthetic Data
January 25, 2026

[Paper Review] VideoMaMa: Mask-Guided Video Matting via Generative Prior
A detailed review of the paper 'VideoMaMa: Mask-Guided Video Matting via Generative Prior', posted on arXiv.
#Review #Video Matting #Diffusion Models #Generative Priors #Mask-Guided #Pseudo-labeling #Large-scale Dataset #Zero-shot Generalization
January 22, 2026

[Paper Review] VIOLA: Towards Video In-Context Learning with Minimal Annotations
A detailed review of the paper 'VIOLA: Towards Video In-Context Learning with Minimal Annotations', posted on arXiv by Ryo Hachiuma.
#Review #Video In-Context Learning #Minimal Annotation #Active Learning #Pseudo-Labeling #Multimodal LLMs #Density-Uncertainty Sampling #Confidence-Aware Retrieval #Low-Resource Adaptation
January 22, 2026

[Paper Review] Towards Automated Kernel Generation in the Era of LLMs
A detailed review of the paper 'Towards Automated Kernel Generation in the Era of LLMs', posted on arXiv by Yixin Shen.
#Review #Large Language Models #Kernel Generation #GPU Optimization #AI Agents #Code Synthesis #Performance Engineering #Hardware Acceleration
January 22, 2026

[Paper Review] The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models
A detailed review of the paper 'The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models', posted on arXiv.
#Review #Diffusion Language Models #Reasoning #Reinforcement Learning #Autoregressive Models #Generation Order #Entropy Degradation #Pass@k #GRPO
January 22, 2026