[Paper Review] RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation | A detailed review of this paper, posted to arXiv. | #Review #Peer Review #Rebuttal #Actionable Feedback #Large Language Models (LLMs) #Supervised Fine-tuning (SFT) #Direct Preference Optimization (DPO) #RMR-75K Dataset #Review Feedback Generation | March 11, 2026
[Paper Review] CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR | A detailed review of this paper, posted to arXiv by Jiajun Song. | #Review #Reinforcement Learning #Verifiable Rewards (RLVR) #Contrastive Learning (CL) #Policy Optimization #Large Language Models (LLMs) #Generalization #Robustness #Reasoning Tasks | March 11, 2026
[Paper Review] Towards a Neural Debugger for Python | A detailed review of this paper, posted to arXiv. | #Review #Neural Debuggers #Python Execution Traces #Large Language Models (LLMs) #Markov Decision Process (MDP) #Program Understanding #Code Generation #Inverse Execution #CruxEval | March 10, 2026
[Paper Review] MiniAppBench: Evaluating the Shift from Text to Interactive HTML Responses in LLM-Powered Assistants | A detailed review of this paper, posted to arXiv by Yuante Li. | #Review #Large Language Models (LLMs) #Code Generation #HTML #Interactive Applications #Benchmark #MINIAPPBENCH #Agentic Evaluation #MINIAPPEVAL #Real-World Principles #Human-AI Interaction | March 10, 2026
[Paper Review] BrandFusion: A Multi-Agent Framework for Seamless Brand Integration in Text-to-Video Generation | A detailed review of this paper, posted to arXiv. | #Review #Text-to-Video Generation #Multi-Agent System #Brand Integration #Prompt Engineering #Large Language Models (LLMs) #LoRA Fine-tuning #Contextual Adaptation | March 10, 2026
[Paper Review] Lost in Stories: Consistency Bugs in Long Story Generation by LLMs | A detailed review of this paper, posted to arXiv by Hongzhi Li. | #Review #Large Language Models (LLMs) #Story Generation #Narrative Consistency #Benchmark #Automated Evaluation #Error Analysis #Long-Form Text Generation #Consistency Error Density (CED) | March 9, 2026
[Paper Review] Reasoning Models Struggle to Control their Chains of Thought | A detailed review of this paper, posted to arXiv. | #Review #Chain-of-Thought (CoT) #Model Controllability #AI Safety #Monitorability #Large Language Models (LLMs) #Reinforcement Learning (RL) #Evaluation Suite | March 8, 2026
[Paper Review] Progressive Residual Warmup for Language Model Pretraining | A detailed review of this paper, posted to arXiv by Yang Wang. | #Review #Large Language Models (LLMs) #Transformer #Pretraining Stability #Residual Connections #Warmup Schedule #Layer-wise Learning #Optimization | March 8, 2026
[Paper Review] HiMAP-Travel: Hierarchical Multi-Agent Planning for Long-Horizon Constrained Travel | A detailed review of this paper, posted to arXiv by Yong Liu. | #Review #Multi-Agent Planning #Hierarchical Reinforcement Learning #Constrained Optimization #Large Language Models (LLMs) #Travel Itinerary Generation #Constraint Drift #Parallel Execution #Resource Allocation | March 8, 2026
[Paper Review] DeepPresenter: Environment-Grounded Reflection for Agentic Presentation Generation | A detailed review of this paper, posted to arXiv. | #Review #Agentic Systems #Presentation Generation #Large Language Models (LLMs) #Multimodal LLMs (MLLMs) #Environment-Grounded Reflection #Self-Correction #Dual-Agent Framework #Supervised Fine-tuning | March 8, 2026
[Paper Review] Qwen3-Coder-Next Technical Report | A detailed review of this paper, posted to arXiv. | #Review #Coding Agents #Large Language Models (LLMs) #Mixture-of-Experts (MoE) #Agentic Training #Software Engineering #Reinforcement Learning #Code Generation #Tool Usage | March 3, 2026
[Paper Review] How Controllable Are Large Language Models? A Unified Evaluation across Behavioral Granularities | A detailed review of this paper, posted to arXiv. | #Review #Large Language Models (LLMs) #Controllability #Hierarchical Benchmark #Behavioral Granularity #Model Steering #Prompt Engineering #Activation-based Steering | March 3, 2026
[Paper Review] Tool-R0: Self-Evolving LLM Agents for Tool-Learning from Zero Data | A detailed review of this paper, posted to arXiv. | #Review #Large Language Models (LLMs) #Self-Play Reinforcement Learning (RL) #Tool-Learning #Zero-Data Learning #LLM Agents #Curriculum Learning #Reward Shaping #Co-evolution | March 2, 2026
[Paper Review] Legal RAG Bench: an end-to-end benchmark for legal RAG | A detailed review of this paper, posted to arXiv. | #Review #Retrieval-Augmented Generation (RAG) #Legal AI #Benchmark #Evaluation Methodology #Embedding Models #Large Language Models (LLMs) #Error Decomposition #Information Retrieval | March 2, 2026
[Paper Review] CUDA Agent: Large-Scale Agentic RL for High-Performance CUDA Kernel Generation | A detailed review of this paper, posted to arXiv. | #Review #CUDA Kernel Generation #Agentic Reinforcement Learning #Large Language Models (LLMs) #GPU Optimization #Performance Tuning #Deep Learning Infrastructure #Program Synthesis | March 1, 2026
[Paper Review] AI Gamestore: Scalable, Open-Ended Evaluation of Machine General Intelligence with Human Games | A detailed review of this paper, posted to arXiv. | #Review #Artificial General Intelligence (AGI) #Evaluation Benchmark #General Game Playing #Large Language Models (LLMs) #Human-in-the-loop #Cognitive Capabilities #Vision-Language Models (VLMs) #Game Generation | February 26, 2026
[Paper Review] JAEGER: Joint 3D Audio-Visual Grounding and Reasoning in Simulated Physical Environments | A detailed review of this paper, posted to arXiv. | #Review #3D Audio-Visual Learning #Spatial Grounding #Spatial Reasoning #Large Language Models (LLMs) #Ambisonics #RGB-D #Simulated Environments #Neural Intensity Vector | February 25, 2026
[Paper Review] SenTSR-Bench: Thinking with Injected Knowledge for Time-Series Reasoning | A detailed review of this paper, posted to arXiv by Haotian Lin. | #Review #Time-Series Reasoning #Knowledge Injection #Large Language Models (LLMs) #Reinforcement Learning (RL) #Diagnostic AI #Multimodal AI #SenTSR-Bench | February 23, 2026
[Paper Review] Does Socialization Emerge in AI Agent Society? A Case Study of Moltbook | A detailed review of this paper, posted to arXiv by Ming Li. | #Review #AI Agent Societies #Socialization #Large Language Models (LLMs) #Collective Dynamics #Semantic Analysis #Network Analysis #Moltbook | February 17, 2026
[Paper Review] InnoEval: On Research Idea Evaluation as a Knowledge-Grounded, Multi-Perspective Reasoning Problem | A detailed review of this paper, posted to arXiv. | #Review #Research Idea Evaluation #Large Language Models (LLMs) #Knowledge Grounding #Multi-Perspective Reasoning #Agent-based Systems #Scientific Discovery #Peer Review Simulation #Automated Evaluation | February 16, 2026
[Paper Review] A Critical Look at Targeted Instruction Selection: Disentangling What Matters (and What Doesn't) | A detailed review of this paper, posted to arXiv. | #Review #Instruction Tuning #Data Selection #Large Language Models (LLMs) #Gradient-based Representations #Optimal Transport #Generalization Bounds #Data Representation | February 16, 2026
[Paper Review] Pretraining A Large Language Model using Distributed GPUs: A Memory-Efficient Decentralized Paradigm | A detailed review of this paper, posted to arXiv. | #Review #Decentralized Training #Mixture-of-Experts (MoE) #Large Language Models (LLMs) #Memory Efficiency #Sparse Expert Synchronization #Federated Learning #Distributed GPUs | February 12, 2026
[Paper Review] Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation | A detailed review of this paper, posted to arXiv. | #Review #On-Policy Distillation #Reward Extrapolation #Large Language Models (LLMs) #Knowledge Distillation #Reinforcement Learning #Math Reasoning #Code Generation #Multi-teacher Distillation | February 12, 2026
[Paper Review] When to Memorize and When to Stop: Gated Recurrent Memory for Long-Context Reasoning | A detailed review of this paper, posted to arXiv. | #Review #Long-Context Reasoning #Large Language Models (LLMs) #Recurrent Memory #Gated Mechanisms #Reinforcement Learning #Memory Efficiency #Early Exit | February 11, 2026
[Paper Review] QP-OneModel: A Unified Generative LLM for Multi-Task Query Understanding in Xiaohongshu Search | A detailed review of this paper, posted to arXiv by Hui Zhang. | #Review #Large Language Models (LLMs) #Query Understanding #Multi-Task Learning #Generative AI #Reinforcement Learning (RL) #Social Network Services (SNS) #Xiaohongshu #Search Engines | February 11, 2026
[Paper Review] Online Causal Kalman Filtering for Stable and Effective Policy Optimization | A detailed review of this paper, posted to arXiv. | #Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Policy Optimization #Importance Sampling (IS) Ratio #Kalman Filter #Variance Reduction #Math Reasoning | February 11, 2026
[Paper Review] G-LNS: Generative Large Neighborhood Search for LLM-Based Automatic Heuristic Design | A detailed review of this paper, posted to arXiv by Liang Zeng. | #Review #Large Language Models (LLMs) #Automated Heuristic Design (AHD) #Large Neighborhood Search (LNS) #Combinatorial Optimization #Evolutionary Algorithm #Destroy Repair Operators #Co-evolution | February 11, 2026
[Paper Review] CLI-Gym: Scalable CLI Task Generation via Agentic Environment Inversion | A detailed review of this paper, posted to arXiv by Feiyang Pan. | #Review #Agentic Coding #CLI Automation #Environment Inversion #Task Generation #Large Language Models (LLMs) #Software Engineering #Dockerfile #Terminal-Bench | February 11, 2026
[Paper Review] Dynamic Long Context Reasoning over Compressed Memory via End-to-End Reinforcement Learning | A detailed review of this paper, posted to arXiv. | #Review #Long Context Reasoning #Memory Compression #Reinforcement Learning #Large Language Models (LLMs) #Inference Efficiency #Dynamic Recall #KV-Cache #Multi-hop Reasoning | February 10, 2026
[Paper Review] Chain of Mindset: Reasoning with Adaptive Cognitive Modes | A detailed review of this paper, posted to arXiv. | #Review #Adaptive Reasoning #Cognitive Modes #Large Language Models (LLMs) #Agentic AI #Multimodal Reasoning #Mindset Orchestration #Contextual Filtering #Training-free Framework | February 10, 2026
[Paper Review] LatentChem: From Textual CoT to Latent Thinking in Chemical Reasoning | A detailed review of this paper, posted to arXiv by Jia Zhang. | #Review #Chemical Reasoning #Large Language Models (LLMs) #Chain-of-Thought (CoT) #Latent Space #Molecular Optimization #Inference Efficiency #Reinforcement Learning #Chemical AI | February 9, 2026
[Paper Review] On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models | A detailed review of this paper, posted to arXiv by Yanxi Chen. | #Review #Reinforcement Fine-Tuning (RFT) #Large Language Models (LLMs) #Entropy Dynamics #Exploration-Exploitation #Policy Optimization #GRPO #Entropy Control #Discriminator Score | February 8, 2026
[Paper Review] V-Retrver: Evidence-Driven Agentic Reasoning for Universal Multimodal Retrieval | A detailed review of this paper, posted to arXiv by Zeyu Zhang. | #Review #Multimodal Retrieval #Agentic AI #Large Language Models (LLMs) #Visual Tools #Chain-of-Thought (CoT) #Reinforcement Learning #Curriculum Learning #Evidence-Driven Reasoning | February 5, 2026
[Paper Review] Multi-Task GRPO: Reliable LLM Reasoning Across Tasks | A detailed review of this paper, posted to arXiv by Zhiyong Wang. | #Review #Large Language Models (LLMs) #Multi-Task Learning #Reinforcement Learning #Policy Optimization #GRPO #Task Reweighting #Robustness #Reasoning Benchmarks | February 5, 2026
[Paper Review] BatCoder: Self-Supervised Bidirectional Code-Documentation Learning via Back-Translation | A detailed review of this paper, posted to arXiv by Xiaohua Wang. | #Review #Self-Supervised Learning #Code Generation #Documentation Generation #Back-Translation #Reinforcement Learning #Large Language Models (LLMs) #Code-Documentation Alignment #Low-Resource Languages | February 4, 2026
[Paper Review] Latent Chain-of-Thought as Planning: Decoupling Reasoning from Verbalization | A detailed review of this paper, posted to arXiv. | #Review #Latent Reasoning #Chain-of-Thought (CoT) #Large Language Models (LLMs) #Planning #Reinforcement Learning #Mathematical Reasoning #Decoupling #Interpretability | February 1, 2026
[Paper Review] Self-Improving Pretraining: using post-trained models to pretrain better models | A detailed review of this paper, posted to arXiv. | #Review #Self-Improving Pretraining #Reinforcement Learning (RL) #Large Language Models (LLMs) #Quality Control #Factuality #Safety #Post-trained Models #Pretraining Data Augmentation | January 29, 2026
[Paper Review] Scaling Embeddings Outperforms Scaling Experts in Language Models | A detailed review of this paper, posted to arXiv. | #Review #Embedding Scaling #N-gram Embedding #Mixture-of-Experts (MoE) #Large Language Models (LLMs) #Parameter Efficiency #Inference Optimization #Speculative Decoding | January 29, 2026
[Paper Review] Exploring Reasoning Reward Model for Agents | A detailed review of this paper, posted to arXiv by Zhixun Li. | #Review #Agentic Reinforcement Learning #Reward Modeling #Reasoning-aware Feedback #Large Language Models (LLMs) #Multi-modal Agents #Fine-tuning #Critique Generation | January 29, 2026
[Paper Review] Beyond Imitation: Reinforcement Learning for Active Latent Planning | A detailed review of this paper, posted to arXiv by Wee Sun Lee. | #Review #Large Language Models (LLMs) #Chain-of-Thought (CoT) #Latent Reasoning #Reinforcement Learning (RL) #Variational Autoencoder (VAE) #Active Planning #Numerical Reasoning #Coherence Reward | January 29, 2026
[Paper Review] Reinforcement Learning via Self-Distillation | A detailed review of this paper, posted to arXiv. | #Review #Reinforcement Learning #Self-Distillation #Large Language Models (LLMs) #Rich Feedback #Credit Assignment #Policy Optimization #RLHF #Code Generation #Test-Time Training | January 28, 2026
[Paper Review] Selective Steering: Norm-Preserving Control Through Discriminative Layer Selection | A detailed review of this paper, posted to arXiv. | #Review #Activation Steering #Large Language Models (LLMs) #Norm Preservation #Discriminative Layer Selection #Behavior Control #Inference-time Intervention #Angular Steering | January 27, 2026
[Paper Review] HalluCitation Matters: Revealing the Impact of Hallucinated References with 300 Hallucinated Papers in ACL Conferences | A detailed review of this paper, posted to arXiv by Taro Watanabe. | #Review #Hallucinated Citations #NLP Conferences #Citation Detection #Academic Integrity #Peer Review #Large Language Models (LLMs) #Bibliometrics | January 27, 2026
[Paper Review] SWE-Pruner: Self-Adaptive Context Pruning for Coding Agents | A detailed review of this paper, posted to arXiv. | #Review #Context Pruning #Coding Agents #Large Language Models (LLMs) #Software Development #Code Comprehension #Efficiency Optimization #Task-Aware Pruning #CRF | January 25, 2026
[Paper Review] LongCat-Flash-Thinking-2601 Technical Report | A detailed review of this paper, posted to arXiv. | #Review #Agentic AI #Large Language Models (LLMs) #Mixture-of-Experts (MoE) #Reinforcement Learning (RL) #Context Management #Scalable Training #Test-Time Reasoning #Open-Source Model | January 25, 2026
[Paper Review] Render-of-Thought: Rendering Textual Chain-of-Thought as Images for Visual Latent Reasoning | A detailed review of this paper, posted to arXiv. | #Review #Chain-of-Thought (CoT) #Large Language Models (LLMs) #Vision Language Models (VLMs) #Latent Reasoning #Visual Modality #Image Rendering #Computational Efficiency #Knowledge Distillation | January 21, 2026
[Paper Review] Numina-Lean-Agent: An Open and General Agentic Reasoning System for Formal Mathematics | A detailed review of this paper, posted to arXiv. | #Review #Agentic Systems #Formal Theorem Proving #Large Language Models (LLMs) #Lean Theorem Prover #Multi-Agent Systems #Code Generation #Automated Reasoning #Human-AI Collaboration | January 21, 2026
[Paper Review] Facilitating Proactive and Reactive Guidance for Decision Making on the Web: A Design Probe with WebSeek | A detailed review of this paper, posted to arXiv by Arpit Narechania. | #Review #Mixed-Initiative AI #Human-AI Collaboration #Web Data Analysis #Proactive Guidance #Large Language Models (LLMs) #Browser Extension #Data-Centric Design | January 21, 2026
[Paper Review] YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation | A detailed review of this paper, posted to arXiv. | #Review #Large Language Models (LLMs) #Activation Steering #Sparse Autoencoders (SAEs) #Domain Adaptation #Cultural Alignment #Preference Optimization #Disentangled Representations #Fine-grained Control | January 19, 2026
[Paper Review] Reasoning Models Generate Societies of Thought | A detailed review of this paper, posted to arXiv by James Evans. | #Review #Reasoning Models #Large Language Models (LLMs) #Multi-Agent Systems #Society of Thought #Mechanistic Interpretability #Reinforcement Learning #Cognitive Diversity #Conversational AI | January 18, 2026
[Paper Review] Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs | A detailed review of this paper, posted to arXiv. | #Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Exploration Collapse #Strategy-level Diversity #Uniqueness-Aware Rewarding #Creative Problem Solving #Pass@k | January 15, 2026
[Paper Review] EvasionBench: Detecting Evasive Answers in Financial Q&A via Multi-Model Consensus and LLM-as-Judge | A detailed review of this paper, posted to arXiv by Yi Yang. | #Review #Evasion Detection #Financial NLP #Large Language Models (LLMs) #Multi-Model Consensus #LLM-as-Judge #Data Annotation #Knowledge Distillation #Hard Sample Mining | January 15, 2026
[Paper Review] The AI Hippocampus: How Far are We From Human Memory? | A detailed review of this paper, posted to arXiv by Tong Wu. | #Review #Large Language Models (LLMs) #Multi-Modal LLMs (MLLMs) #Memory Systems #Implicit Memory #Explicit Memory #Agentic Memory #Retrieval-Augmented Generation (RAG) #Contextual Understanding | January 14, 2026
[Paper Review] Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning | A detailed review of this paper, posted to arXiv. | #Review #Knowledge Distillation #Sequence-level Distillation #Chain-of-Thought Reasoning (CoT) #Large Language Models (LLMs) #Temperature-scheduled Learning #Divergence-aware Sampling #Mixed-policy Distillation #Open-source Models | January 14, 2026
[Paper Review] A^3-Bench: Benchmarking Memory-Driven Scientific Reasoning via Anchor and Attractor Activation | A detailed review of this paper, posted to arXiv by Kai He. | #Review #Scientific Reasoning #Memory-Driven AI #Benchmarking #Large Language Models (LLMs) #Anchor-Attractor Activation #Episodic Memory #Knowledge Retrieval | January 14, 2026
[Paper Review] Towards Comprehensive Stage-wise Benchmarking of Large Language Models in Fact-Checking | A detailed review of this paper, posted to arXiv by Zhen Ye. | #Review #Fact-Checking #Large Language Models (LLMs) #Benchmarking #Multi-agent System #Stage-wise Evaluation #Claim Evolution #Trustworthy AI | January 13, 2026
[Paper Review] ET-Agent: Incentivizing Effective Tool-Integrated Reasoning Agent via Behavior Calibration | A detailed review of this paper, posted to arXiv. | #Review #Large Language Models (LLMs) #Tool-Integrated Reasoning (TIR) #Agent Behavior Calibration #Reinforcement Learning (RL) #Self-Evolving Data Flywheel #Action Space Exploration #Behavioral Efficiency | January 12, 2026
[Paper Review] Dr. Zero: Self-Evolving Search Agents without Training Data | A detailed review of this paper, posted to arXiv by Shaoliang Nie. | #Review #Self-Evolution #Search Agents #Large Language Models (LLMs) #Data-Free Learning #Reinforcement Learning (RL) #Hop-Grouped Relative Policy Optimization (HRPO) #Question Answering #Multi-hop Reasoning | January 12, 2026
[Paper Review] Controllable Memory Usage: Balancing Anchoring and Innovation in Long-Term Human-Agent Interaction | A detailed review of this paper, posted to arXiv by Zhengkang Guo. | #Review #Long-Term Human-Agent Interaction #Controllable Memory #Memory Anchoring #Large Language Models (LLMs) #Personalization #Reinforcement Learning (RL) #Supervised Fine-Tuning (SFT) #Memory Dependence | January 12, 2026
[Paper Review] Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting | A detailed review of this paper, posted to arXiv. | #Review #Supervised Fine-Tuning (SFT) #Catastrophic Forgetting #Entropy-Adaptive Fine-Tuning (EAFT) #Large Language Models (LLMs) #Domain Adaptation #Reinforcement Learning (RL) #Confident Conflicts | January 7, 2026
[Paper Review] X-MuTeST: A Multilingual Benchmark for Explainable Hate Speech Detection and A Novel LLM-consulted Explanation Framework | A detailed review of this paper, posted to arXiv by Shwetank Shekhar Singh. | #Review #Hate Speech Detection #Explainable AI (XAI) #Multilingual NLP #Large Language Models (LLMs) #Attention Mechanism #N-gram Explanations #Human Rationales #Benchmark Dataset | January 6, 2026
[Paper Review] SWE-Lego: Pushing the Limits of Supervised Fine-tuning for Software Issue Resolving | A detailed review of this paper, posted to arXiv. | #Review #Software Engineering #Issue Resolution #Supervised Fine-tuning (SFT) #Large Language Models (LLMs) #Hybrid Dataset #Error Masking #Curriculum Learning #Test-Time Scaling (TTS) #Generative Verifiers | January 5, 2026
[Paper Review] AI Meets Brain: Memory Systems from Cognitive Neuroscience to Autonomous Agents | A detailed review of this paper, posted to arXiv by Shixin Jiang. | #Review #Autonomous Agents #Memory Systems #Cognitive Neuroscience #Large Language Models (LLMs) #Retrieval-Augmented Generation (RAG) #Memory Management #Multimodal Memory #Agent Skills | December 31, 2025
[Paper Review] VL-LN Bench: Towards Long-horizon Goal-oriented Navigation with Active Dialogs | A detailed review of this paper, posted to arXiv by Xihui Liu. | #Review #Embodied AI #Vision and Language Navigation #Instance Object Navigation #Active Dialog #Large Language Models (LLMs) #Benchmark #Human-Robot Interaction | December 29, 2025
[Paper Review] Coupling Experts and Routers in Mixture-of-Experts via an Auxiliary Loss | A detailed review of this paper, posted to arXiv. | #Review #Mixture-of-Experts (MoE) #Router-Expert Coupling #Auxiliary Loss #Expert Specialization #Large Language Models (LLMs) #Computational Efficiency | December 29, 2025
[Paper Review] Streaming Video Instruction Tuning | A detailed review of this paper, posted to arXiv by Kaiyang Zhou. | #Review #Streaming Video Understanding #Large Language Models (LLMs) #Instruction Tuning #Multi-task Learning #Real-time AI Assistant #Temporal Reasoning #Focal Loss #Video Question Answering | December 24, 2025
[Paper Review] SWE-EVO: Benchmarking Coding Agents in Long-Horizon Software Evolution Scenarios | A detailed review of this paper, posted to arXiv by Nghi D. Q. Bui. | #Review #Coding Agents #Software Evolution #Benchmarking #Long-Horizon Tasks #Large Language Models (LLMs) #Software Engineering #Code Generation | December 24, 2025
[Paper Review] Understanding Syllogistic Reasoning in LLMs from Formal and Natural Language Perspectives | A detailed review of this paper, posted to arXiv by Sujata Ghosh. | #Review #Syllogistic Reasoning #Large Language Models (LLMs) #Belief Bias #Natural Language Understanding (NLU) #Formal Logic #Prompt Engineering #Self-Consistency #Cognitive Psychology | December 22, 2025
[Paper Review] UCoder: Unsupervised Code Generation by Internal Probing of Large Language Models | A detailed review of this paper, posted to arXiv by Yuqing Ma. | #Review #Unsupervised Learning #Code Generation #Large Language Models (LLMs) #Internal Probing #Self-Bootstrapping #Consensus Clustering #Code Intelligence | December 22, 2025
[Paper Review] Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs | A detailed review of this paper, posted to arXiv. | #Review #Latent Variable Models #Variational Autoencoder (VAE) #Reinforcement Learning (RL) #Exploration #Large Language Models (LLMs) #Vision-Language Models (VLMs) #Controllable Generation #Reasoning Strategies | December 22, 2025
[Paper Review] SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories | A detailed review of this paper, posted to arXiv. | #Review #Software Engineering Benchmarks #Large Language Models (LLMs) #Code Generation #Automated Benchmark Generation #Multilingual #GitHub Pull Requests #Test Oracle #Fine-tuning | December 21, 2025
[Paper Review] LEO-RobotAgent: A General-purpose Robotic Agent for Language-driven Embodied Operator | A detailed review of this paper, posted to arXiv. | #Review #Robotic Agent #Large Language Models (LLMs) #Embodied AI #Task Planning #Human-Robot Interaction #General-purpose Robotics #ROS | December 14, 2025
[Paper Review] Native Parallel Reasoner: Reasoning in Parallelism via Self-Distilled Reinforcement Learning | A detailed review of this paper, posted to arXiv. | #Review #Large Language Models (LLMs) #Parallel Reasoning #Self-Distilled Reinforcement Learning #Policy Optimization #Inference Acceleration #Structured Output #Agentic Reasoning | December 8, 2025
[Paper Review] SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs | A detailed review of this paper, posted to arXiv. | #Review #Post-Training Quantization (PTQ) #Large Language Models (LLMs) #Low-Bit Quantization #Mixed-Precision Quantization #Sensitivity Metric #Quantization Scale Initialization #Accuracy Preservation | December 4, 2025
[Paper Review] REFLEX: Self-Refining Explainable Fact-Checking via Disentangling Truth into Style and Substance | A detailed review of this paper, posted to arXiv by Yaxin Fan. | #Review #Fact-Checking #Explainable AI (XAI) #Large Language Models (LLMs) #Self-Refinement #Latent Space #Disentanglement #Steering Vectors #Misinformation | December 4, 2025
[Paper Review] On GRPO Collapse in Search-R1: The Lazy Likelihood-Displacement Death Spiral | A detailed review of this paper, posted to arXiv by Christos Thrampoulidis. | #Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Tool-Integrated Reasoning (TIR) #GRPO #Training Stability #Lazy Likelihood Displacement (LLD) #Regularization #Search-R1 | December 4, 2025
[Paper Review] Nex-N1: Agentic Models Trained via a Unified Ecosystem for Large-Scale Environment Construction | A detailed review of this paper, posted to arXiv. | #Review #Agentic Models #Large Language Models (LLMs) #Agentic Scaling #Environment Construction #NexAU #NexA4A #NexGAP #Interactive Environments | December 4, 2025
[Paper Review] Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates | A detailed review of this paper, posted to arXiv by Nikolaos Aletras. | #Review #Large Language Models (LLMs) #Catastrophic Forgetting #Language Adaptation #Continual Pre-training #Parameter Freezing #Low-Resource Languages #Source Knowledge Preservation | December 4, 2025
[Paper Review] Stabilizing Reinforcement Learning with LLMs: Formulation and Practices | A detailed review of this paper, posted to arXiv. | #Review #Reinforcement Learning (RL) #Large Language Models (LLMs) #Policy Gradient #REINFORCE #Mixture-of-Experts (MoE) #Training Stability #Importance Sampling #Routing Replay #Off-policy Learning | December 1, 2025
[Paper Review] DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning | A detailed review of this paper, posted to arXiv. | #Review #Mathematical Reasoning #Large Language Models (LLMs) #Proof Verification #Self-Verification #Reinforcement Learning (RL) #Theorem Proving #Meta-Verification #Iterative Refinement | November 30, 2025
[논문리뷰] SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature SpaceYulan He이 arXiv에 게시한 'SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature Space' 논문에 대한 자세한 리뷰입니다.#Review#Sparse Attention#Full Attention#Large Language Models (LLMs)#Context Length#Attention Sparsity#Alignment Loss#Long-Context Extrapolation2025년 11월 25일댓글 수 로딩 중
[논문리뷰] General Agentic Memory Via Deep ResearcharXiv에 게시된 'General Agentic Memory Via Deep Research' 논문에 대한 자세한 리뷰입니다.#Review#AI Agents#Memory Systems#Large Language Models (LLMs)#Just-in-Time (JIT) Compilation#Memorizer#Researcher#Reinforcement Learning#Context Management2025년 11월 24일댓글 수 로딩 중
[논문리뷰] OmniScientist: Toward a Co-evolving Ecosystem of Human and AI ScientistsWeiquan Lin이 arXiv에 게시한 'OmniScientist: Toward a Co-evolving Ecosystem of Human and AI Scientists' 논문에 대한 자세한 리뷰입니다.#Review#AI Scientist#Large Language Models (LLMs)#Human-AI Collaboration#Scientific Ecosystem#Research Automation#Omni Scientific Protocol (OSP)#ScienceArena#Knowledge Graph2025년 11월 23일댓글 수 로딩 중
[논문리뷰] Large Language Models Meet Extreme Multi-label Classification: Scaling and Multi-modal FrameworkarXiv에 게시된 'Large Language Models Meet Extreme Multi-label Classification: Scaling and Multi-modal Framework' 논문에 대한 자세한 리뷰입니다.#Review#Extreme Multi-label Classification (XMC)#Large Language Models (LLMs)#Multi-modal Learning#Dual-decoder Learning#Vision Transformers#Contrastive Learning#Prompt Engineering2025년 11월 18일댓글 수 로딩 중
[논문리뷰] Genomic Next-Token Predictors are In-Context LearnersarXiv에 게시된 'Genomic Next-Token Predictors are In-Context Learners' 논문에 대한 자세한 리뷰입니다.#Review#In-Context Learning (ICL)#Genomic Sequences#Next-Token Prediction#Large Language Models (LLMs)#Modality-Agnostic AI#Meta-Learning#Bitstring Program Synthesis#Evo22025년 11월 17일댓글 수 로딩 중
[논문리뷰] Black-Box On-Policy Distillation of Large Language ModelsarXiv에 게시된 'Black-Box On-Policy Distillation of Large Language Models' 논문에 대한 자세한 리뷰입니다.#Review#Large Language Models (LLMs)#Knowledge Distillation (KD)#Black-box Distillation#Generative Adversarial Networks (GANs)#On-policy Learning#Reinforcement Learning#Minimax Game#Model Compression2025년 11월 13일댓글 수 로딩 중
[Paper Review] MathSE: Improving Multimodal Mathematical Reasoning via Self-Evolving Iterative Reflection and Reward-Guided Fine-Tuning. A detailed review of the paper, posted on arXiv. #Review#Multimodal Reasoning#Mathematical Problem Solving#Self-Evolving#Iterative Fine-Tuning#Reward Models#Reflection#Large Language Models (LLMs) (November 12, 2025)
[Paper Review] LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Tool Learning#Data Generation#Model Training#Closed-Loop Framework#Reinforcement Learning (RL)#Data Refinement#Self-Correction (November 12, 2025)
[Paper Review] Beyond Fact Retrieval: Episodic Memory for RAG with Generative Semantic Workspaces. A detailed review of the paper, posted on arXiv by Vwani Roychowdhury. #Review#Retrieval-Augmented Generation (RAG)#Episodic Memory#Generative Semantic Workspaces (GSW)#Large Language Models (LLMs)#Question Answering (QA)#Semantic Modeling#Knowledge Graph (November 11, 2025)
[Paper Review] Routing Manifold Alignment Improves Generalization of Mixture-of-Experts LLMs. A detailed review of the paper, posted on arXiv by Ziyue Li. #Review#Mixture-of-Experts (MoE)#Large Language Models (LLMs)#Router Optimization#Manifold Regularization#Generalization#Post-training Fine-tuning#Task Embedding Alignment (November 10, 2025)
[Paper Review] The Collaboration Gap. A detailed review of the paper, posted on arXiv. #Review#AI Collaboration#Multi-Agent Systems#Large Language Models (LLMs)#Maze Solving#Heterogeneous Agents#Collaboration Gap#Relay Inference#Agentic AI (November 9, 2025)
[Paper Review] TabDSR: Decompose, Sanitize, and Reason for Complex Numerical Reasoning in Tabular Data. A detailed review of the paper, posted on arXiv by Jin Zeng. #Review#Tabular Data#Numerical Reasoning#Large Language Models (LLMs)#Table Question Answering (TQA)#Program-of-Thoughts (PoT)#Data Sanitization#Query Decomposition#Multi-hop Reasoning (November 9, 2025)
[Paper Review] BRAINS: A Retrieval-Augmented System for Alzheimer's Detection and Monitoring. A detailed review of the paper, posted on arXiv. #Review#Alzheimer's Disease#Retrieval-Augmented Generation (RAG)#Large Language Models (LLMs)#Clinical Decision Support#Multimodal Data Fusion#Cognitive Decline Detection#Early Diagnosis (November 9, 2025)
[Paper Review] Towards Robust Mathematical Reasoning. A detailed review of the paper, posted on arXiv by Yuri Chervonyi. #Review#Mathematical Reasoning#Large Language Models (LLMs)#AI Benchmarks#International Mathematical Olympiad (IMO)#Proof Verification#Automatic Grading#Robustness (November 9, 2025)
[Paper Review] Data-Efficient RLVR via Off-Policy Influence Guidance. A detailed review of the paper, posted on arXiv by Jiale Cheng. #Review#Reinforcement Learning with Verifiable Rewards (RLVR)#Influence Functions#Data Selection#Off-Policy Learning#Curriculum Learning#Large Language Models (LLMs)#Sparse Random Projection#Data Efficiency (November 9, 2025)
[Paper Review] MisSynth: Improving MISSCI Logical Fallacies Classification with Synthetic Data. A detailed review of the paper, posted on arXiv by Nadiya Shvai. #Review#Health Misinformation#Logical Fallacy Classification#Synthetic Data Generation#Large Language Models (LLMs)#Retrieval-Augmented Generation (RAG)#Parameter-Efficient Fine-tuning (PEFT)#LoRA#MISSCI Benchmark (November 9, 2025)
[Paper Review] Limits of Generalization in RLVR: Two Case Studies in Mathematical Reasoning. A detailed review of the paper, posted on arXiv by Nidhi Rastogi. #Review#Reinforcement Learning with Verifiable Rewards (RLVR)#Mathematical Reasoning#Large Language Models (LLMs)#Activity Scheduling#Longest Increasing Subsequence (LIS)#Generalization Limits#Reward Design#Self-consistency (November 9, 2025)
[Paper Review] INT v.s. FP: A Comprehensive Study of Fine-Grained Low-bit Quantization Formats. A detailed review of the paper, posted on arXiv. #Review#Quantization#Low-bit Formats#Integer Quantization#Floating-Point Quantization#Large Language Models (LLMs)#Hardware Efficiency#Fine-Grained Quantization#MXINT8 (November 9, 2025)
[Paper Review] Continuous Autoregressive Language Models. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Continuous Representation#Autoencoder#Likelihood-Free Modeling#Energy-Based Models#Next-Vector Prediction#Computational Efficiency#Temperature Sampling (November 9, 2025)
[Paper Review] The End of Manual Decoding: Towards Truly End-to-End Language Models. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#End-to-End Generation#Dynamic Decoding#Hyperparameter Optimization#Stochastic Sampling#Instruction Following#Transformer Architecture (October 31, 2025)
[Paper Review] OmniLayout: Enabling Coarse-to-Fine Learning with LLMs for Universal Document Layout Generation. A detailed review of the paper, posted on arXiv by Bin Wang. #Review#Document Layout Generation#Large Language Models (LLMs)#Coarse-to-Fine Learning#Dataset Curation#OmniLayout-1M#Document AI#Generative Models (October 31, 2025)
[Paper Review] Magentic Marketplace: An Open-Source Environment for Studying Agentic Markets. A detailed review of the paper, posted on arXiv. #Review#Agentic Markets#Multi-Agent Systems#Large Language Models (LLMs)#Simulation Environment#Open-Source Platform#Market Mechanism Design#Behavioral Biases#Manipulation Resistance (October 31, 2025)
[Paper Review] Evolving Diagnostic Agents in a Virtual Clinical Environment. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Diagnostic Agents#Reinforcement Learning (RL)#Virtual Clinical Environment#Medical AI#Multi-turn Diagnosis#EHR (Electronic Health Records) (October 30, 2025)
[Paper Review] ChronoPlay: A Framework for Modeling Dual Dynamics and Authenticity in Game RAG Benchmarks. A detailed review of the paper, posted on arXiv. #Review#Retrieval Augmented Generation (RAG)#Dynamic Benchmarks#Game AI#User Interest Drift#Knowledge Evolution#Automated Benchmark Generation#Authenticity#Large Language Models (LLMs) (October 30, 2025)
[Paper Review] BhashaBench V1: A Comprehensive Benchmark for the Quadrant of Indic Domains. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Benchmark#Indic Languages#Multilingual Evaluation#Domain-Specific AI#India-centric Knowledge Systems#Zero-Shot Learning#Question Answering (October 30, 2025)
[Paper Review] Generalization or Memorization: Dynamic Decoding for Mode Steering. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Generalization#Memorization#Information Bottleneck (IB)#Activation Steering#Decoding Strategy#Causal Intervention#LLM Reliability (October 29, 2025)
[Paper Review] FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling. A detailed review of the paper, posted on arXiv. #Review#Function Calling#Multi-Turn Interaction#Large Language Models (LLMs)#Data Synthesis#Agentic AI#Tool Use#Chain-of-Thought (CoT)#Reinforcement Learning (October 29, 2025)
[Paper Review] Sparser Block-Sparse Attention via Token Permutation. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Self-Attention#Block-Sparse Attention#Token Permutation#Computational Efficiency#Prefilling#Long Context#Causal Attention (October 27, 2025)
[Paper Review] ComProScanner: A multi-agent based framework for composition-property structured data extraction from scientific literature. A detailed review of the paper, posted on arXiv. #Review#Multi-agent Systems#Large Language Models (LLMs)#Information Extraction#Scientific Literature#Materials Science#Data Curation#Piezoelectric Materials#RAG (Retrieval-Augmented Generation) (October 24, 2025)
[Paper Review] Learning from the Best, Differently: A Diversity-Driven Rethinking on Data Selection. A detailed review of the paper, posted on arXiv by Yi Cheng. #Review#Data Selection#Large Language Models (LLMs)#Data Diversity#Data Quality#Principal Component Analysis (PCA)#Orthogonal Dimensions#Pre-training (October 23, 2025)
[Paper Review] AlphaOPT: Formulating Optimization Programs with Self-Improving LLM Experience Library. A detailed review of the paper, posted on arXiv by Chonghe Jiang. #Review#Optimization Modeling#Large Language Models (LLMs)#Experience Library#Self-Improving Systems#Continual Learning#Out-of-Distribution Generalization#Operations Research#Knowledge Representation (October 23, 2025)
[Paper Review] Executable Knowledge Graphs for Replicating AI Research. A detailed review of the paper, posted on arXiv. #Review#AI Research Replication#Large Language Models (LLMs)#Knowledge Graphs (KGs)#Executable Code Generation#Retrieval-Augmented Generation (RAG)#PaperBench#Automated AI Research (October 21, 2025)
[Paper Review] Rewiring Experts on the Fly: Continuous Rerouting for Better Online Adaptation in Mixture-of-Expert models. A detailed review of the paper, posted on arXiv by Shiwei Liu. #Review#Mixture-of-Experts (MoE)#Online Adaptation#Test-Time Adaptation (TTA)#Expert Routing#Large Language Models (LLMs)#Self-Supervision#Computational Efficiency#Context Shift Robustness (October 20, 2025)
[Paper Review] ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models. A detailed review of the paper, posted on arXiv by Sean O'Brien. #Review#Multi-turn Conversation#Large Language Models (LLMs)#Context Management#Entropy-guided Resetting#Uncertainty Quantification#Performance Degradation#Prompt Engineering#Conversational AI (October 20, 2025)
[Paper Review] MoM: Mixtures of Scenario-Aware Document Memories for Retrieval-Augmented Generation Systems. A detailed review of the paper, posted on arXiv by Feiyu Xiong. #Review#Retrieval-Augmented Generation (RAG)#Document Memory#Text Chunking#Small Language Models (SLMs)#Large Language Models (LLMs)#Scenario-Aware Processing#Multi-Layer Retrieval#Cognitive Simulation (October 17, 2025)
[Paper Review] Stronger Together: On-Policy Reinforcement Learning for Collaborative LLMs. A detailed review of the paper, posted on arXiv by Hao Zhang. #Review#Large Language Models (LLMs)#Reinforcement Learning (RL)#Multi-Agent Systems (MAS)#On-Policy RL#Collaborative AI#Agentic LLMs#Group-based Optimization (October 16, 2025)
[Paper Review] Reasoning in Space via Grounding in the World. A detailed review of the paper, posted on arXiv by Li Zhang. #Review#3D Visual Grounding#Spatial Reasoning#Large Language Models (LLMs)#Chain-of-Thought (CoT)#Hybrid Representation#Multi-modal LLMs#Point Clouds (October 16, 2025)
[Paper Review] MATH-Beyond: A Benchmark for RL to Expand Beyond the Base Model. A detailed review of the paper, posted on arXiv by Wieland Brendel. #Review#Reinforcement Learning (RL)#Mathematical Reasoning#Benchmark#Large Language Models (LLMs)#Exploration#Boundary Expansion#MATH-Beyond (October 16, 2025)
[Paper Review] SAIL-Embedding Technical Report: Omni-modal Embedding Foundation Model. A detailed review of the paper, posted on arXiv. #Review#Omni-modal Embedding#Multimodal Learning#Recommendation Systems#Hard Negative Mining#Contrastive Learning#Large Language Models (LLMs)#Data Balancing#Multitask Learning (October 15, 2025)
[Paper Review] LLM Reasoning for Machine Translation: Synthetic Data Generation over Thinking Tokens. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Machine Translation (MT)#Chain-of-Thought (CoT)#Knowledge Distillation#Fine-tuning#Prompt Engineering#Synthetic Data (October 15, 2025)
[Paper Review] DITING: A Multi-Agent Evaluation Framework for Benchmarking Web Novel Translation. A detailed review of the paper, posted on arXiv. #Review#Machine Translation Evaluation#Large Language Models (LLMs)#Web Novel Translation#Multi-Agent Systems#Cultural Nuance#Benchmark Dataset#Natural Language Generation (October 15, 2025)
[Paper Review] Which Heads Matter for Reasoning? RL-Guided KV Cache Compression. A detailed review of the paper, posted on arXiv by Huan Wang. #Review#KV Cache Compression#Large Language Models (LLMs)#Reinforcement Learning (RL)#Reasoning Models#Attention Heads#Chain-of-Thought (CoT)#Memory Efficiency (October 13, 2025)
[Paper Review] Webscale-RL: Automated Data Pipeline for Scaling RL Data to Pretraining Levels. A detailed review of the paper, posted on arXiv. #Review#Reinforcement Learning (RL)#Large Language Models (LLMs)#Data Pipeline#Web-scale Data#Question-Answering (QA)#Data Generation#Data Diversity#Data Efficiency (October 13, 2025)
[Paper Review] Bridging Reasoning to Learning: Unmasking Illusions using Complexity Out of Distribution Generalization. A detailed review of the paper, posted on arXiv by Mahdi Ghaznavai. #Review#Complexity OoD Generalization#System-1 Thinking#System-2 Reasoning#Kolmogorov Complexity#Inductive Biases#Large Language Models (LLMs)#Reasoning Evaluation (October 13, 2025)
[Paper Review] Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense. A detailed review of the paper, posted on arXiv. #Review#Reinforcement Learning#Reward Modeling#Large Language Models (LLMs)#Mathematical Reasoning#Sparse Rewards#Dense Rewards#Hybrid Reinforcement#Verifier-based Rewards (October 10, 2025)
[Paper Review] First Try Matters: Revisiting the Role of Reflection in Reasoning Models. A detailed review of the paper, posted on arXiv by Wee Sun Lee. #Review#Large Language Models (LLMs)#Reasoning#Chain-of-Thought (CoT)#Reflection#Early Stopping#Supervised Fine-tuning (SFT)#Token Efficiency#Mathematical Reasoning (October 10, 2025)
[Paper Review] Native Hybrid Attention for Efficient Sequence Modeling. A detailed review of the paper, posted on arXiv by Yu Cheng. #Review#Sequence Modeling#Hybrid Attention#Transformer Architecture#Linear Attention#Sliding Window Attention#Long Context#Large Language Models (LLMs)#Efficiency (October 9, 2025)
[Paper Review] Multi-Agent Tool-Integrated Policy Optimization. A detailed review of the paper, posted on arXiv by Lidong Bing. #Review#Multi-Agent RL#Tool-Integrated Planning#Large Language Models (LLMs)#Policy Optimization#Credit Assignment#Reinforcement Learning#MATPO (October 9, 2025)
[Paper Review] Cache-to-Cache: Direct Semantic Communication Between Large Language Models. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Inter-model Communication#KV-Cache#Semantic Transfer#Multi-LLM Systems#Cache Fusion#Latency Reduction#Knowledge Sharing (October 9, 2025)
[Paper Review] In-the-Flow Agentic System Optimization for Effective Planning and Tool Use. A detailed review of the paper, posted on arXiv. #Review#Agentic Systems#Large Language Models (LLMs)#Tool Use#Reinforcement Learning (RL)#On-policy Optimization#Flow-based Group Refined Policy Optimization (Flow-GRPO)#Multi-turn Reasoning (October 8, 2025)
[Paper Review] Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training. A detailed review of the paper, posted on arXiv. #Review#Reinforcement Learning (RL)#Large Language Models (LLMs)#Adaptive Sampling#Policy Gradient#Reward Optimization#Signal Collapse#Variance Reduction (October 7, 2025)
[Paper Review] Scaling Policy Compliance Assessment in Language Models with Policy Reasoning Traces. A detailed review of the paper, posted on arXiv. #Review#Policy Compliance#Large Language Models (LLMs)#Reasoning Traces#In-Context Learning (ICL)#Supervised Finetuning (SFT)#HIPAA#GDPR#ModelSpec (October 6, 2025)
[Paper Review] Knapsack RL: Unlocking Exploration of LLMs via Optimizing Budget Allocation. A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Reinforcement Learning (RL)#Exploration Budget Allocation#Knapsack Problem#Group Relative Policy Optimization (GRPO)#Mathematical Reasoning#Resource Optimization (October 2, 2025)
[Paper Review] DeepSearch: Overcome the Bottleneck of Reinforcement Learning with Verifiable Rewards via Monte Carlo Tree Search. A detailed review of the paper, posted on arXiv. #Review#Reinforcement Learning with Verifiable Rewards (RLVR)#Monte Carlo Tree Search (MCTS)#Mathematical Reasoning#Large Language Models (LLMs)#Systematic Exploration#Adaptive Training#Tree-GRPO (October 2, 2025)
[Paper Review] Beyond Log Likelihood: Probability-Based Objectives for Supervised Fine-Tuning across the Model Capability Continuum. A detailed review of the paper, posted on arXiv by Hanghang Tong. #Review#Supervised Fine-tuning (SFT)#Large Language Models (LLMs)#Training Objectives#Negative Log Likelihood (NLL)#Model Capability Continuum#Generalization#Probability-based Loss Functions (October 2, 2025)
[Paper Review] d^2Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching. A detailed review of the paper, posted on arXiv by Jiarui Wang. #Review#Diffusion Models#Large Language Models (LLMs)#Inference Acceleration#KV Cache#Bidirectional Attention#Adaptive Caching#Token Selection (October 1, 2025)
[Paper Review] OffTopicEval: When Large Language Models Enter the Wrong Chat, Almost Always! A detailed review of the paper, posted on arXiv. #Review#Large Language Models (LLMs)#Operational Safety#Out-of-Domain (OOD)#Prompt Steering#Jailbreak Attacks#Evaluation Benchmark#Refusal Rate (October 1, 2025)
[Paper Review] ReviewScore: Misinformed Peer Review Detection with Large Language Models. A detailed review of the paper, posted on arXiv. #Review#Peer Review#Review Quality#Large Language Models (LLMs)#Misinformed Review#Argument Reconstruction#Factuality Evaluation#Natural Language Processing#Automated Evaluation (September 29, 2025)
[Paper Review] Thinking While Listening: Simple Test Time Scaling For Audio Classification. A detailed review of the paper, posted on arXiv by Mert Pilanci. #Review#Audio Classification#Test-Time Scaling#Reasoning Traces#Large Language Models (LLMs)#Transformer Architectures#Zero-shot Reasoning#Computational Efficiency (September 26, 2025)
[Paper Review] Thinking Augmented Pre-training. A detailed review of the paper, posted on arXiv by Furu Wei. #Review#Large Language Models (LLMs)#Pre-training#Data Augmentation#Reasoning#Data Efficiency#Thinking Trajectories (September 26, 2025)
[Paper Review] Analyzing the Effects of Supervised Fine-Tuning on Model Knowledge from Token and Parameter Levels. A detailed review of the paper, posted on arXiv by Qi Zhang. #Review#Supervised Fine-Tuning (SFT)#Large Language Models (LLMs)#Model Knowledge#Closed-Book Question Answering (CBQA)#Parameter Restoration#Kullback-Leibler Divergence#Knowledge Forgetting (September 23, 2025)
[Paper Review] Video2Roleplay: A Multimodal Dataset and Framework for Video-Guided Role-playing Agents. A detailed review of the paper, posted on arXiv by Chao Zhang. #Review#Role-playing Agents (RPAs)#Multimodal AI#Video Understanding#Large Language Models (LLMs)#Dataset Creation#Dynamic Role Profiles#Adaptive Temporal Sampling#Fine-tuning (September 22, 2025)
[Paper Review] MARS2 2025 Challenge on Multimodal Reasoning: Datasets, Methods, Results, Discussion, and Outlook. A detailed review of the paper, posted on arXiv by Bowen Zhou. #Review#Multimodal Reasoning#Large Language Models (LLMs)#Multimodal Large Language Models (MLLMs)#Visual Grounding#Visual Question Answering#Advertisement Video Analysis#Real-world Scenarios#Challenge Benchmark (September 18, 2025)
[Paper Review] Improving Context Fidelity via Native Retrieval-Augmented Reasoning. A detailed review of the paper, posted on arXiv by Xiangru Tang. #Review#Context Fidelity#Retrieval-Augmented Generation (RAG)#Large Language Models (LLMs)#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)#Hallucination#Question Answering#In-context Retrieval#Curriculum Learning (September 18, 2025)
[Paper Review] The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward. A detailed review of the paper, posted on arXiv by Xiaoyu Tan. #Review#Reinforcement Learning#Large Language Models (LLMs)#Diversity Collapse#f-divergence#Forward-KL#JS-divergence#Pass@k#Catastrophic Forgetting (September 12, 2025)
[Paper Review] WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents. A detailed review of the paper, posted on arXiv by Aili Chen. #Review#Web Agents#Long-Horizon Reasoning#Large Language Models (LLMs)#Data Generation#Reinforcement Learning (RL)#Supervised Fine-tuning (SFT)#Web Navigation#Information Retrieval (September 9, 2025)
[Paper Review] Bootstrapping Task Spaces for Self-Improvement. A detailed review of the paper, posted on arXiv by Yoram Bachrach. #Review#Reinforcement Learning (RL)#Large Language Models (LLMs)#Self-Improvement#Autocurriculum#Task-Space Exploration#Inference-Time Iteration#Policy Optimization (September 8, 2025)
[Paper Review] Towards a Unified View of Large Language Model Post-Training. A detailed review of the paper, posted on arXiv by Hongyi Liu. #Review#Large Language Models (LLMs)#Post-Training#Reinforcement Learning (RL)#Supervised Fine-Tuning (SFT)#Policy Gradient#Unified Framework#Hybrid Algorithms#Bias-Variance Tradeoff (September 5, 2025)
[Paper Review] NER Retriever: Zero-Shot Named Entity Retrieval with Type-Aware Embeddings. A detailed review of the paper, posted on arXiv by Oren Glickman. #Review#Named Entity Retrieval#Zero-Shot Learning#Type-Aware Embeddings#Large Language Models (LLMs)#Contrastive Learning#Internal Representations#Information Retrieval (September 5, 2025)
[Paper Review] Attributes as Textual Genes: Leveraging LLMs as Genetic Algorithm Simulators for Conditional Synthetic Data Generation. A detailed review of the paper, posted on arXiv by Xiaolei Huang. #Review#Synthetic Data Generation#Large Language Models (LLMs)#Genetic Algorithms#Textual Data Augmentation#Active Learning#NLP#Data Diversity (September 3, 2025)
[Paper Review] T2R-bench: A Benchmark for Generating Article-Level Reports from Real World Industrial Tables. A detailed review of the paper, posted on arXiv by Yu Zhao. #Review#Table-to-Report Generation#Large Language Models (LLMs)#Benchmark Dataset#Industrial Applications#Table Reasoning#Evaluation Metrics#Real-world Data (September 2, 2025)
[Paper Review] PVPO: Pre-Estimated Value-Based Policy Optimization for Agentic Reasoning. A detailed review of the paper, posted on arXiv by Yuewei Zhang. #Review#Reinforcement Learning#Critic-Free RL#Agentic Reasoning#Policy Optimization#Large Language Models (LLMs)#Advantage Estimation#Group Sampling#Static Value Estimation (September 2, 2025)
[Paper Review] Persuasion Dynamics in LLMs: Investigating Robustness and Adaptability in Knowledge and Safety with DuET-PD. A detailed review of the paper, posted on arXiv by Roy Ka-Wei Lee. #Review#Persuasion Dynamics#Large Language Models (LLMs)#Robustness#Gullibility#Receptiveness#Direct Preference Optimization (DPO)#Safety Alignment#Multi-turn Dialogue (August 29, 2025)
[Paper Review] OnGoal: Tracking and Visualizing Conversational Goals in Multi-Turn Dialogue with Large Language Models. A detailed review of the paper, posted on arXiv by Alex Endert. #Review#Large Language Models (LLMs)#Human-Computer Interaction (HCI)#Conversational AI#Goal Tracking#Visualization#Multi-Turn Dialogue#User Interface Design#Sensemaking (August 29, 2025)
[Paper Review] Spacer: Towards Engineered Scientific Inspiration. A detailed review of the paper, posted on arXiv by zerojun48. #Review#Scientific Discovery#Large Language Models (LLMs)#Decontextualization#Keyword Graph#Multi-Agent System#Scientific Ideation#Research Automation#Inspiration Engine (August 27, 2025)
[Paper Review] LiveMCP-101: Stress Testing and Diagnosing MCP-enabled Agents on Challenging Queries. A detailed review of the paper, posted on arXiv by huuuyeah. #Review#AI Agents#Tool Use#Model Context Protocol (MCP)#Benchmarking#Large Language Models (LLMs)#Real-world Tasks#Evaluation#Error Analysis (August 22, 2025)
[Paper Review] Leveraging Large Language Models for Predictive Analysis of Human Misery. A detailed review of the paper, posted on arXiv by Abhilash Nandy. #Review#Large Language Models (LLMs)#Affective Computing#Misery Score Prediction#Prompt Engineering#Few-shot Learning#Gamified Evaluation#Feedback-driven Adaptation (August 20, 2025)
[Paper Review] TopXGen: Topic-Diverse Parallel Data Generation for Low-Resource Machine Translation. A detailed review of the paper, posted on arXiv by Rachel Bawden. #Review#Low-Resource MT#Data Augmentation#Large Language Models (LLMs)#Back-Translation#In-Context Learning (ICL)#Fine-Tuning#Topic-Guided Generation#Parallel Data Synthesis (August 13, 2025)
[Paper Review] GeRe: Towards Efficient Anti-Forgetting in Continual Learning of LLM via General Samples Replay. A detailed review of the paper, posted on arXiv by Yang Fan. #Review#Continual Learning#Large Language Models (LLMs)#Catastrophic Forgetting#Replay#Knowledge Distillation#Activation States#Anti-forgetting#Threshold-based Margin Loss (August 13, 2025)
[Paper Review] Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments. A detailed review of the paper, posted on arXiv by Xuesong Yao. #Review#Large Language Models (LLMs)#Tool Use#Reinforcement Learning (RL)#Automated Environment Generation#Feedback-Driven Training#Reward Mechanism#Contextual Understanding (August 13, 2025)
[Paper Review] Tool-integrated Reinforcement Learning for Repo Deep Search. A detailed review of the paper, posted on arXiv by Yanzhen Zou. #Review#Issue Localization#Large Language Models (LLMs)#Reinforcement Learning (RL)#Supervised Fine-tuning (SFT)#Tool-integrated Agents#Software Engineering#Code Search (August 6, 2025)
[Paper Review] RecGPT Technical Report. A detailed review of the paper, posted on arXiv by Jian Wu. #Review#Recommender Systems#Large Language Models (LLMs)#User Intent Modeling#Multi-Stage Training#Human-in-the-Loop#E-commerce#Filter Bubble Mitigation#Matthew Effect (August 2, 2025)
[Paper Review] Persona Vectors: Monitoring and Controlling Character Traits in Language Models. A detailed review of the paper, posted on arXiv by Jack Lindsey. #Review#Large Language Models (LLMs)#Persona Control#Activation Steering#Finetuning#Behavioral Shift Detection#Interpretability#Data Filtering (August 2, 2025)