[Paper Review] Lost in Stories: Consistency Bugs in Long Story Generation by LLMs
A detailed review of the paper 'Lost in Stories: Consistency Bugs in Long Story Generation by LLMs', posted on arXiv by Hongzhi Li.
#Review #Large Language Models (LLMs) #Story Generation #Narrative Consistency #Benchmark #Automated Evaluation #Error Analysis #Long-Form Text Generation #Consistency Error Density (CED)
March 9, 2026
[Paper Review] InnoEval: On Research Idea Evaluation as a Knowledge-Grounded, Multi-Perspective Reasoning Problem
A detailed review of the paper 'InnoEval: On Research Idea Evaluation as a Knowledge-Grounded, Multi-Perspective Reasoning Problem', posted on arXiv.
#Review #Research Idea Evaluation #Large Language Models (LLMs) #Knowledge Grounding #Multi-Perspective Reasoning #Agent-based Systems #Scientific Discovery #Peer Review Simulation #Automated Evaluation
February 16, 2026
[Paper Review] Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind
A detailed review of the paper 'Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind', posted on arXiv by Yi R Fung.
#Review #Academic Rebuttal #Theory of Mind #Large Language Models #Strategic Persuasion #Reinforcement Learning #Self-Reward #Dataset Synthesis #Automated Evaluation
January 25, 2026
[Paper Review] DeepResearchEval: An Automated Framework for Deep Research Task Construction and Agentic Evaluation
A detailed review of the paper 'DeepResearchEval: An Automated Framework for Deep Research Task Construction and Agentic Evaluation', posted on arXiv.
#Review #Agentic AI #Deep Research Systems #Automated Evaluation #Task Construction #Fact-Checking #LLM Benchmarking #Adaptive Evaluation
January 14, 2026
[Paper Review] ReviewScore: Misinformed Peer Review Detection with Large Language Models
A detailed review of the paper 'ReviewScore: Misinformed Peer Review Detection with Large Language Models', posted on arXiv.
#Review #Peer Review #Review Quality #Large Language Models (LLMs) #Misinformed Review #Argument Reconstruction #Factuality Evaluation #Natural Language Processing #Automated Evaluation
September 29, 2025
[Paper Review] FlashAdventure: A Benchmark for GUI Agents Solving Full Story Arcs in Diverse Adventure Games
A detailed review of the paper 'FlashAdventure: A Benchmark for GUI Agents Solving Full Story Arcs in Diverse Adventure Games', posted on arXiv by Dongmin Park.
#Review #GUI Agents #Adventure Games #Benchmark #Full Story Arc #Observation-Behavior Gap #LLMs #Automated Evaluation
September 3, 2025
[Paper Review] DeepScholar-Bench: A Live Benchmark and Automated Evaluation for Generative Research Synthesis
A detailed review of the paper 'DeepScholar-Bench: A Live Benchmark and Automated Evaluation for Generative Research Synthesis', posted on arXiv by Ion Stoica.
#Review #Generative Research Synthesis #Live Benchmark #Automated Evaluation #LLM-as-a-judge #Related Work Generation #Retrieval-Augmented Generation #Verifiability
August 28, 2025
[Paper Review] Hop, Skip, and Overthink: Diagnosing Why Reasoning Models Fumble during Multi-Hop Analysis
A detailed review of the paper 'Hop, Skip, and Overthink: Diagnosing Why Reasoning Models Fumble during Multi-Hop Analysis', posted on arXiv by Reshmi Ghosh.
#Review #Multi-hop Question Answering #Large Language Models #Reasoning Errors #Error Taxonomy #Human Evaluation #Automated Evaluation #Overthinking
August 8, 2025
[Paper Review] LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?
A detailed review of the paper 'LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?', posted on arXiv by Yaojie Lu.
#Review #LLM Agent #Tool-use #MCP #Benchmark #Large-scale #Real-world tasks #Automated Evaluation #Meta-tool-learning
August 6, 2025