[Paper Review] XpertBench: Expert Level Tasks with Rubrics-Based Evaluation. This paper proposes XpertBench, a framework of 1,346 professional tasks built with the participation of more than 1,000 practicing domain experts. To ensure evaluation reliability, each task follows a rubric of 15–40 weighted atomic checkpoints, and the ShotJudge paradigm is introduced to score them. #Review #XpertBench #LLM Evaluation #Expert-level Cognition #Rubrics-based Assessment #ShotJudge #Ecological Validity April 5, 2026
[Paper Review] RubricBench: Aligning Model-Generated Rubrics with Human Standards. A detailed review of this paper, posted on arXiv. #Review #LLM Evaluation #Reward Models #Rubric-Guided Evaluation #Benchmarks #Model Alignment #Human Standards #Cognitive Misalignment March 2, 2026
[Paper Review] LongCLI-Bench: A Preliminary Benchmark and Study for Long-horizon Agentic Programming in Command-Line Interfaces. A detailed review of this paper, posted on arXiv by Chuanhao Li. #Review #Agentic Programming #CLI #Benchmark #Long-horizon Tasks #Code Generation #LLM Evaluation #Human-Agent Collaboration #Software Engineering February 24, 2026
[Paper Review] Implicit Intelligence -- Evaluating Agents on What Users Don't Say. A detailed review of this paper, posted on arXiv by Marc Wetter. #Review #Implicit Intelligence #AI Agents #Agent-as-a-World #Contextual Reasoning #Safety #Privacy #Accessibility #LLM Evaluation February 24, 2026
[Paper Review] DREAM: Deep Research Evaluation with Agentic Metrics. A detailed review of this paper, posted on arXiv. #Review #Deep Research Evaluation #Agentic Evaluation #LLM Evaluation #Capability Parity #Factuality #Temporal Validity #Reasoning Quality #Research Agents #Mirage of Synthesis February 24, 2026
[Paper Review] EcoGym: Evaluating LLMs for Long-Horizon Plan-and-Execute in Interactive Economies. A detailed review of this paper, posted on arXiv by Yishuo Yuan. #Review #LLM Evaluation #Long-Horizon Planning #Interactive Economies #Benchmark #Agentic AI #Economic Simulation #Plan-and-Execute February 11, 2026
[Paper Review] Judging What We Cannot Solve: A Consequence-Based Approach for Oracle-Free Evaluation of Research-Level Math. A detailed review of this paper, posted on arXiv by Amit Agarwal. #Review #LLM Evaluation #Mathematical Reasoning #Oracle-Free Validation #Consequence-Based Utility #Solution Quality #In-Context Learning #Research-Level Math February 8, 2026
[Paper Review] Learning Query-Specific Rubrics from Human Preferences for DeepResearch Report Generation. A detailed review of this paper, posted on arXiv. #Review #DeepResearch #Rubric Generation #Human Preferences #Reinforcement Learning #Multi-agent Systems #LLM Evaluation #Reward Modeling February 3, 2026
[Paper Review] Wiki Live Challenge: Challenging Deep Research Agents with Expert-Level Wikipedia Articles. A detailed review of this paper, posted on arXiv. #Review #Deep Research Agents #LLM Evaluation #Wikipedia #Good Articles #Factuality #Writing Quality #Benchmark #Hallucinations #Verifiability February 2, 2026
[Paper Review] DSGym: A Holistic Framework for Evaluating and Training Data Science Agents. A detailed review of this paper, posted on arXiv by Yongchan Kwon. #Review #Data Science Agents #LLM Evaluation #Benchmark Framework #Execution-Grounded Training #Bioinformatics #Kaggle #Shortcut Filtering #Synthetic Data January 25, 2026
[Paper Review] Terminal-Bench: Benchmarking Agents on Hard, Realistic Tasks in Command Line Interfaces. A detailed review of this paper, posted on arXiv by Harsh Raj. #Review #AI Agents #LLM Evaluation #Benchmarking #Command Line Interface #Software Engineering #Realistic Tasks #Error Analysis January 22, 2026
[Paper Review] SciCoQA: Quality Assurance for Scientific Paper–Code Alignment. A detailed review of this paper, posted on arXiv. #Review #Reproducibility #Paper-Code Discrepancy #Code Alignment #LLM Evaluation #Synthetic Data Generation #Quality Assurance #Scientific Automation January 20, 2026
[Paper Review] KnowMe-Bench: Benchmarking Person Understanding for Lifelong Digital Companions. A detailed review of this paper, posted on arXiv by Chenglong Li. #Review #Person Understanding #Lifelong Digital Companions #Memory Benchmarking #Autobiographical Narratives #Cognitive Stream #Flashback Handling #LLM Evaluation #Hierarchical Reasoning January 13, 2026
[Paper Review] Agent-as-a-Judge. A detailed review of this paper, posted on arXiv by Meng Liu. #Review #Agent-as-a-Judge #LLM Evaluation #Multi-Agent Systems #Tool Integration #AI Alignment #Automated Assessment #Survey January 8, 2026
[Paper Review] EpiQAL: Benchmarking Large Language Models in Epidemiological Question Answering for Enhanced Alignment and Reasoning. A detailed review of this paper, posted on arXiv by Guanchen Wu. #Review #Epidemiological Question Answering #Large Language Models #Benchmark #Multi-step Inference #Evidence Grounding #LLM Evaluation #Public Health AI #Chain-of-Thought January 7, 2026
[Paper Review] COMPASS: A Framework for Evaluating Organization-Specific Policy Alignment in LLMs. A detailed review of this paper, posted on arXiv. #Review #LLM Evaluation #Policy Alignment #Organizational Policies #AI Safety #Adversarial Robustness #Refusal Behavior #Prompt Engineering #Fine-tuning January 5, 2026
[Paper Review] InfoSynth: Information-Guided Benchmark Synthesis for LLMs. A detailed review of this paper, posted on arXiv. #Review #Benchmark Synthesis #LLM Evaluation #Code Generation #Information Theory #Genetic Algorithms #Novelty Metrics #Diversity Metrics January 4, 2026
[Paper Review] LLM Swiss Round: Aggregating Multi-Benchmark Performance via Competitive Swiss-System Dynamics. A detailed review of this paper, posted on arXiv. #Review #LLM Evaluation #Competitive Ranking #Swiss-System #Monte Carlo Simulation #Failure Sensitivity Analysis #Robustness #Multi-Benchmark December 24, 2025
[Paper Review] The FACTS Leaderboard: A Comprehensive Benchmark for Large Language Model Factuality. A detailed review of this paper, posted on arXiv. #Review #LLM Evaluation #Factuality Benchmark #Multimodal AI #Knowledge Grounding #Parametric Knowledge #Retrieval Augmented Generation #Automated Scoring December 11, 2025
[Paper Review] IndicParam: Benchmark to evaluate LLMs on low-resource Indic Languages. A detailed review of this paper, posted on arXiv. #Review #Low-resource Languages #Indic Languages #LLM Evaluation #Benchmark #Multilingual LLMs #Question Answering #Cross-lingual Transfer December 1, 2025
[Paper Review] From Proof to Program: Characterizing Tool-Induced Reasoning Hallucinations in Large Language Models. A detailed review of this paper, posted on arXiv. #Review #Tool-augmented LLMs #Reasoning Hallucinations #Tool-Induced Myopia (TIM) #Code Interpreter #Mathematical Reasoning #LLM Evaluation #Preference Optimization November 16, 2025
[Paper Review] DiscoX: Benchmarking Discourse-Level Translation task in Expert Domains. A detailed review of this paper, posted on arXiv. #Review #Discourse-Level Translation #Expert Domains #Benchmarking #LLM Evaluation #Reference-Free Metric #Chinese-English Translation #Contextual Coherence #Domain-Specific Terminology November 16, 2025
[Paper Review] ResearchRubrics: A Benchmark of Prompts and Rubrics For Evaluating Deep Research Agents. A detailed review of this paper, posted on arXiv. #Review #Deep Research Agents #LLM Evaluation #Benchmark #Rubrics #Multi-step Reasoning #Cross-document Synthesis #AI Performance #Task Complexity November 13, 2025
[Paper Review] LiveTradeBench: Seeking Real-World Alpha with Large Language Models. A detailed review of this paper, posted on arXiv by Jiaxuan You. #Review #LLM Evaluation #Live Trading #Portfolio Management #Financial AI #Prediction Markets #Real-World Uncertainty #Agent Benchmarking November 9, 2025
[Paper Review] LTD-Bench: Evaluating Large Language Models by Letting Them Draw. A detailed review of this paper, posted on arXiv. #Review #LLM Evaluation #Spatial Reasoning #Benchmark #Generative AI #Visual Perception #Spatial Imagination #Code Generation November 9, 2025
[Paper Review] AMO-Bench: Large Language Models Still Struggle in High School Math Competitions. A detailed review of this paper, posted on arXiv. #Review #LLM Evaluation #Mathematical Reasoning #Olympiad-level Math #Benchmark #Performance Saturation #Test-time Scaling #AMO-Bench October 31, 2025
[Paper Review] AstaBench: Rigorous Benchmarking of AI Agents with a Scientific Research Suite. A detailed review of this paper, posted on arXiv by Bhavana Dalvi. #Review #AI Agents #Benchmarking #Scientific Research #LLM Evaluation #Agentic AI #Tool Use #Reproducibility #Cost-Aware Evaluation October 27, 2025
[Paper Review] ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases. A detailed review of this paper, posted on arXiv by Nicholas Carlini. #Review #LLM Evaluation #Reward Hacking #Benchmark Reliability #Test Exploitation #Prompt Engineering #LLM Safety #Code Generation October 24, 2025
[Paper Review] ProfBench: Multi-Domain Rubrics requiring Professional Knowledge to Answer and Judge. A detailed review of this paper, posted on arXiv. #Review #LLM Evaluation #Rubric-based Benchmark #Professional Knowledge #Multi-domain Tasks #LLM-Judge Bias Mitigation #Cost Reduction #Reasoning Assessment #Open-weight Models October 23, 2025
[Paper Review] MorphoBench: A Benchmark with Difficulty Adaptive to Model Reasoning. A detailed review of this paper, posted on arXiv. #Review #LLM Evaluation #Reasoning Benchmark #Difficulty Adaptation #Multimodal AI #Proof Graph #Agent Recognition #Automated Question Generation October 20, 2025
[Paper Review] RefusalBench: Generative Evaluation of Selective Refusal in Grounded Language Models. A detailed review of this paper, posted on arXiv. #Review #RAG Systems #Selective Refusal #Generative Evaluation #Linguistic Perturbations #LLM Evaluation #Informational Uncertainty #Model Calibration #AI Safety October 17, 2025
[Paper Review] RAGCap-Bench: Benchmarking Capabilities of LLMs in Agentic Retrieval Augmented Generation Systems. A detailed review of this paper, posted on arXiv. #Review #Large Language Models #Retrieval Augmented Generation #Agentic Systems #Benchmarking #Intermediate Tasks #Error Analysis #LLM Evaluation October 17, 2025
[Paper Review] BigCodeArena: Unveiling More Reliable Human Preferences in Code Generation via Execution. A detailed review of this paper, posted on arXiv by Hange Liu. #Review #Code Generation #Human Preference #LLM Evaluation #Execution Feedback #Benchmarking #Crowdsourcing #Software Engineering #Large Language Models October 13, 2025
[Paper Review] BIRD-INTERACT: Re-imagining Text-to-SQL Evaluation for Large Language Models via Lens of Dynamic Interactions. A detailed review of this paper, posted on arXiv by Shipei Lin. #Review #Text-to-SQL #LLM Evaluation #Multi-turn Interaction #Dynamic Environment #User Simulator #Ambiguity Resolution #LLM Agents October 8, 2025
[Paper Review] Epistemic Diversity and Knowledge Collapse in Large Language Models. A detailed review of this paper, posted on arXiv. #Review #Large Language Models #Epistemic Diversity #Knowledge Collapse #Homogenization #Retrieval-Augmented Generation #LLM Evaluation #Information Diversity #Cultural Bias October 7, 2025
[Paper Review] Probing the Critical Point (CritPt) of AI Reasoning: a Frontier Physics Research Benchmark. A detailed review of this paper, posted on arXiv by Penghao Zhu. #Review #AI Reasoning #Physics Research #LLM Evaluation #Scientific Benchmark #Frontier Physics #Problem Solving #Model Reliability #Auto-grading October 1, 2025
[Paper Review] StyleBench: Evaluating thinking styles in Large Language Models. A detailed review of this paper, posted on arXiv by Javad Lavaei. #Review #Large Language Models #Reasoning Strategies #Prompt Engineering #LLM Evaluation #Benchmark #Thinking Styles #Scaling Laws #Meta-Reasoning September 26, 2025
[Paper Review] FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions. A detailed review of this paper, posted on arXiv by tengdai722. #Review #Large Reasoning Models #LLM Evaluation #Multimodal AI #Reasoning Behaviors #Hallucination #Contamination-Free #AI Safety #Instruction Following September 23, 2025
[Paper Review] DIWALI - Diversity and Inclusivity aWare cuLture specific Items for India: Dataset and Assessment of LLMs for Cultural Text Adaptation in Indian Context. A detailed review of this paper, posted on arXiv by Maunendra Sankar Desarkar. #Review #Cultural Adaptation #Large Language Models #Indian Culture #Dataset Creation #CSI #Human Evaluation #LLM Evaluation #Cultural Bias September 23, 2025
[Paper Review] Mind the Gap: A Closer Look at Tokenization for Multiple-Choice Question Answering with LLMs. A detailed review of this paper, posted on arXiv by Katharina von der Wense. #Review #LLM Evaluation #Multiple-Choice QA #Tokenization #Prompt Sensitivity #Accuracy #Calibration #Model Ranking September 19, 2025
[Paper Review] MCP-AgentBench: Evaluating Real-World Language Agent Performance with MCP-Mediated Tools. A detailed review of this paper, posted on arXiv by Xiaorui Wang. #Review #Language Agents #Tool Use #Benchmarks #Model Context Protocol (MCP) #LLM Evaluation #Agentic AI #Real-World Performance September 15, 2025
[Paper Review] HumanAgencyBench: Scalable Evaluation of Human Agency Support in AI Assistants. A detailed review of this paper, posted on arXiv by Jacy Reese Anthis. #Review #Human Agency #AI Assistants #LLM Evaluation #Benchmark #Sociotechnical AI #AI Alignment #Scalable Evaluation September 11, 2025
[Paper Review] On Robustness and Reliability of Benchmark-Based Evaluation of LLMs. A detailed review of this paper, posted on arXiv by Kevin Roitero. #Review #LLM Evaluation #Model Robustness #Benchmark Reliability #Paraphrasing #Linguistic Variability #Generalization #Question Answering September 8, 2025
[Paper Review] DeepResearch Arena: The First Exam of LLMs' Research Abilities via Seminar-Grounded Tasks. A detailed review of this paper, posted on arXiv by Jiaxuan Lu. #Review #LLM Evaluation #Research Agents #Benchmark #Multi-Agent System #Seminar-Grounded Tasks #Data Leakage Prevention #Ill-Structured Problems September 5, 2025
[Paper Review] A.S.E: A Repository-Level Benchmark for Evaluating Security in AI-Generated Code. A detailed review of this paper, posted on arXiv by Libo Chen. #Review #AI-Generated Code Security #LLM Evaluation #Repository-Level Benchmark #Code Security #Vulnerability Detection #Static Analysis #Reproducibility #Context-Awareness September 1, 2025
[Paper Review] ReportBench: Evaluating Deep Research Agents via Academic Survey Tasks. A detailed review of this paper, posted on arXiv by Kai Jia. #Review #Deep Research Agents #LLM Evaluation #Academic Survey #Factual Accuracy #Citation Verification #Report Generation #Benchmark #Hallucination August 27, 2025
[Paper Review] UQ: Assessing Language Models on Unsolved Questions. A detailed review of this paper, posted on arXiv by Wei Liu. #Review #LLM Evaluation #Unsolved Questions #AI Benchmark #Oracle-Free Validation #Generator-Validator Gap #Community Evaluation #Stack Exchange August 26, 2025
[Paper Review] InMind: Evaluating LLMs in Capturing and Applying Individual Human Reasoning Styles. A detailed review of this paper, posted on arXiv by Diping Song. #Review #LLM Evaluation #Human Reasoning Styles #Social Deduction Games #Theory of Mind #Adaptive Reasoning #Avalon Game #Cognitive Grounding August 25, 2025
[Paper Review] AetherCode: Evaluating LLMs' Ability to Win In Premier Programming Competitions. A detailed review of this paper, posted on arXiv by Yidi Du. #Review #Competitive Programming #LLM Evaluation #Code Reasoning #Benchmark #Test Case Generation #Programming Competitions #Algorithmic Problems August 25, 2025
[Paper Review] mSCoRe: a Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning. A detailed review of this paper, posted on arXiv by anoperson. #Review #Multilingual Benchmark #Commonsense Reasoning #LLM Evaluation #Reasoning Taxonomy #Benchmark Scaling #Data Synthesis #Cultural Nuances August 21, 2025
[Paper Review] From Scores to Skills: A Cognitive Diagnosis Framework for Evaluating Financial Large Language Models. A detailed review of this paper, posted on arXiv by Ziyan Kuang. #Review #Financial LLMs #Cognitive Diagnosis Model #LLM Evaluation #Knowledge Assessment #Matrix Factorization #CPA-QKA #Interpretability August 21, 2025
[Paper Review] HeroBench: A Benchmark for Long-Horizon Planning and Structured Reasoning in Virtual Worlds. A detailed review of this paper, posted on arXiv by Artyom Sorokin. #Review #Long-Horizon Planning #Structured Reasoning #LLM Evaluation #Virtual Worlds #RPG #Benchmark #Agent Systems #Combat Simulation August 19, 2025
[Paper Review] Democratizing Diplomacy: A Harness for Evaluating Any Large Language Model on Full-Press Diplomacy. A detailed review of this paper, posted on arXiv by Elizabeth Karpinski. #Review #Large Language Models #Diplomacy Game #Multi-agent Systems #Strategic Reasoning #LLM Evaluation #Prompt Engineering #Behavioral Analysis #Game AI August 13, 2025
[Paper Review] UserBench: An Interactive Gym Environment for User-Centric Agents. A detailed review of this paper, posted on arXiv by Jianguo Zhang. #Review #User-Centric AI #LLM Evaluation #Interactive Agents #Gym Environment #Preference Elicitation #Multi-turn Dialogue #Tool Use August 12, 2025
[Paper Review] Are Today's LLMs Ready to Explain Well-Being Concepts? A detailed review of this paper, posted on arXiv by Huan Liu. #Review #Large Language Models #Well-being Concepts #LLM Evaluation #Principle-Guided Evaluation #LLM-as-a-Judge #Supervised Fine-Tuning (SFT) #Direct Preference Optimization (DPO) #Explanation Generation August 8, 2025
[Paper Review] CompassVerifier: A Unified and Robust Verifier for LLMs Evaluation and Outcome Reward. A detailed review of this paper, posted on arXiv by Songyang Gao. #Review #LLM Evaluation #Answer Verification #Reward Model #Benchmarking #Data Augmentation #Reinforcement Learning #Formula Verification #Hallucination Detection August 6, 2025
[Paper Review] C3: A Bilingual Benchmark for Spoken Dialogue Models Exploring Challenges in Complex Conversations. A detailed review of this paper, posted on arXiv by Yiwen Guo. #Review #Spoken Dialogue Models #Bilingual Benchmark #Complex Conversations #Ambiguity Resolution #Context Understanding #LLM Evaluation #Human-Computer Interaction August 2, 2025