[Paper Review] MMDeepResearch-Bench: A Benchmark for Multimodal Deep Research Agents
A detailed review of the paper 'MMDeepResearch-Bench: A Benchmark for Multimodal Deep Research Agents' by Samiul Alam, published on arXiv.
Tags: Review, Multimodal Deep Research, Research Agents, Benchmark, Evaluation Framework, Retrieval-Augmented Generation, Large Multimodal Models, Visual Grounding, Citation Analysis
January 21, 2026
[Paper Review] Memorization in 3D Shape Generation: An Empirical Study
A detailed review of the paper 'Memorization in 3D Shape Generation: An Empirical Study', published on arXiv.
Tags: Review, 3D Shape Generation, Memorization, Generative Models, Diffusion Models, Evaluation Framework, Generalization, Data Augmentation
January 8, 2026
[Paper Review] Does Understanding Inform Generation in Unified Multimodal Models? From Analysis to Path Forward
A detailed review of the paper 'Does Understanding Inform Generation in Unified Multimodal Models? From Analysis to Path Forward', published on arXiv.
Tags: Review, Unified Multimodal Models, Understanding-Generation Gap, Reasoning, Knowledge Transfer, Chain-of-Thought, Self-Training, Synthetic Data, Evaluation Framework
November 25, 2025
[Paper Review] Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations
A detailed review of the paper 'Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations' by Noam Koenigstein, published on arXiv.
Tags: Review, Saliency Maps, Explainable AI (XAI), Taxonomy, Evaluation Framework, Faithfulness Metrics, Contrastive Explanations, Granularity
November 23, 2025
[Paper Review] ReplicationBench: Can AI Agents Replicate Astrophysics Research Papers?
A detailed review of the paper 'ReplicationBench: Can AI Agents Replicate Astrophysics Research Papers?' by Ian L. V. Roque, published on arXiv.
Tags: Review, AI Agents, Astrophysics Research, Reproducibility Benchmark, Large Language Models, Scientific Workflow, Code Execution, Evaluation Framework
October 29, 2025
[Paper Review] StatEval: A Comprehensive Benchmark for Large Language Models in Statistics
A detailed review of the paper 'StatEval: A Comprehensive Benchmark for Large Language Models in Statistics', published on arXiv.
Tags: Review, Statistical Reasoning, LLM Benchmark, Statistics Education, Proof Verification, Multi-agent Pipeline, Automated Extraction, Evaluation Framework
October 13, 2025
[Paper Review] Are We Using the Right Benchmark: An Evaluation Framework for Visual Token Compression Methods
A detailed review of the paper 'Are We Using the Right Benchmark: An Evaluation Framework for Visual Token Compression Methods' by Yiyu Wang, published on arXiv.
Tags: Review, Visual Token Compression, MLLMs, Evaluation Framework, Benchmarking, Downsampling, Data Filtering, Model Efficiency
October 9, 2025
[Paper Review] AInstein: Assessing the Feasibility of AI-Generated Approaches to Research Problems
A detailed review of the paper 'AInstein: Assessing the Feasibility of AI-Generated Approaches to Research Problems' by Jose Dolz, published on arXiv.
Tags: Review, LLM, Scientific Problem Solving, AI Research, Iterative Refinement, Autonomous Agents, Generative AI, Evaluation Framework, Problem Extraction
October 8, 2025
[Paper Review] HiKE: Hierarchical Evaluation Framework for Korean-English Code-Switching Speech Recognition
A detailed review of the paper 'HiKE: Hierarchical Evaluation Framework for Korean-English Code-Switching Speech Recognition', published on arXiv.
Tags: Review, Code-Switching, Speech Recognition, Korean-English ASR, Evaluation Framework, Multilingual ASR, Loanword Processing, Fine-tuning, Hierarchical Labeling
October 7, 2025
[Paper Review] SurveyBench: How Well Can LLM(-Agents) Write Academic Surveys?
A detailed review of the paper 'SurveyBench: How Well Can LLM(-Agents) Write Academic Surveys?' by Shuo Wang, published on arXiv.
Tags: Review, LLM, LLM Agents, Academic Survey Generation, Evaluation Framework, Benchmark, Quiz-driven Evaluation, Content Quality Metrics
October 6, 2025
[Paper Review] Stable Cinemetrics: Structured Taxonomy and Evaluation for Professional Video Generation
A detailed review of the paper 'Stable Cinemetrics: Structured Taxonomy and Evaluation for Professional Video Generation', published on arXiv.
Tags: Review, Video Generation, Evaluation Framework, Cinematic Control, Taxonomy, Human Annotation, Vision-Language Models, Text-to-Video
October 1, 2025
[Paper Review] BESPOKE: Benchmark for Search-Augmented Large Language Model Personalization via Diagnostic Feedback
A detailed review of the paper 'BESPOKE: Benchmark for Search-Augmented Large Language Model Personalization via Diagnostic Feedback' by Dongha Lee, published on arXiv.
Tags: Review, Search-Augmented LLMs, Personalization, Benchmark, Diagnostic Feedback, User History, Evaluation Framework, RAG
September 26, 2025
[Paper Review] VStyle: A Benchmark for Voice Style Adaptation with Spoken Instructions
A detailed review of the paper 'VStyle: A Benchmark for Voice Style Adaptation with Spoken Instructions' by Dong Zhang, published on arXiv.
Tags: Review, Voice Style Adaptation, Spoken Language Models, Benchmark, LALM-as-a-Judge, Speech Generation, Multilingual, Evaluation Framework
September 15, 2025
[Paper Review] Can Large Multimodal Models Actively Recognize Faulty Inputs? A Systematic Evaluation Framework of Their Input Scrutiny Ability
A detailed review of the paper 'Can Large Multimodal Models Actively Recognize Faulty Inputs? A Systematic Evaluation Framework of Their Input Scrutiny Ability' by Yuan Wu, published on arXiv.
Tags: Review, Large Multimodal Models, Input Scrutiny, Error Detection, Faulty Inputs, Evaluation Framework, Modality Preference, Cross-Modal Inconsistency
August 8, 2025