[Paper Review] Learning to Learn-at-Test-Time: Language Agents with Learnable Adaptation Policies. A detailed review of the paper 'Learning to Learn-at-Test-Time: Language Agents with Learnable Adaptation Policies', published on arXiv. #Review #Test-Time Learning #Language Agents #Meta-Learning #Evolutionary Optimization #Adaptive Policy #LLM Agents #Prompt Engineering. April 6, 2026
[Paper Review] BrandFusion: A Multi-Agent Framework for Seamless Brand Integration in Text-to-Video Generation. A detailed review of the paper 'BrandFusion: A Multi-Agent Framework for Seamless Brand Integration in Text-to-Video Generation', published on arXiv. #Review #Text-to-Video Generation #Multi-Agent System #Brand Integration #Prompt Engineering #Large Language Models (LLMs) #LoRA Fine-tuning #Contextual Adaptation. March 10, 2026
[Paper Review] How Controllable Are Large Language Models? A Unified Evaluation across Behavioral Granularities. A detailed review of the paper 'How Controllable Are Large Language Models? A Unified Evaluation across Behavioral Granularities', published on arXiv. #Review #Large Language Models (LLMs) #Controllability #Hierarchical Benchmark #Behavioral Granularity #Model Steering #Prompt Engineering #Activation-based Steering. March 3, 2026
[Paper Review] Model Context Protocol (MCP) Tool Descriptions Are Smelly! Towards Improving AI Agent Efficiency with Augmented MCP Tool Descriptions. A detailed review of the paper 'Model Context Protocol (MCP) Tool Descriptions Are Smelly! Towards Improving AI Agent Efficiency with Augmented MCP Tool Descriptions', posted on arXiv by Ahmed E. Hassan. #Review #Model Context Protocol #AI Agents #Tool Descriptions #Software Smells #Prompt Engineering #Foundation Models #Performance Evaluation #Ablation Study. February 25, 2026
[Paper Review] Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models. A detailed review of the paper 'Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models', published on arXiv. #Review #Reinforcement Learning #Large Language Models #Prompt Engineering #Compositional Generalization #Verifiable Rewards #Curriculum Learning #Mathematical Reasoning #Multi-task Learning. February 12, 2026
[Paper Review] Everything in Its Place: Benchmarking Spatial Intelligence of Text-to-Image Models. A detailed review of the paper 'Everything in Its Place: Benchmarking Spatial Intelligence of Text-to-Image Models', published on arXiv. #Review #Text-to-Image Models #Spatial Intelligence #Benchmark #Evaluation #Prompt Engineering #Multimodal LLMs #Fine-tuning #Spatial Reasoning. January 29, 2026
[Paper Review] Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization. A detailed review of the paper 'Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization', posted on arXiv by Gabriele Bavota. #Review #Large Language Models #Code Generation #Prompt Engineering #Prompt Optimization #Empirical Study #Software Engineering #Guidelines. January 25, 2026
[Paper Review] Lost in the Prompt Order: Revealing the Limitations of Causal Attention in Language Models. A detailed review of the paper 'Lost in the Prompt Order: Revealing the Limitations of Causal Attention in Language Models', published on arXiv. #Review #Prompt Engineering #Large Language Models #Causal Attention #Multiple-Choice QA #Prompt Order Sensitivity #Information Bottleneck #Decoder-only Transformers. January 21, 2026
[Paper Review] Are LLMs Vulnerable to Preference-Undermining Attacks (PUA)? A Factorial Analysis Methodology for Diagnosing the Trade-off between Preference Alignment and Real-World Validity. A detailed review of the paper 'Are LLMs Vulnerable to Preference-Undermining Attacks (PUA)? A Factorial Analysis Methodology for Diagnosing the Trade-off between Preference Alignment and Real-World Validity', posted on arXiv by Chi Zhang. #Review #Large Language Models #Preference Alignment #Preference-Undermining Attacks #Factorial Analysis #Sycophancy #Prompt Engineering #Truth-Deference Trade-off. January 14, 2026
[Paper Review] Steerability of Instrumental-Convergence Tendencies in LLMs. A detailed review of the paper 'Steerability of Instrumental-Convergence Tendencies in LLMs', posted on arXiv by j-hoscilowic. #Review #LLM Steerability #Instrumental Convergence #AI Safety #AI Security #Open-Weight Models #Prompt Engineering #Model Control #Behavioral Alignment. January 6, 2026
[Paper Review] COMPASS: A Framework for Evaluating Organization-Specific Policy Alignment in LLMs. A detailed review of the paper 'COMPASS: A Framework for Evaluating Organization-Specific Policy Alignment in LLMs', published on arXiv. #Review #LLM Evaluation #Policy Alignment #Organizational Policies #AI Safety #Adversarial Robustness #Refusal Behavior #Prompt Engineering #Fine-tuning. January 5, 2026
[Paper Review] Toxicity Ahead: Forecasting Conversational Derailment on GitHub. A detailed review of the paper 'Toxicity Ahead: Forecasting Conversational Derailment on GitHub', posted on arXiv by Kostadin Damevski. #Review #Conversational AI #Toxicity Detection #LLM #Prompt Engineering #Open Source Software #GitHub #Derailment Forecasting. December 23, 2025
[Paper Review] Multi-LLM Thematic Analysis with Dual Reliability Metrics: Combining Cohen's Kappa and Semantic Similarity for Qualitative Research Validation. A detailed review of the paper 'Multi-LLM Thematic Analysis with Dual Reliability Metrics: Combining Cohen's Kappa and Semantic Similarity for Qualitative Research Validation', published on arXiv. #Review #Thematic Analysis #Large Language Models #Qualitative Research #Cohen's Kappa #Semantic Similarity #Reliability Metrics #Ensemble Validation #Prompt Engineering. December 23, 2025
[Paper Review] Understanding Syllogistic Reasoning in LLMs from Formal and Natural Language Perspectives. A detailed review of the paper 'Understanding Syllogistic Reasoning in LLMs from Formal and Natural Language Perspectives', posted on arXiv by Sujata Ghosh. #Review #Syllogistic Reasoning #Large Language Models (LLMs) #Belief Bias #Natural Language Understanding (NLU) #Formal Logic #Prompt Engineering #Self-Consistency #Cognitive Psychology. December 22, 2025
[Paper Review] Structured Extraction from Business Process Diagrams Using Vision-Language Models. A detailed review of the paper 'Structured Extraction from Business Process Diagrams Using Vision-Language Models', posted on arXiv by Barry Devereux. #Review #Vision-Language Models #BPMN Extraction #Structured Information Extraction #OCR Enrichment #Prompt Engineering #Diagram Understanding #Business Process Management. December 1, 2025
[Paper Review] PromptBridge: Cross-Model Prompt Transfer for Large Language Models. A detailed review of the paper 'PromptBridge: Cross-Model Prompt Transfer for Large Language Models', posted on arXiv by Wei Wei. #Review #Large Language Models #Prompt Engineering #Model Drifting #Prompt Transfer #Cross-Model Adaptation #Training-Free #Prompt Optimization #MAP-RPE. December 1, 2025
[Paper Review] Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information. A detailed review of the paper 'Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information', posted on arXiv by Kristian Kersting. #Review #LLM Reasoning #Chain-of-Thought #Prompt Engineering #Efficiency #Structured Input #Information Extraction #Cognitive Psychology #Token Reduction. November 30, 2025
[Paper Review] SAM 3: Segment Anything with Concepts. A detailed review of the paper 'SAM 3: Segment Anything with Concepts', published on arXiv. #Review #Segment Anything Model #Open-Vocabulary Segmentation #Multimodal Foundation Model #Instance Segmentation #Video Object Tracking #Prompt Engineering #Data Engine #Human-in-the-loop. November 23, 2025
[Paper Review] Large Language Models Meet Extreme Multi-label Classification: Scaling and Multi-modal Framework. A detailed review of the paper 'Large Language Models Meet Extreme Multi-label Classification: Scaling and Multi-modal Framework', published on arXiv. #Review #Extreme Multi-label Classification (XMC) #Large Language Models (LLMs) #Multi-modal Learning #Dual-decoder Learning #Vision Transformers #Contrastive Learning #Prompt Engineering. November 18, 2025
[Paper Review] Large Language Models for Scientific Idea Generation: A Creativity-Centered Survey. A detailed review of the paper 'Large Language Models for Scientific Idea Generation: A Creativity-Centered Survey', posted on arXiv by Mohammad Hossein Rohban. #Review #Large Language Models #Scientific Discovery #Idea Generation #Creativity #Survey #AI in Science #Prompt Engineering #Multi-agent Systems #Evaluation Metrics. November 16, 2025
[Paper Review] Do LLMs Feel? Teaching Emotion Recognition with Prompts, Retrieval, and Curriculum Learning. A detailed review of the paper 'Do LLMs Feel? Teaching Emotion Recognition with Prompts, Retrieval, and Curriculum Learning', published on arXiv. #Review #Emotion Recognition in Conversation #Large Language Models #Prompt Engineering #Demonstration Retrieval #Curriculum Learning #Fine-tuning #Affective Computing #SOTA. November 10, 2025
[Paper Review] Jailbreaking in the Haystack. A detailed review of the paper 'Jailbreaking in the Haystack', posted on arXiv by Alexander Robey. #Review #Jailbreaking #LLM Safety #Long-Context Models #Positional Bias #Attack Success Rate (ASR) #Prompt Engineering #Compute Efficiency #AI Agents. November 9, 2025
[Paper Review] $\left|\,\circlearrowright\,\text{BUS}\,\right|$: A Large and Diverse Multimodal Benchmark for evaluating the ability of Vision-Language Models to understand Rebus Puzzles. A detailed review of the paper '$\left|\,\circlearrowright\,\text{BUS}\,\right|$: A Large and Diverse Multimodal Benchmark for evaluating the ability of Vision-Language Models to understand Rebus Puzzles', posted on arXiv by Deepiha S. #Review #Vision-Language Models #Multimodal Benchmark #Rebus Puzzles #In-Context Learning #Reasoning #ControlNet #Prompt Engineering. November 9, 2025
[Paper Review] Vote-in-Context: Turning VLMs into Zero-Shot Rank Fusers. A detailed review of the paper 'Vote-in-Context: Turning VLMs into Zero-Shot Rank Fusers', published on arXiv. #Review #Video Retrieval #Vision-Language Models (VLMs) #Zero-Shot Learning #List-wise Reranking #Rank Fusion #Prompt Engineering #S-Grid #Multimodal Retrieval. November 9, 2025
[Paper Review] PatenTEB: A Comprehensive Benchmark and Model Family for Patent Text Embedding. A detailed review of the paper 'PatenTEB: A Comprehensive Benchmark and Model Family for Patent Text Embedding', posted on arXiv by Denis Cavallucci. #Review #Patent Text Embedding #Benchmark #Multi-task Learning #Patent Retrieval #Sentence Embeddings #Knowledge Distillation #Cross-Domain Retrieval #Prompt Engineering. October 29, 2025
[Paper Review] ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases. A detailed review of the paper 'ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases', posted on arXiv by Nicholas Carlini. #Review #LLM Evaluation #Reward Hacking #Benchmark Reliability #Test Exploitation #Prompt Engineering #LLM Safety #Code Generation. October 24, 2025
[Paper Review] Pico-Banana-400K: A Large-Scale Dataset for Text-Guided Image Editing. A detailed review of the paper 'Pico-Banana-400K: A Large-Scale Dataset for Text-Guided Image Editing', published on arXiv. #Review #Text-Guided Image Editing #Large-Scale Dataset #Multimodal Models #Dataset Curation #Quality Control #Prompt Engineering #Preference Learning #Multi-Turn Editing. October 23, 2025
[Paper Review] UniGenBench++: A Unified Semantic Evaluation Benchmark for Text-to-Image Generation. A detailed review of the paper 'UniGenBench++: A Unified Semantic Evaluation Benchmark for Text-to-Image Generation', posted on arXiv by Yujie Zhou. #Review #Text-to-Image Generation #Semantic Evaluation #Benchmark #Multilingual Evaluation #Fine-grained Assessment #Large Language Models #Model Evaluation #Prompt Engineering. October 22, 2025
[Paper Review] Emergent Misalignment via In-Context Learning: Narrow in-context examples can produce broadly misaligned LLMs. A detailed review of the paper 'Emergent Misalignment via In-Context Learning: Narrow in-context examples can produce broadly misaligned LLMs', posted on arXiv by Kevin Zhu. #Review #Emergent Misalignment #In-Context Learning #LLM Safety #Persona Rationalization #Prompt Engineering #Model Alignment. October 20, 2025
[Paper Review] ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models. A detailed review of the paper 'ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models', posted on arXiv by Sean O'Brien. #Review #Multi-turn Conversation #Large Language Models (LLMs) #Context Management #Entropy-guided Resetting #Uncertainty Quantification #Performance Degradation #Prompt Engineering #Conversational AI. October 20, 2025
[Paper Review] Deflanderization for Game Dialogue: Balancing Character Authenticity with Task Execution in LLM-based NPCs. A detailed review of the paper 'Deflanderization for Game Dialogue: Balancing Character Authenticity with Task Execution in LLM-based NPCs', published on arXiv. #Review #LLM #NPC #Game Dialogue #Persona-Grounded Dialogue #Task Execution #Prompt Engineering #Fine-tuning #Deflanderization. October 16, 2025
[Paper Review] LLM Reasoning for Machine Translation: Synthetic Data Generation over Thinking Tokens. A detailed review of the paper 'LLM Reasoning for Machine Translation: Synthetic Data Generation over Thinking Tokens', published on arXiv. #Review #Large Language Models (LLMs) #Machine Translation (MT) #Chain-of-Thought (CoT) #Knowledge Distillation #Fine-tuning #Prompt Engineering #Synthetic Data. October 15, 2025
[Paper Review] Temporal Prompting Matters: Rethinking Referring Video Object Segmentation. A detailed review of the paper 'Temporal Prompting Matters: Rethinking Referring Video Object Segmentation', posted on arXiv by Sifei Liu. #Review #Referring Video Object Segmentation #Foundation Models #Prompt Engineering #Object Tracking #SAM #Video Analysis #Prompt Preference Learning. October 13, 2025
[Paper Review] Multimodal Prompt Optimization: Why Not Leverage Multiple Modalities for MLLMs. A detailed review of the paper 'Multimodal Prompt Optimization: Why Not Leverage Multiple Modalities for MLLMs', published on arXiv. #Review #Multimodal AI #Prompt Optimization #MLLMs #Bayesian Optimization #Cross-modal Alignment #Prompt Engineering #Generative AI #Exploration-Exploitation. October 13, 2025
[Paper Review] Benchmark It Yourself (BIY): Preparing a Dataset and Benchmarking AI Models for Scatterplot-Related Tasks. A detailed review of the paper 'Benchmark It Yourself (BIY): Preparing a Dataset and Benchmarking AI Models for Scatterplot-Related Tasks', posted on arXiv by Pedro Bizarro. #Review #Scatterplot Analysis #AI Benchmarking #Multimodal LLMs #Synthetic Data Generation #Cluster Detection #Outlier Detection #Data Visualization #Prompt Engineering. October 8, 2025
[Paper Review] Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models. A detailed review of the paper 'Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models', posted on arXiv by Fenglu Hong. #Review #LLM Context Adaptation #Agentic AI #Self-Improving Systems #Prompt Engineering #Context Management #Dynamic Playbooks #Incremental Learning. October 7, 2025
[Paper Review] BiasFreeBench: a Benchmark for Mitigating Bias in Large Language Model Responses. A detailed review of the paper 'BiasFreeBench: a Benchmark for Mitigating Bias in Large Language Model Responses', posted on arXiv by Julian McAuley. #Review #LLM Bias Mitigation #Benchmark #Evaluation Metrics #Prompt Engineering #Fine-tuning #Bias-Free Score #Fairness. October 2, 2025
[Paper Review] StyleBench: Evaluating thinking styles in Large Language Models. A detailed review of the paper 'StyleBench: Evaluating thinking styles in Large Language Models', posted on arXiv by Javad Lavaei. #Review #Large Language Models #Reasoning Strategies #Prompt Engineering #LLM Evaluation #Benchmark #Thinking Styles #Scaling Laws #Meta-Reasoning. September 26, 2025
[Paper Review] Zero-Shot Multi-Spectral Learning: Reimagining a Generalist Multimodal Gemini 2.5 Model for Remote Sensing Applications. A detailed review of the paper 'Zero-Shot Multi-Spectral Learning: Reimagining a Generalist Multimodal Gemini 2.5 Model for Remote Sensing Applications', posted on arXiv by Genady Beryozkin. #Review #Remote Sensing #Zero-Shot Learning #Multimodal Models #Multi-spectral Imagery #Gemini 2.5 #Prompt Engineering #Land Cover Classification #Pseudo-Image. September 24, 2025
[Paper Review] Reasoning over Boundaries: Enhancing Specification Alignment via Test-time Delibration. A detailed review of the paper 'Reasoning over Boundaries: Enhancing Specification Alignment via Test-time Delibration', posted on arXiv by Zhilin Wang. #Review #LLMs #Specification Alignment #Test-Time Deliberation #Safety-Behavior Trade-off #ALIGN3 #SPECBENCH #Prompt Engineering. September 19, 2025
[Paper Review] CLIPSym: Delving into Symmetry Detection with CLIP. A detailed review of the paper 'CLIPSym: Delving into Symmetry Detection with CLIP', posted on arXiv by Raymond A. Yeh. #Review #Symmetry Detection #Vision-Language Models #CLIP #Equivariant Networks #Prompt Engineering #Geometric Deep Learning. September 1, 2025
[Paper Review] Prompt Orchestration Markup Language. A detailed review of the paper 'Prompt Orchestration Markup Language', posted on arXiv by Yuqing Yang. #Review #Prompt Engineering #Large Language Models #Markup Language #Structured Prompting #IDE Support #Multimodal Data #Styling System #Development Toolkit. August 20, 2025
[Paper Review] Leveraging Large Language Models for Predictive Analysis of Human Misery. A detailed review of the paper 'Leveraging Large Language Models for Predictive Analysis of Human Misery', posted on arXiv by Abhilash Nandy. #Review #Large Language Models (LLMs) #Affective Computing #Misery Score Prediction #Prompt Engineering #Few-shot Learning #Gamified Evaluation #Feedback-driven Adaptation. August 20, 2025
[Paper Review] Democratizing Diplomacy: A Harness for Evaluating Any Large Language Model on Full-Press Diplomacy. A detailed review of the paper 'Democratizing Diplomacy: A Harness for Evaluating Any Large Language Model on Full-Press Diplomacy', posted on arXiv by Elizabeth Karpinski. #Review #Large Language Models #Diplomacy Game #Multi-agent Systems #Strategic Reasoning #LLM Evaluation #Prompt Engineering #Behavioral Analysis #Game AI. August 13, 2025
[Paper Review] A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems. A detailed review of the paper 'A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems', posted on arXiv by Xinhao Yi. #Review #Self-Evolving AI Agents #Lifelong Learning #Foundation Models #Multi-Agent Systems #Agent Optimization #Prompt Engineering #Tool Use #AI Safety #Survey. August 12, 2025
[Paper Review] Sel3DCraft: Interactive Visual Prompts for User-Friendly Text-to-3D Generation. A detailed review of the paper 'Sel3DCraft: Interactive Visual Prompts for User-Friendly Text-to-3D Generation', posted on arXiv by Hao Huang. #Review #Text-to-3D Generation #Prompt Engineering #Visual Analytics #Human-Computer Interaction #Multi-modal Large Language Models #3D Model Evaluation. August 7, 2025