[Paper Review] Attend Before Attention: Efficient and Scalable Video Understanding via Autoregressive Gazing
A detailed review of the paper 'Attend Before Attention: Efficient and Scalable Video Understanding via Autoregressive Gazing', posted on arXiv by David Eigen.
Tags: #Review #Video Understanding #Multi-modal Large Language Models (MLLMs) #Vision Transformers (ViTs) #Autoregressive Gazing #Token Reduction #Multi-scale Patches #High-Resolution Video #Long-Form Video
March 24, 2026
[Paper Review] Dynamic Chunking Diffusion Transformer
A detailed review of the paper 'Dynamic Chunking Diffusion Transformer', posted on arXiv.
Tags: #Review #Diffusion Transformer #Dynamic Chunking #Adaptive Patching #Image Generation #Computational Efficiency #Token Reduction #Spatial Segmentation #Load Balancing
March 8, 2026
[Paper Review] Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information
A detailed review of the paper 'Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information', posted on arXiv by Kristian Kersting.
Tags: #Review #LLM Reasoning #Chain-of-Thought #Prompt Engineering #Efficiency #Structured Input #Information Extraction #Cognitive Psychology #Token Reduction
November 30, 2025