[Paper Review] ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models
A detailed review of the paper 'ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models', posted on arXiv by Sean O'Brien.
#Review #Multi-turn Conversation #Large Language Models (LLMs) #Context Management #Entropy-guided Resetting #Uncertainty Quantification #Performance Degradation #Prompt Engineering #Conversational AI
October 20, 2025
[Paper Review] R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?
A detailed review of the paper 'R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?', posted on arXiv.
#Review #Long-Horizon Reasoning #Query Composition #Large Reasoning Models #Reinforcement Learning #Benchmark Evaluation #Thinking Budget #Performance Degradation #Chain-of-Thought
October 13, 2025
[Paper Review] Understanding Embedding Scaling in Collaborative Filtering
A detailed review of the paper 'Understanding Embedding Scaling in Collaborative Filtering', posted on arXiv by Yonghui Yang.
#Review #Collaborative Filtering #Embedding Scaling #Noise Robustness #Recommender Systems #Graph Neural Networks #Self-supervised Learning #Performance Degradation
September 23, 2025