[Paper Review] Specialization after Generalization: Towards Understanding Test-Time Training in Foundation Models
A detailed review of the paper 'Specialization after Generalization: Towards Understanding Test-Time Training in Foundation Models', posted on arXiv.
#Review #Test-Time Training (TTT) #Foundation Models #Underparameterization #Sparse Autoencoders (SAE) #Linear Representation Hypothesis (LRH) #Specialization #Scaling Laws #In-Distribution Data
October 1, 2025
[Paper Review] CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning
A detailed review of the paper 'CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning', posted on arXiv by Jianze Liang.
#Review #GUI Agents #Reinforcement Learning #Planner-Executor Architecture #Decoupled Training #Large Vision-Language Models #Specialization #Generalization #Computer Use Agent
August 28, 2025