[Paper Review] Decouple Searching from Training: Scaling Data Mixing via Model Merging for Large Language Model Pre-training
A detailed review of the paper "Decouple Searching from Training: Scaling Data Mixing via Model Merging for Large Language Model Pre-training," posted on arXiv by Haifeng Liu.
#Review #LLM Pre-training #Data Mixture Optimization #Model Merging #Proxy Models #Resource Efficiency #DeMix #Corpus Curation
February 3, 2026
[Paper Review] Learning to See Before Seeing: Demystifying LLM Visual Priors from Language Pre-training
A detailed review of the paper "Learning to See Before Seeing: Demystifying LLM Visual Priors from Language Pre-training," posted on arXiv by Koustuv Sinha.
#Review #LLM Visual Priors #Language Pre-training #Multimodal LLM #Data Mixture Optimization #Reasoning Prior #Perception Prior #VQA #MLE-Bench
October 1, 2025