[Paper Review] BatCoder: Self-Supervised Bidirectional Code-Documentation Learning via Back-Translation
A detailed review of the paper 'BatCoder: Self-Supervised Bidirectional Code-Documentation Learning via Back-Translation' by Xiaohua Wang, posted on arXiv.
#Review #Self-Supervised Learning #Code Generation #Documentation Generation #Back-Translation #Reinforcement Learning #Large Language Models (LLMs) #Code-Documentation Alignment #Low-Resource Languages
February 4, 2026
[Paper Review] Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates
A detailed review of the paper 'Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates' by Nikolaos Aletras, posted on arXiv.
#Review #Large Language Models (LLMs) #Catastrophic Forgetting #Language Adaptation #Continual Pre-training #Parameter Freezing #Low-Resource Languages #Source Knowledge Preservation
December 4, 2025
[Paper Review] Beyond Monolingual Assumptions: A Survey of Code-Switched NLP in the Era of Large Language Models
A detailed review of the paper 'Beyond Monolingual Assumptions: A Survey of Code-Switched NLP in the Era of Large Language Models', posted on arXiv.
#Review #Code-switching #Multilingual NLP #Large Language Models #NLP Survey #Data Augmentation #Evaluation Metrics #Low-Resource Languages
October 9, 2025
[Paper Review] Turk-LettuceDetect: A Hallucination Detection Models for Turkish RAG Applications
A detailed review of the paper 'Turk-LettuceDetect: A Hallucination Detection Models for Turkish RAG Applications' by Fatma Betül Terzioğlu, posted on arXiv.
#Review #Hallucination Detection #Retrieval Augmented Generation #Large Language Models #Turkish NLP #Token Classification #ModernBERT #Low-Resource Languages
September 23, 2025
[Paper Review] Hunyuan-MT Technical Report
A detailed review of the 'Hunyuan-MT Technical Report' by Yang Du, posted on arXiv.
#Review #Machine Translation #Large Language Model #Multilingual #Low-Resource Languages #Reinforcement Learning #Weak-to-Strong Learning #Slow Thinking
September 11, 2025
[Paper Review] ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?
A detailed review of the paper 'ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?' by Daeyoung Kim, posted on arXiv.
#Review #Vision Language Models #Multimodal AI #Vietnamese Language #Educational Assessment #Low-Resource Languages #Cross-Lingual Reasoning #ViExam #Human-in-the-Loop
August 21, 2025
[Paper Review] MELLA: Bridging Linguistic Capability and Cultural Groundedness for Low-Resource Language MLLMs
A detailed review of the paper 'MELLA: Bridging Linguistic Capability and Cultural Groundedness for Low-Resource Language MLLMs' by Guohang Yan, posted on arXiv.
#Review #Multimodal Large Language Models #Low-Resource Languages #Cultural Groundedness #Linguistic Capability #Dataset Creation #Multilingual AI
August 11, 2025