[Paper Review] Sci-CoE: Co-evolving Scientific Reasoning LLMs via Geometric Consensus with Sparse Supervision. A detailed review of the paper 'Sci-CoE: Co-evolving Scientific Reasoning LLMs via Geometric Consensus with Sparse Supervision', posted on arXiv. #Review #LLM #Scientific Reasoning #Co-evolution #Reinforcement Learning #Sparse Supervision #Geometric Consensus #Self-Play #Verifier · February 12, 2026
[Paper Review] TriPlay-RL: Tri-Role Self-Play Reinforcement Learning for LLM Safety Alignment. A detailed review of the paper 'TriPlay-RL: Tri-Role Self-Play Reinforcement Learning for LLM Safety Alignment', posted on arXiv. #Review #LLM Safety Alignment #Reinforcement Learning #Self-Play #Red Teaming #Adversarial Training #Multi-Role Framework #Reward Hacking Mitigation · January 27, 2026
[Paper Review] Teaching Models to Teach Themselves: Reasoning at the Edge of Learnability. A detailed review of the paper 'Teaching Models to Teach Themselves: Reasoning at the Edge of Learnability', posted on arXiv. #Review #Meta-RL #Curriculum Learning #Self-Play #LLM Reasoning #Sparse Rewards #Question Generation #Bilevel Optimization · January 26, 2026
[Paper Review] Guided Self-Evolving LLMs with Minimal Human Supervision. A detailed review of the paper 'Guided Self-Evolving LLMs with Minimal Human Supervision', posted on arXiv. #Review #Self-Evolving LLMs #Self-Play #Reinforcement Learning #Curriculum Learning #Few-shot Learning #Human Supervision #Concept Drift #Diversity Collapse · December 2, 2025
[Paper Review] VisPlay: Self-Evolving Vision-Language Models from Images. A detailed review of the paper 'VisPlay: Self-Evolving Vision-Language Models from Images', posted on arXiv. #Review #Self-Evolving #Vision-Language Models #Reinforcement Learning #Self-Play #Unlabeled Data #Multimodal Reasoning #Group Relative Policy Optimization #Hallucination Mitigation · November 19, 2025
[Paper Review] OpenSIR: Open-Ended Self-Improving Reasoner. A detailed review of the paper 'OpenSIR: Open-Ended Self-Improving Reasoner', posted on arXiv. #Review #Open-Ended Learning #Self-Play #Reinforcement Learning #Large Language Models #Mathematical Reasoning #Problem Generation #Curriculum Learning #Reward Shaping · November 9, 2025
[Paper Review] Monopoly Deal: A Benchmark Environment for Bounded One-Sided Response Games. A detailed review of the paper 'Monopoly Deal: A Benchmark Environment for Bounded One-Sided Response Games', posted on arXiv by cavaunpeu. #Review #Bounded One-Sided Response Games (BORGs) #Monopoly Deal #Benchmark Environment #Counterfactual Regret Minimization (CFR) #Imperfect Information Games #Game Theory #Self-Play #State Abstraction · November 9, 2025
[Paper Review] Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play. A detailed review of the paper 'Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play', posted on arXiv by Jing Shi. #Review #Vision-Language Models (VLMs) #Self-Play #Reinforcement Learning #Gamification #Data Efficiency #Strategic Reasoning #Multimodal AI #Self-Improvement · October 1, 2025
[Paper Review] PromptCoT 2.0: Scaling Prompt Synthesis for Large Language Model Reasoning. A detailed review of the paper 'PromptCoT 2.0: Scaling Prompt Synthesis for Large Language Model Reasoning', posted on arXiv by Lingpeng Kong. #Review #Prompt Synthesis #Large Language Models #Reasoning #Expectation-Maximization #Self-Play #Supervised Fine-Tuning #Task Generation #Rationale Generation · September 29, 2025
[Paper Review] Language Self-Play For Data-Free Training. A detailed review of the paper 'Language Self-Play For Data-Free Training', posted on arXiv by Vijai Mohan. #Review #Large Language Models #Reinforcement Learning #Self-Play #Data-Free Training #Instruction Following #Adversarial Training #Reward Modeling · September 10, 2025
[Paper Review] Beyond Pass@1: Self-Play with Variational Problem Synthesis Sustains RLVR. A detailed review of the paper 'Beyond Pass@1: Self-Play with Variational Problem Synthesis Sustains RLVR', posted on arXiv by Ying Nian Wu. #Review #Reinforcement Learning #Large Language Models #Self-Play #Variational Problem Synthesis #Policy Entropy #Pass@k #Reasoning Benchmarks · August 25, 2025
[Paper Review] R-Zero: Self-Evolving Reasoning LLM from Zero Data. A detailed review of the paper 'R-Zero: Self-Evolving Reasoning LLM from Zero Data', posted on arXiv by Zongxia Li. #Review #Self-Evolving LLM #Reinforcement Learning #Curriculum Learning #Reasoning #Large Language Models #Self-Play #Zero-Data Training · August 8, 2025