[Paper Review] EgoActor: Grounding Task Planning into Spatial-aware Egocentric Actions for Humanoid Robots via Visual-Language Models
A detailed review of the paper 'EgoActor: Grounding Task Planning into Spatial-aware Egocentric Actions for Humanoid Robots via Visual-Language Models', posted on arXiv by Ziyi Bai.
#Review #Humanoid Robots #Vision-Language Models #Task Planning #Egocentric Control #Mobile Manipulation #Active Perception #Human-Robot Interaction #Real-World Deployment
February 4, 2026
[Paper Review] VL-LN Bench: Towards Long-horizon Goal-oriented Navigation with Active Dialogs
A detailed review of the paper 'VL-LN Bench: Towards Long-horizon Goal-oriented Navigation with Active Dialogs', posted on arXiv by Xihui Liu.
#Review #Embodied AI #Vision and Language Navigation #Instance Object Navigation #Active Dialog #Large Language Models (LLMs) #Benchmark #Human-Robot Interaction
December 29, 2025
[Paper Review] PhysBrain: Human Egocentric Data as a Bridge from Vision Language Models to Physical Intelligence
A detailed review of the paper 'PhysBrain: Human Egocentric Data as a Bridge from Vision Language Models to Physical Intelligence', posted on arXiv.
#Review #Egocentric Data #Physical Intelligence #VLM #Robot Control #Embodied AI #VQA Supervision #Human-Robot Interaction #Zero-shot Transfer
December 21, 2025
[Paper Review] An Anatomy of Vision-Language-Action Models: From Modules to Milestones and Challenges
A detailed review of the paper 'An Anatomy of Vision-Language-Action Models: From Modules to Milestones and Challenges', posted on arXiv.
#Review #Vision-Language-Action Models #Embodied Intelligence #Robotics #Foundation Models #Multi-modal Learning #Reinforcement Learning #Sim-to-Real Transfer #Human-Robot Interaction
December 21, 2025
[Paper Review] LEO-RobotAgent: A General-purpose Robotic Agent for Language-driven Embodied Operator
A detailed review of the paper 'LEO-RobotAgent: A General-purpose Robotic Agent for Language-driven Embodied Operator', posted on arXiv.
#Review #Robotic Agent #Large Language Models (LLMs) #Embodied AI #Task Planning #Human-Robot Interaction #General-purpose Robotics #ROS
December 14, 2025
[Paper Review] VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting
A detailed review of the paper 'VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting', posted on arXiv by Haihan Gao.
#Review #Embodied AI #Human-Robot Interaction #Vision-Language Models #Concurrency #Interruption #Robotics Control #Dual-Model Architecture #Special Tokens
October 28, 2025
[Paper Review] Ask-to-Clarify: Resolving Instruction Ambiguity through Multi-turn Dialogue
A detailed review of the paper 'Ask-to-Clarify: Resolving Instruction Ambiguity through Multi-turn Dialogue', posted on arXiv by Hui Zhang.
#Review #Embodied AI #Human-Robot Interaction #Multi-turn Dialogue #Instruction Following #Vision-Language Models #Diffusion Models #Ambiguity Resolution #Low-level Actions
September 22, 2025
[Paper Review] Do What? Teaching Vision-Language-Action Models to Reject the Impossible
A detailed review of the paper 'Do What? Teaching Vision-Language-Action Models to Reject the Impossible', posted on arXiv by Roei Herzig.
#Review #Vision-Language-Action Models #Robotics #False Premise Detection #Instruction Following #Human-Robot Interaction #Clarification #Instruction Tuning
August 25, 2025