
Object-centric 3D Motion Field for Robot Learning from Human Videos

Source: arXiv

Zhao-Heng Yin, Sherry Yang, Pieter Abbeel

cs.AI | Jun 4, 2025

One-line Summary

The paper introduces an object-centric 3D motion field representation for extracting actionable insights from human videos to improve robot learning, achieving significantly better performance in real-world tasks compared to prior methods.

Plain-language Overview

Researchers are exploring ways to teach robots by observing human actions in videos, but capturing the necessary details from these videos is challenging. This study proposes a new way to represent actions using a '3D motion field' that focuses on objects, which helps robots learn more effectively from human demonstrations. The approach involves a novel training method to accurately capture object movements, even when video quality is poor, and a prediction model that helps the robot apply what it learns to different situations. Tests show that this new method significantly improves the robot's ability to understand and replicate human actions, even in complex tasks.
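To make the idea of an object-centric 3D motion field concrete, here is a minimal illustrative sketch. This is not the paper's implementation; the function name and the simple per-point displacement representation are assumptions chosen for clarity. The intuition is that for each 3D point on an object at time t, the field stores a vector pointing to that point's position at time t+1, so applying the field advects the object's points through space.

```python
import numpy as np

def apply_motion_field(points: np.ndarray, motion_field: np.ndarray) -> np.ndarray:
    """Advect object points by a per-point 3D displacement field.

    points:       (N, 3) array of 3D point positions at time t.
    motion_field: (N, 3) array of displacement vectors, one per point.
    Returns the (N, 3) predicted positions at time t+1.
    """
    assert points.shape == motion_field.shape
    return points + motion_field

# Toy example: four points on an object, all translated 1 cm along +x.
points = np.zeros((4, 3))
field = np.tile(np.array([0.01, 0.0, 0.0]), (4, 1))
moved = apply_motion_field(points, field)
```

In this toy view, a robot policy trained on such fields predicts where object points should move next, rather than imitating human hand poses directly, which is what makes the representation transferable across embodiments.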

Technical Details