
Embodied Intelligence Data Science

The Egocentric
Data Engine

Real-world egocentric demonstrations at scale: the data infrastructure for high-quality VLA training

Explore Our Solutions
2K+
Task Categories
1M+
Hours Captured
200K+
Contributors

Data Annotation

Egocentric Operation Data

Capturing different people performing different tasks across varied real-world environments — reflecting the natural diversity of real human behavior.

Home Scene
Scene: Home
Action: Clean / Organize
Hand: Both
View: Egocentric

Capturing everyday household cleaning and organization behaviors

Kitchen Scene
Scene: Kitchen
Action: Chop / Prepare
Hand: Both
View: Egocentric

Capturing common kitchen and cooking-related actions

Wall Surface
Scene: Wall
Action: Paint / Install
Hand: Both
View: Egocentric

Capturing wall painting and installation-related operations

Market Scene
Scene: Market
Action: Select / Trade
Hand: Both
View: Egocentric

Capturing market transactions and product selection actions

Cleaning Scene
Scene: Bathroom
Action: Mop / Sweep
Hand: Both
View: Egocentric

Capturing mopping actions in household and hotel environments

The annotation design directly supports semantic-action alignment for VLA and world model training.
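As a concrete illustration of the card fields above (Scene, Action, Hand, View plus a free-text description), each clip could carry a structured annotation record along these lines. This is a minimal sketch: the class name, field names, and values are illustrative assumptions, not FirstMove's actual schema.

```python
from dataclasses import dataclass

# Hypothetical annotation record mirroring the scene-card fields above.
# Field names and allowed values are assumptions for illustration only.
@dataclass
class EgocentricAnnotation:
    scene: str        # e.g. "Kitchen", "Home", "Market"
    action: str       # e.g. "Chop / Prepare"
    hand: str         # "Left", "Right", or "Both"
    view: str         # "Egocentric" throughout this corpus
    description: str  # free-text task description for language alignment

sample = EgocentricAnnotation(
    scene="Kitchen",
    action="Chop / Prepare",
    hand="Both",
    view="Egocentric",
    description="Capturing common kitchen and cooking-related actions",
)
```

Keeping the action label and the free-text description side by side in one record is what makes semantic-action alignment straightforward downstream: the same segment can feed both the language and the action streams of a VLA training example.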

01 — Solutions

Why Egocentric Data

Download Sample

Industry Gap

  • Limited access to continuous, natural manipulation data.
  • Existing datasets are staged or task-constrained.
  • Simulation and teleoperation lack real-world diversity.

POV Advantage

  • Egocentric POV captures authentic human interactions.
  • Long-horizon trajectories reflect real task execution.
  • Collected from everyday workflows at scale.

FirstMove pioneers a “body-agnostic” data collection paradigm, transforming every touch, every action, and every interaction humans perform in real-world scenarios into high-quality datasets that robots can learn from.

Cloud AI Platforms

Possess powerful cloud-based model capabilities but critically lack real-world physical interaction data to train embodied intelligence systems.

Robot OEMs

Require massive volumes of scenario-specific data to improve dexterous manipulation success rates and mobile platform task performance.

Research Institutions

Need scalable, real-world human action data as maturing multimodal foundation models make it possible to translate human demonstrations into robot-executable instructions.

02 — Technology

Training-Ready 3D Human Behavior

From raw egocentric video to temporally consistent 3D pose and action representations

3D Pose Estimation from Egocentric Video
01

Egocentric Capture

Head-mounted cameras record continuous first-person manipulation sequences in real environments.

02

3D Pose Extraction

Multi-view reconstruction produces temporally consistent 3D hand and body pose trajectories.

03

Semantic Annotation

Action labels, object states, and task descriptions are aligned to each trajectory segment.

04

VLA-Ready Output

Structured datasets formatted for direct ingestion by Vision-Language-Action model pipelines.
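The four stages above can be sketched end to end as a small pipeline that turns captured frames into a structured VLA training sample. This is a toy sketch with placeholder data; the function names, signatures, and output layout are assumptions for illustration, not FirstMove's actual interfaces.

```python
# Toy sketch of the four-stage pipeline: capture -> pose -> annotation -> VLA sample.
# All names and data layouts here are illustrative assumptions.

def capture_egocentric(n_frames: int) -> list[bytes]:
    """Stage 01: stand-in for head-mounted camera frames."""
    return [b"frame" for _ in range(n_frames)]

def extract_3d_pose(frames: list[bytes]) -> list[list[float]]:
    """Stage 02: one dummy 3D hand/body pose vector per frame."""
    return [[0.0, 0.0, 0.0] for _ in frames]

def annotate(poses: list[list[float]], action: str, scene: str) -> dict:
    """Stage 03: align semantic labels to the trajectory segment."""
    return {"poses": poses, "action": action, "scene": scene}

def to_vla_sample(frames: list[bytes], annotation: dict) -> dict:
    """Stage 04: bundle vision, language, and action streams for training."""
    return {
        "vision": frames,
        "language": f"{annotation['action']} in {annotation['scene']}",
        "action": annotation["poses"],
    }

frames = capture_egocentric(3)
poses = extract_3d_pose(frames)
sample = to_vla_sample(frames, annotate(poses, "Chop / Prepare", "Kitchen"))
```

The key design point the sketch reflects is that vision, language, and action stay temporally aligned per segment, so a VLA model can consume each sample without further joining.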

03 — Team

World-Class Interdisciplinary Team

Bringing together top talent in robotics, AI algorithms, foundation models, organizational management, and data assets to build a core leadership team with global vision and deep industrial expertise.

S
CEO

Daoling Song

Co-founder and COO of Zhejiang Humanoid Robot Innovation Center. Forbes China 30 Under 30. Deep expertise in the robotics industry with proven track record in humanoid robot commercialization and strong industry influence.

L
Chief Scientist

Chang Liu

Assistant Professor at Peking University School of Engineering. Ph.D. from UC Berkeley. Former core engineer at NVIDIA Autonomous Driving. World-class academic credentials and industrial experience in embodied intelligence and robot control.

J
CTO

Jack

Former P10-level foundation model lead at a major tech company. Among China's first generation of large model training experts. Led pre-training and fine-tuning of hundred-billion-parameter models with extensive industry connections.

F
COO

Frank

Former senior executive at a leading national asset management firm. Over ten million square meters of multi-format property development and asset management experience. Leads the Thousand-Household Program and global crowdsourcing pipeline.

C
CSO

Clark

Renowned tech investor and author. Expert in AI data asset monetization and global strategy. Oversees company strategy, brand, and international operations. Published works on AI and organizational transformation.

04 — Vision

The Future of Embodied Intelligence

We believe that true embodied intelligence will not emerge from simulators alone, but from every touch, every action, and every interaction in the real world.

FirstMove’s mission is to become the most critical data infrastructure of the embodied intelligence era — continuously acquiring scarce, high-quality data, continuously improving model capabilities, and enabling every robot to understand and navigate the real world.

This is not merely a company’s vision — it is the starting point of an era.

05 — Contact

Get in Touch

Whether you are a cloud AI platform seeking high-quality embodied data, or a robot OEM requiring real-world scenario data, we look forward to accelerating the path to embodied intelligence together.

Contact Form