Backed by Y Combinator

Power AI with
Human intelligence

Contribute with your unique knowledge and experiences to give AI the depth of the human mind.

0K+
CONTRIBUTORS
0+
COUNTRIES
0+
LANGUAGES
3
MODALITIES
// Live Network

Right now, somewhere in the world.

Hub’s contributors are recording, photographing, annotating, translating, and verifying real-world data — every minute, in 150+ countries.

4.20 tasks completed per second
// modalities

Three signals only humans can give.

Image, video, egocentric. Captured by real people in real places. Each modality is a different door into a world your model has only read about.

01·Image

Paired before/after frames from real environments. Edge cases your auto-labeler invents but never sees.

02·Video

Fixed-camera observation — warehouses, intersections, plazas. The continuous activity an auto-labeler can't follow frame to frame.

03·Egocentric

First-person capture from people doing real work — clinics, factories, fields. The POV your robot will inherit.

// DATA LIFECYCLE

From a phone in São Paulo to your training set.

Every clip starts as a real moment. We capture it, enrich it with metadata your model cares about, and hand-label what machines miss.

// PROOF OF WORK

Built for frontier labs. Proven at scale.

From brief to first delivery in days, not months. Millions of annotations delivered to the labs training the next generation of physical AI.

// manifesto

Real-world data starts with a real person.

What do a midwife in Kerala, a calligrapher in Kyoto, and a forklift operator in São Paulo have in common?

They’ve each spent decades developing knowledge that lives nowhere else. Not in books, not on the internet, not in any model trained so far.

Data is to AI what life experience is to humans. Today’s frontier models have read the internet. They have not lived in the world. The gap between what they read and what we live is where the next generation of capability is hiding.

AI grounded in real human knowledge can read X-rays in languages doctors actually speak. It can drive in cities where street signs don’t exist. It can recognize the way a forklift driver in São Paulo signals to the team behind him.

Hub is the infrastructure for connecting humanity’s knowledge to machine intelligence. We pay real people for what they know, and we hand it to the labs building the next decade of AI.

AI raised on the breadth of human reality, not the narrow slice of it that has been digitized.

// build with hub

The infrastructure is ready.

Bring your modality, your edge cases, your timeline. We’ll bring the contributors, the QA layer, and the pipeline.

From brief to first delivery in days, not months.