Built at MorganHacks 2026

Communication,
Unbroken.

Bidirectional ASL translation glasses. A deaf person signs — the hearing person hears. The hearing person speaks — the deaf person reads. No interpreter needed.

Try the Demo
See How It Works
466M
People with disabling hearing loss
50:1
Deaf-to-interpreter ratio (US)
$980B
Annual global economic cost
100 yrs
Deaf literacy gap unchanged
The Problem
The world is built around hearing
Every system — education, healthcare, employment, emergency services — assumes auditory communication as the default.

Language Deprivation

90% of deaf children are born to hearing parents. 88% of those parents never learn sign language. Missing the critical window for language acquisition leads to permanent neurological changes.

Education Crisis

Median reading level at deaf high school graduation: 4th grade. This has not improved in over a century. Only 7-10% read at 7th grade level or above.

Interpreter Shortage

~2,038 certified ASL interpreters serve the entire US. That's a 50:1 ratio. The impromptu meeting, the hallway conversation, the emergency — the deaf person misses all of it.

One-Way Technology

Every existing solution is one-directional. Caption glasses show text to deaf users. ASL recognition translates signs. But nobody has built both directions into one wearable system.

Dinner Table Syndrome

Every night, the family talks, laughs, shares stories — and the deaf child sits there understanding nothing. This happens at every meal. Every gathering. Every holiday. For years. The dinner table is supposed to be where a family connects. For a deaf child in a hearing family, it's where they learn they are invisible.

The Solution
Two glasses. Full conversation.
Both people wear Dactyl glasses. Technology disappears. Connection remains.
ASL → English
Deaf person signs → hearing person's glasses capture the signing → AI recognizes the signs from 543 tracked keypoints per frame → the English translation plays through the speaker. The hearing person hears.
English → Text
Hearing person speaks → the microphone on the deaf person's glasses captures the speech → AI converts it to text → captions appear on the deaf person's lens. The deaf person reads.
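The ASL-to-English loop above can be sketched as a short pipeline. Everything in this sketch is an illustrative placeholder under assumed behavior — the toy sign table, the function names, and the gloss-to-English rule are not Dactyl's real code:

```python
# Minimal sketch of the ASL -> English loop. All names and data here are
# illustrative assumptions, not the project's actual API.

# Toy "recognizer": maps a per-frame feature summary to an ASL gloss token.
SIGN_LOOKUP = {"point_self": "ME", "flat_hand_chest_out": "THANK-YOU"}

def extract_features(frame):
    # Stand-in for MediaPipe Holistic keypoint extraction (543 points/frame).
    return frame["pose"]

def match_sign(features):
    # Stand-in for angle-based DTW matching against a sign template library.
    return SIGN_LOOKUP.get(features)

def gloss_to_english(gloss):
    # Stand-in for the LLM step that turns ASL gloss into natural English.
    if gloss == ["ME", "THANK-YOU"]:
        return "Thank you."
    return " ".join(gloss).capitalize() + "."

def asl_to_english(video_frames):
    """Deaf person signs -> hearing person's glasses speak English."""
    gloss = [s for s in (match_sign(extract_features(f)) for f in video_frames) if s]
    return gloss_to_english(gloss)  # the real system would hand this to TTS

frames = [{"pose": "point_self"}, {"pose": "flat_hand_chest_out"}]
print(asl_to_english(frames))  # -> "Thank you."
```

The reverse direction is simpler still: a speech-to-text engine feeds the lens display directly, with no recognition or gloss step in between.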


Why This Design Wins

Camera Angle is Optimal

The hearing person's glasses face the deaf person directly — perfect front view of both hands, face, and body. Better than any self-mounted camera.

Right Modality for Each

Hearing person receives audio (natural for them). Deaf person receives visual text (natural for them). Nobody wears something that fights their communication mode.

Easy Half is Solved

Speech-to-text glasses already ship, at $95-$880. That leaves us one hard problem to solve: ASL recognition. Not two.

Socially Natural

Both people wear similar glasses. They maintain eye contact. No phone screen between them. No kiosk. Just two people having a conversation.

Technology
Built with real ML, not parlor tricks
Every component uses genuine pattern recognition — no LLM guessing from text descriptions of finger positions.
🤚
MediaPipe Holistic
543 keypoints per frame — hands, face, body tracked in real-time
📐
Angle-Based DTW
Position-invariant sign matching on 54 joint angle features with voting
🧠
Claude AI
Assembles ASL gloss into natural English sentences
🔊
Edge TTS
Neural voice speaks the translation aloud for the hearing person

The first bidirectional ASL
translation glasses.

Join us in building technology that bridges the communication gap for 466 million people.

View on GitHub