A team at POSTECH has built a wearable that turns silent speech into audible words by reading tiny neck movements, and the pitch is as bold as it sounds: mouth the words without making a sound, yet still be heard in your own voice. The system combines a flexible sensor, a camera, and AI to map muscle strain to speech, which could help people who have lost their voices and might also make quiet communication more practical in places where talking aloud is a bad idea.


How POSTECH’s neck sensor reads silent speech
The device is called a multiaxial strain mapping sensor, which is a very academic way of saying it watches how skin and muscle shift around the neck when someone forms words. It uses a miniature camera and flexible silicone with reference markers to detect small deformations, then recalibrates when it is moved so it can keep working during everyday wear.
That matters because older voice-restoration tools often lean on electromyography or electroencephalography gear, which is not exactly the kind of thing people want strapped to them for a casual trip to the kitchen. A lightweight neck-worn setup is much closer to something users might actually keep on, and that is usually the difference between a lab demo and a real product.
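For the technically curious, the core of "multiaxial strain mapping" is a classic computer-vision problem: track the markers, fit a deformation, read off a strain tensor. The published pipeline isn't detailed here, but a minimal sketch of that math, assuming marker positions have already been tracked to 2D coordinates, might look something like this:

```python
# Illustrative sketch, not POSTECH's code: recover a 2D strain tensor from
# reference markers tracked by the camera. Assumes marker positions are
# already extracted as (N, 2) coordinates in a rest frame and a deformed frame.
import numpy as np

def estimate_strain(rest_pts: np.ndarray, deformed_pts: np.ndarray) -> np.ndarray:
    """Least-squares fit of the deformation gradient F, then the
    Green-Lagrange strain E = 0.5 * (F^T F - I)."""
    # Center both point sets so rigid translation drops out
    r = rest_pts - rest_pts.mean(axis=0)
    d = deformed_pts - deformed_pts.mean(axis=0)
    # Solve d ≈ r @ F^T for F in the least-squares sense
    F_T, *_ = np.linalg.lstsq(r, d, rcond=None)
    F = F_T.T
    return 0.5 * (F.T @ F - np.eye(2))

# Toy check: 2% stretch along x plus a slight shear
rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
deformed = rest @ np.array([[1.02, 0.01], [0.0, 1.0]]).T
print(estimate_strain(rest, deformed))  # diagonal ~ [0.02, 0.0]; off-diagonal = shear
```

The diagonal of E captures stretch along each axis and the off-diagonal captures shear, which is the "multiaxial" part; a real system would run this per camera frame and re-fit its reference markers whenever the device shifts, matching the recalibration behavior described above.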
AI reconstructs the intended words
Once the sensor captures the strain patterns, AI interprets them and reconstructs the intended words or sentences. POSTECH says the system can also use voice synthesis trained on the wearer’s vocal profile, so the output sounds like the person speaking rather than a generic machine voice.
That is the smart bit. Plenty of assistive tech can produce speech, but sounding like yourself is a different class of benefit, especially for people recovering from vocal cord damage or laryngeal surgery.
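As for the AI stage, the writeup doesn't specify an architecture, but turning a time series of strain readings into text is a textbook sequence-to-sequence problem. A hypothetical minimal decoder, with every name and dimension below an illustrative assumption rather than the published design, could look like this:

```python
# Hypothetical sketch, not the published model: a small recurrent network
# that maps per-frame strain features (e.g., flattened 2x2 strain tensors)
# to character logits, suitable for training with CTC loss (not shown).
import torch
import torch.nn as nn

class SilentSpeechDecoder(nn.Module):
    def __init__(self, n_features: int = 4, n_chars: int = 28, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_chars + 1)  # +1 for the CTC blank

    def forward(self, strain_seq: torch.Tensor) -> torch.Tensor:
        # strain_seq: (batch, time, n_features) strain readings per video frame
        out, _ = self.rnn(strain_seq)
        return self.head(out).log_softmax(dim=-1)  # (batch, time, n_chars + 1)

# Toy forward pass: a 2-second clip at 30 fps, 4 strain components per frame
model = SilentSpeechDecoder()
logits = model(torch.randn(1, 60, 4))
print(logits.shape)  # torch.Size([1, 60, 29])
```

The decoded text would then drive a text-to-speech model adapted to the wearer's vocal profile, which is what produces the "sounds like you" output described above.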
In testing, the system reportedly held up in noisy settings, including industrial environments where microphones tend to give up and make everyone stare at the ceiling. If that performance holds beyond the lab, the use case is bigger than medical accessibility: meetings, libraries, and other places where speaking aloud feels socially illegal suddenly get a lot more interesting. The key details, per the researchers:
- Wearable type: neck-mounted multiaxial strain mapping sensor
- Core components: miniature camera and flexible silicone with reference markers
- Output: reconstructed words or sentences, plus synthesized speech based on the user’s vocal profile
- Published in: Cyborg and Bionic Systems
The race to make wearables less annoying
POSTECH’s work fits a broader push in wearables toward invisible input: devices that understand intent without demanding a lot of physical effort. Apple has been circling similar territory with AirPods Pro ideas tied to silent interaction, while smart glasses makers are betting that AI will make hands-free control feel normal rather than futuristic. The common thread is simple: the best interface is often the one you barely notice.
The researchers say they want to refine accuracy and expand language support before broader deployment. The obvious next question is whether this can move from promising prototype to something people can wear all day without feeling like they are auditioning for a medical drama.

