
A research team led by Cornell University has developed a groundbreaking AI-powered ring that can track fingerspelling in American Sign Language (ASL) in real time.
This wearable device, called SpellRing, uses micro-sonar technology to detect subtle hand and finger movements, making it easier for people who use ASL to enter text into computers and smartphones.
Fingerspelling is used in ASL to spell out words that don’t have specific signs, such as names and technical terms.
Most existing ASL recognition devices are large and uncomfortable, making them impractical for everyday use. SpellRing aims to change that by offering a small, wearable solution.
Hyunchul Lim, a doctoral student at Cornell and the lead researcher, said the goal was to create a single, simple ring that could capture all the complex movements of fingerspelling.
The result is a device worn on the thumb, containing a microphone and speaker that send and receive inaudible sound waves to track movements.
A tiny gyroscope inside also helps detect hand motion.
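The article does not detail the signal processing, but micro-sonar systems of this kind typically emit a near-ultrasonic sweep from the speaker and cross-correlate the microphone's recording against it to obtain an echo profile whose peaks shift as the hand moves. The sketch below illustrates that general idea only; the sample rate, chirp band, and frame length are assumptions, not SpellRing's actual parameters.

```python
# Illustrative micro-sonar sketch (not SpellRing's actual signal chain):
# cross-correlate the transmitted inaudible chirp with the received audio
# to get an "echo profile" whose peak shifts as the hand moves.
# Sample rate, chirp band, and frame length are assumptions.
import numpy as np
from scipy.signal import chirp, correlate

fs = 48_000                                    # typical audio sample rate
t = np.arange(0, 0.01, 1 / fs)                 # one 10 ms frame
tx = chirp(t, f0=18_000, f1=21_000, t1=t[-1])  # near-inaudible linear sweep

def echo_profile(rx_frame: np.ndarray) -> np.ndarray:
    """Correlate one received frame against the transmitted chirp."""
    return correlate(rx_frame, tx, mode="same")

# Dummy received frame: an attenuated, delayed copy of the chirp plus noise.
rx = 0.3 * np.roll(tx, 40) + 0.01 * np.random.randn(tx.size)
profile = echo_profile(rx)
print(profile.argmax())  # the peak position tracks the echo delay
```

Successive echo profiles, stacked over time, form the movement signal that a recognizer can learn from.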
The data collected by these sensors is processed by a deep-learning AI system, which predicts the fingerspelled letters with high accuracy. Unlike other systems that require large equipment, SpellRing is compact—about the size of a U.S. quarter.
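The article gives no model details, but the pipeline it describes (echo profiles from the micro-sonar plus gyroscope readings, fed to a deep-learning classifier over the 26 letters) can be sketched roughly as follows. This is a minimal illustration under those assumptions, not the team's published network; the input shapes, layer sizes, and the name SpellingNet are invented for the example.

```python
# Minimal sketch (not the SpellRing team's code): a deep-learning classifier
# that fuses two sensor streams -- an acoustic echo profile from the ring's
# micro-sonar and motion readings from its gyroscope -- and predicts one of
# the 26 fingerspelled letters. Input shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

class SpellingNet(nn.Module):
    def __init__(self, num_letters: int = 26):
        super().__init__()
        # Acoustic branch: 1D convolutions over the echo-profile time series.
        self.acoustic = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time dimension
        )
        # Motion branch: 3-axis gyroscope samples over the same window.
        self.motion = nn.Sequential(
            nn.Conv1d(in_channels=3, out_channels=16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Fused features -> letter scores.
        self.classifier = nn.Linear(32 + 16, num_letters)

    def forward(self, echo: torch.Tensor, gyro: torch.Tensor) -> torch.Tensor:
        a = self.acoustic(echo).squeeze(-1)   # (batch, 32)
        m = self.motion(gyro).squeeze(-1)     # (batch, 16)
        return self.classifier(torch.cat([a, m], dim=1))

# Example with dummy data: a batch of 8 windows, 256 echo samples each
# and 100 gyroscope samples per axis.
model = SpellingNet()
logits = model(torch.randn(8, 1, 256), torch.randn(8, 3, 100))
predicted_letters = logits.argmax(dim=1)   # indices 0..25 map to 'a'..'z'
```

A real system would likely use a sequence model so letters are predicted in context rather than from isolated windows, but the sketch captures the basic fusion of acoustic and motion data.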
Tested and Proven
To test its performance, the researchers asked 20 ASL users, both beginners and experts, to fingerspell more than 20,000 words while wearing SpellRing.
The device correctly identified between 82% and 92% of the letters, depending on word difficulty. These results are comparable to those of larger, more complex ASL recognition systems.
One challenge in developing SpellRing was training the AI to recognize the 26 hand shapes of the ASL alphabet, one for each letter.
ASL users often adjust their fingerspelling for speed and comfort, which makes recognition more difficult. The research team addressed this variability by refining their machine-learning algorithms.
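The article does not say how the algorithms were refined, but one common, generic way to make a classifier robust to signer-to-signer variation is to augment the training data with small random distortions of the recorded sensor windows. The sketch below shows that general technique only; it is not the team's method, and the noise levels, scaling range, and function name are arbitrary assumptions.

```python
# Generic data-augmentation sketch (not the SpellRing team's method):
# perturb recorded sensor windows so a letter classifier sees the kind of
# variation that arises when signers fingerspell faster or more loosely.
# Noise levels and scaling ranges here are arbitrary assumptions.
import numpy as np

def augment_window(echo: np.ndarray, gyro: np.ndarray, rng: np.random.Generator):
    """Return randomly perturbed copies of one echo-profile and gyro window."""
    # Random amplitude scaling mimics lighter or firmer hand shapes.
    scale = rng.uniform(0.8, 1.2)
    # Additive noise mimics sensor and environmental variability.
    echo_aug = echo * scale + rng.normal(0.0, 0.01, size=echo.shape)
    gyro_aug = gyro * scale + rng.normal(0.0, 0.05, size=gyro.shape)
    # A random time shift mimics faster or slower fingerspelling.
    shift = rng.integers(-5, 6)
    echo_aug = np.roll(echo_aug, shift, axis=-1)
    gyro_aug = np.roll(gyro_aug, shift, axis=-1)
    return echo_aug, gyro_aug

# Usage: expand each labeled training window into several varied copies.
rng = np.random.default_rng(0)
echo = np.zeros(256)          # placeholder echo profile
gyro = np.zeros((3, 100))     # placeholder 3-axis gyroscope window
augmented = [augment_window(echo, gyro, rng) for _ in range(4)]
```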
Looking Ahead: Full ASL Translation
While SpellRing is a major step forward, it only tracks fingerspelling, which is just a small part of ASL. The research team, including linguistics expert Jane Lu, acknowledges that there is still a long way to go before creating a device that can fully understand ASL, including facial expressions and body movements.
Future plans for the project include integrating the technology into eyeglasses to capture facial expressions, upper body movements, and head gestures. Since ASL is a rich visual language that relies on more than just hands, this could bring us closer to a complete ASL translation system.
“ASL is complex and expressive,” said Lim, who even took ASL courses to better understand the language. “We hope to make technology more inclusive and accessible for the deaf and hard-of-hearing community.”
SpellRing will be presented at the Association for Computing Machinery's CHI conference in Japan this April, marking an exciting step toward improving ASL communication technology.