Researchers at MIT in the United States have pioneered a new computer interface that can accurately transcribe words that the user concentrates on, but which aren’t actually vocalised.
The system, known as AlterEgo, consists of a wearable with electrodes and bone-conduction headphones, plus an associated computing system. The electrodes pick up neuromuscular signals in the jaw and face that are triggered when the user internally verbalises words without actually speaking them. A machine-learning system then interprets the signals, learning to correlate particular signal patterns with particular words. The headphones transmit the system's responses and other information as vibrations through the bones of the face, making it possible to hold a two-way conversation without ever speaking aloud or hearing audible sound.
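The signal-to-word step described above is, at its core, a supervised classification problem: feature vectors extracted from windows of electrode readings are matched against the signatures learned for each vocabulary word. The following is a minimal illustrative sketch of that idea using synthetic data and a simple nearest-centroid classifier; the feature extraction, vocabulary, and classifier here are hypothetical stand-ins, not the actual MIT system.

```python
import numpy as np

# Hypothetical sketch of an AlterEgo-style pipeline: windows of
# neuromuscular signal readings are reduced to feature vectors and
# classified against per-word "signatures" (centroids).

rng = np.random.default_rng(0)
VOCAB = ["up", "down", "left", "right"]  # illustrative restricted vocabulary

def synthetic_signal(word_idx, n_channels=7, n_samples=50):
    """Fake electrode window: each word shifts one channel's mean."""
    base = np.zeros(n_channels)
    base[word_idx % n_channels] = 1.0
    return base + 0.1 * rng.standard_normal((n_samples, n_channels))

def features(window):
    """Per-channel mean amplitude as a crude feature vector."""
    return window.mean(axis=0)

# Per-user "training": average feature vector (centroid) for each word.
centroids = {
    word: np.mean([features(synthetic_signal(i)) for _ in range(20)], axis=0)
    for i, word in enumerate(VOCAB)
}

def classify(window):
    """Assign the window to the vocabulary word with the nearest centroid."""
    f = features(window)
    return min(centroids, key=lambda w: np.linalg.norm(f - centroids[w]))

print(classify(synthetic_signal(2)))  # prints "left" for this synthetic data
```

The per-user centroid step also hints at why, as noted later in the article, each device must be trained on its wearer: the learned signatures are specific to one person's physiology.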
The system has potential for a number of industrial and professional uses, such as in high-noise environments, but it could also prove valuable for people who are unable to speak clearly because of disability or injury. Imagine Professor Stephen Hawking's computer-based communication system, which tracked his eye movement, but without requiring any outwardly discernible input from the user.
In one experiment to test the device, subjects silently relayed opponents' moves during a chess match and silently received recommended counter-moves in response.
Pattie Maes, an author of the paper and professor of media arts and sciences, explained:
“We basically can’t live without our cellphones, our digital devices. But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.
“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”
Using a wearable interface prototype, the researchers found that after just over 15 minutes of per-user training, the system achieved a transcription accuracy of approximately 92 percent on a restricted vocabulary. The MIT researchers hope to expand the range of vocabulary and move towards full conversations in the not-too-distant future.
Research in this area has been carried out for decades, with many earlier systems limited by the bulky headgear needed to track brainwaves. One of the difficulties in the approach taken with AlterEgo is that the signals vary from person to person, making a multi-user or user-independent system extremely difficult. Currently, each device must be carefully tuned to the individual's unique physiology and requires per-user training and customisation.