Editor’s Note: A new ear-level device being developed by a UK primary care physician, together with a team of researchers at the University of Bath, offers hope for people with neurological conditions who are unable to speak or operate a keyboard, providing new ways of communicating via a computer.
HHTM President and CEO, Kevin Liebe, recently connected with Dr. Nick Gompertz, Director of Earswitch Ltd, to discuss the technology.
KL: To start, could you provide readers with some of your background and how you came up with the Earswitch concept?
I had always known I could control a muscle in my ear, and so wondered if there was a way this could be harnessed to allow people with ALS and other “locked-in syndromes” to communicate.
It was 3 years ago, whilst watching a documentary, that I realised how to make this happen. The documentary was about an amazing 13-year-old who had written a book by looking at individual letters on a physical spelling board, despite being non-verbal because of severe involuntary movements caused by cerebral palsy; I felt that there must be a better way. I bought a cheap digital otoscope (ear camera) from Amazon and saw that the eardrum moved with my voluntary contractions of the middle ear muscle (the tensor tympani).
I used freely available, open-source security camera software to detect the movement and send a “keystroke” input to existing virtual on-screen keyboards. Within 6–12 months I could communicate hands-free by typing on the on-screen keyboard, in a similar way to the late Professor Stephen Hawking, using only an earpiece: the Earswitch.
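The prototype pipeline described here, motion detected in camera frames triggering a “keystroke”, can be illustrated with a minimal sketch. This is not the Earswitch code or the actual security camera software; the frame data, threshold value, and function names below are purely illustrative assumptions, and real key injection to an on-screen keyboard is left as a placeholder.

```python
# Illustrative sketch only: detect movement between successive grayscale
# frames (as security camera software does) and flag a "switch" event.
# In the real prototype, such an event would be forwarded as a keystroke
# to a virtual on-screen keyboard; that step is omitted here.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two equal-size frames."""
    total = sum(abs(a - b) for a, b in zip(frame_a, frame_b))
    return total / len(frame_a)

def detect_switch(frames, threshold=10.0):
    """Return one True/False per consecutive frame pair: True when the
    tracked region (e.g. the eardrum) has visibly moved."""
    return [mean_abs_diff(prev, cur) > threshold
            for prev, cur in zip(frames, frames[1:])]

# Three tiny 4-pixel "frames": sensor noise, then a large movement.
frames = [[100, 100, 100, 100],
          [101, 100, 99, 100],
          [150, 160, 140, 155]]
print(detect_switch(frames))  # -> [False, True]
```

The threshold separates sensor noise from a deliberate tensor tympani contraction; in practice it would be calibrated per user.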
KL: What are the primary use cases you envision for people using Earswitch?
I quickly realised, however, that there was benefit for everyone: a hands-free, silent and invisible control for all consumer tech, from earphones to hearing aids. For example, answering your mobile without putting down your shopping, skipping a track whilst mowing the lawn, or switching on your sports head-cam while mountain biking or skiing, all without stopping; and any other tech you can envisage!
KL: What stage of development is the technology in today? Would this be something that could be added to existing in-ear devices, such as hearing aids and earbuds, or would this be something that would be integrated into the device itself?
Earswitch has UK National Institute for Health Research (NIHR) funding to work with the University of Bath to investigate how many people can move this muscle and to help develop the technology, with the hope of moving on to formal user trials at the next stage.
I am hoping that ultimately the Earswitch will become an integrated interface in all hearing aids and earphones; this would mean that people with assistive technology or communication needs would gain from being able to buy mass-produced, high-tech and cheaper off-the-shelf consumer devices that have this functionality built in.
Specifically related to hearing aids, I feel that the Earswitch interface could be developed to allow users to intentionally switch the aids to amplify the voice of interest, rapidly (and hands-free) switching between speakers during conversations in the “cocktail party scenario”. I also believe that the middle ear muscle (the tensor tympani) is actually a muscle of “auditory attention”, and so other, more sensitive Earswitch sensors, for example laser-based ones, would be able to detect which sounds and directions the hearing aid user is trying to listen to, and amplify these.
There are also other spin-out ideas related to new forms of sound transmission for hearing aid users, which may even lead to the possibility of a new form of wireless cochlear implant, but these are at a very early stage.
KL: This technology really seems like it has potential to be a game-changer for people with ALS and other ‘locked-in’ syndromes, and maybe even more broadly in other ear-level devices. Where can people learn more about the work you’re doing to develop the technology behind Earswitch?