We Want Captions

Gael Hannan
November 25, 2020

We need to see what we cannot hear.

When people who are deaf or who have hearing loss can’t hear the words, we need to see them being formed, through speechreading. But that helps with only about half the words, because many speech sounds are made out of sight – behind the lips, back in the throat, the tongue touching the teeth and the roof of the mouth.

So, we also need to see the words in text form – spelled out and strung together in real time, as they happen. This is called captioning, and it shows us the sounds we are missing, not just the speech. Captioning will tell us that birds are singing, that footsteps can be heard, or that eerie music is playing. As best it can, captioning tells us what other people are hearing, so that we can experience the same reactive emotions that the writers or speakers intend.

Captions are generated in several ways, and we, the people with low or non-existent hearing, don’t particularly care how they are made, as long as we have access to them.

Closed captioning can be turned on and off by the user, on TV and on programs we watch on our electronic devices. Pre-recorded programs have the captioning added after filming but before the show is viewed. Live programs, such as the news, are captioned in real time by a captioner who may be working from home a thousand miles away. In meetings and conferences, live captioners, either in the room with us or phoning in from far away, provide CART (Communication Access Realtime Translation), which enables those of us with hearing loss to participate on a level playing field.

But in just the past few years, the exciting and ground-breaking technology of Automated Speech Recognition (ASR) has made the hearing loss life infinitely richer and more accessible. We don’t have to wait for a live captioner to provide text interpretation. We don’t suffer the disappointment of programs that are not closed-captioned. Now, in our daily lives, we can connect thanks to computer-generated captions in phone conversations, video calls and online virtual meetings. ASR is not perfect yet, but speech-to-text apps, which are bubbling up almost daily, allow me to use my phone as an interpreter when I can’t understand someone behind their mask, or when a video on my computer is not captioned.

ASR captions are included in top virtual meeting platforms such as Google Meet and Microsoft Teams.

“You forgot Zoom,” you might say.

No, I did not forget.

Zoom does not offer ASR, thereby cutting off millions of people who are deaf and hard of hearing from staying fully connected in a call.

Oops, I forgot. Zoom does offer captioning, but only to its top-end paid subscribers.

Zoom has been wonderful in this pandemic, stepping up to handle the gazillion Zoom calls that have been our substitutes for face-to-face, skin-to-skin connections. Zoom needs to step up once again and include us, the people who can’t hear, by offering free captions. Not to do so is discrimination. No business, government service or city planner would charge a person using a wheelchair to use an elevator or ramp, so why in the world should people with deafness be charged to understand?

“Captions are our ramps,” says hearing health advocate Shari Eberts in a recent interview with National Public Radio (NPR). “Why should we have to pay to use the feature we require for equal access?”

Why indeed. Not only are we advocating for Zoom to step up and do the right thing, which would keep people using Zoom instead of switching to other platforms that do offer ASR, but we are also asking all forms of media to offer text interpretation, free of charge.

We are not asking for special consideration. We are asking for the same level of communication access that hearing people have.

We want captions. We need captioning.

 

Note: Join the advocacy for better access. Sign the Zoom captions petition started by Shari Eberts, which now has 59,000 signatories… and counting.
