In an effort to enable deaf and hard-of-hearing people to participate in the mobile technology revolution, the University of Washington and Cornell University developed MobileASL, a program that preferentially compresses video data so clips of sign language can travel across slow wireless networks without becoming unintelligibly blurry. After four years of development, the program has finally entered the field-testing phase.
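The core idea behind preferential compression is region-of-interest encoding: spend bits where the signer's hands and face are, and economize on the static background. The sketch below is a toy illustration of that principle, not MobileASL's actual encoder; the block size, the crude brightness-based "signer" detector, and the quantizer values are all assumptions for demonstration.

```python
import numpy as np

def roi_quantizer_map(frame, block=8, signer_thresh=0.5, roi_q=4, bg_q=32):
    """Toy region-of-interest compression plan.

    Blocks likely to contain the signer (approximated here by a crude
    brightness test standing in for real skin/motion detection) are
    assigned a fine quantizer (roi_q, more bits, sharper image), while
    background blocks get a coarse one (bg_q, fewer bits). All names
    and thresholds are illustrative.
    """
    h, w = frame.shape  # grayscale frame for simplicity
    qmap = np.full((h // block, w // block), bg_q)
    for by in range(h // block):
        for bx in range(w // block):
            blk = frame[by*block:(by+1)*block, bx*block:(bx+1)*block]
            if blk.mean() / 255.0 > signer_thresh:  # stand-in detector
                qmap[by, bx] = roi_q
    return qmap

# A bright "signer" region in the top-left corner of a dark frame
frame = np.zeros((16, 16))
frame[:8, :8] = 255
qmap = roi_quantizer_map(frame)
```

In a real encoder the detector would track skin tone and motion across frames, but the budget trade-off is the same: fine quantization where intelligibility lives, coarse quantization everywhere else.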
This summer, eleven participants made about 200 real-time sign language phone calls. Each call needed only 30 kilobytes per second of bandwidth to transmit American Sign Language messages easily understood by both parties.
"We know these phones work in a lab setting, but conditions are different in people’s everyday lives," said Eve Riskin, the project leader and a professor of electrical engineering at Washington University. "The field study is an important step toward putting this technology into practice."
Cellphone users in Asia and Europe have held sign language conversations over their phones for years, but they have the benefit of much faster wireless networks than are presently available in America. The U.S. still suffers from significant gaps in 3G network coverage, and some carriers add extra charges for, or block outright, the bandwidth-gobbling video calls that signing requires.
By using MobileASL, deaf and hard-of-hearing people can communicate in areas with slow cellphone connections without fear of additional charges or blocked calls. In fact, a MobileASL call uses about one-tenth the bandwidth of an iPhone FaceTime video chat.
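A quick back-of-the-envelope calculation shows what those figures mean in practice, using the article's 30-kilobytes-per-second rate and treating its "ten times less than FaceTime" claim as exact (the call length is an arbitrary example):

```python
# Data use for a hypothetical 5-minute call, from the article's numbers.
MOBILEASL_KB_PER_SEC = 30                       # from the article
FACETIME_KB_PER_SEC = MOBILEASL_KB_PER_SEC * 10  # "10x" taken as exact
CALL_SECONDS = 5 * 60

mobileasl_mb = MOBILEASL_KB_PER_SEC * CALL_SECONDS / 1000  # 9.0 MB
facetime_mb = FACETIME_KB_PER_SEC * CALL_SECONDS / 1000    # 90.0 MB
print(f"MobileASL: {mobileasl_mb} MB, FaceTime: {facetime_mb} MB")
```

On a capped or throttled data plan, that difference decides whether video calling is usable at all.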
While some deaf and hard-of-hearing people have made do with texting alone, people involved in the field test have reported that texting simply doesn't replicate the experience of having a real conversation.
"Texting is for short things, like 'I'm here,' or, 'What do you need at the grocery store?'" said Josiah Cheslik, one of the field testers.
"This is like making a real phone call."