Stanford startup TranscribeGlass seeks to bring ease and affordability to assistive technology

Feb. 6, 2023, 12:10 a.m.

Meet TranscribeGlass, an affordable augmented reality (AR) device that attaches to your glasses and, paired with your transcription software of choice, projects real-time captions in front of your eyes.

The device comes from CEO and co-founder Madhav Lavakare, Yale ’25, and co-founder Tom Pritsky M.S. ’23. Both students have close ties to the world of assistive tech and independently pursued the idea before joining forces in 2021. 

The company recently began manufacturing its first 150 preorders, which it hopes to finish shipping in the next few months. The TranscribeGlass Beta is being sold for $55, with the final version expected to cost around $95.

Pritsky — founder of Stanford’s club for the Deaf and Hard of Hearing — has had bilateral hearing loss since the age of three and uses hearing aids and lip reading to communicate. He has long imagined the possibility of a heads-up captioning device.

“I really like captions for movies,” Pritsky said. “I thought it would be fantastic to have them for real life.” 

TranscribeGlass allows users to choose an external captioning source, such as caption files from a movie theater, an automatic speech recognition service or a live human captioner. The captions are then sent via Bluetooth to the hardware, which projects them into the user’s field of view using augmented reality. The user can adjust the size and position of the text to best fit their environment.
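The software side of that pipeline is conceptually simple: read captions from whichever source the user has chosen and forward them, along with the user’s display preferences, to the glasses. Below is a minimal Python sketch of that flow; the CaptionSettings, transcription_stream and send names are illustrative assumptions for this article, not TranscribeGlass’s actual interfaces.

    from dataclasses import dataclass

    @dataclass
    class CaptionSettings:
        """User-adjustable rendering preferences (size and position)."""
        font_size: int = 14   # point size of the projected text
        x: float = 0.0        # horizontal offset in the field of view, -1 to 1
        y: float = -0.5       # vertical offset, -1 to 1 (negative = lower)

    def relay_captions(transcription_stream, send, settings):
        """Forward each caption from the chosen source to the display.

        transcription_stream yields caption strings from any external
        source (a caption file, an ASR service or a human captioner);
        send pushes a small payload over the Bluetooth link. Both are
        stand-ins for illustration, not TranscribeGlass's real interfaces.
        """
        for caption in transcription_stream:
            send({
                "text": caption,
                "size": settings.font_size,
                "pos": (settings.x, settings.y),
            })

    if __name__ == "__main__":
        # Demo with a canned transcript; print() stands in for Bluetooth.
        demo = iter(["Hello!", "Nice to see you.", "Shall we walk?"])
        relay_captions(demo, send=print, settings=CaptionSettings())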

Lavakare was inspired to create the device as a high school student after learning that a Deaf friend had dropped out due to accessibility issues. “It’s 2017,” Lavakare said. “Why isn’t there something that can help my friend participate in conversations in a mainstream setting?”

Hearing aids are prohibitively expensive for many. “The majority of people who can use hearing aids simply don’t use them due to a myriad of factors, one of which is cost,” Lavakare said, noting that lower-end models start around $3,000. Cochlear implants are even more expensive and require invasive surgery.

In addition, hearing aids and cochlear implants don’t work for everyone, and some Deaf and Hard of Hearing people choose not to use them. Those who do often cannot understand all speech, especially in settings with lots of background noise.

“If you put me in a bar, I have a really hard time,” Pritsky said. “There are conversations where I primarily nod along; my understanding drops to 20% at best.” 

Captioning services such as Otter.ai and Google’s Live Transcribe, on the other hand, are far more affordable, but they require the user to look away from the speaker to read from a phone or computer screen.

With both AR displays and live captioning technology already available, Lavakare thought to combine the two, using an AR display to allow the user to “look at the speakers, look at their non-verbal communication cues and see the captions in their field of vision.”

Thus, Lavakare began constructing the first of many prototypes, “hacking together electronics, CAD models, software, writing all the code.” Despite initial reservations about early prototypes, Lavakare has since been encouraged by the positive reception from a growing group of users from India’s National Association of the Deaf, Deaf schools and community meetups.

As the project gained traction, Lavakare began raising funding, recruiting mentors and assembling a small team of employees. It was difficult at first. 

“I was an 18-year-old kid who didn’t know what he was doing and didn’t have a plan for college,” Lavakare said. “Investors would sometimes just laugh me out of the room.”

According to Lavakare, assistive technology is “not a sexy space,” not attracting the same funding as more “exciting” fields like AI. Lavakare also said it’s “imperative” for an investor to be the right fit for the company, sharing in their vision of access and affordability. 

In 2020, the company was accepted into a startup incubator at the Indian Institute of Technology Delhi and received a further grant from the U.S. and Indian governments.

Pritsky came on board as co-founder in 2021, serendipitously meeting Lavakare through a mutual friend. According to Lavakare, running the business on his own had become “nearly impossible” as a full-time college freshman, making Pritsky a much-needed addition to the team.   

“It’s very powerful to have the perspective of a user as the founder of a company,” Lavakare added. 

Feedback from users has been a crucial part of the design process from the beginning. Lavakare said that over the course of five prototype iterations, at least 300 people have tested the product.

Pritsky piloted the product with a friend who has more severe hearing loss. “Usually he has to look at me, so we can’t walk side by side,” Pritsky said. “We were walking to a restaurant, and we continually had to stop to talk; afterward, we walked back with the device on. He could follow along with the conversation walking side by side, and that was pretty incredible.”

“We have never had an exchange like that in the many years that I’ve known him,” Pritsky said.

“The real test of these devices is determined by the individuals for whom they are intended,” David Jaffe, an adjunct lecturer who teaches ENGR 110: Perspectives in Assistive Technology, wrote to The Daily. Jaffe said the product’s “battery life, accuracy” and “freedom from breakdown” are essential, as is “how they might be provided to low-income users.”

Cathy Haas, a lecturer in the Stanford Language Center who teaches American Sign Language (ASL), got the chance to trial the device when she ran into Pritsky at Stanford’s Coffee House.

“I was a little bit skeptical at first, but then I put it on, and I fell in love with it,” Haas said in an interview conducted in American Sign Language with an interpreter present. 

As a Deaf person, she sees many situations where the device could be helpful.

“In a medical situation, I’ve had huge problems. Sometimes they’ll call my name, and I won’t hear anything, obviously. If I had this device, I’d be able to respond immediately,” Haas said.

She also pointed to use cases including watching movies, receiving emergency warnings and traveling. But for Haas, who does not speak, the device isn’t suited to every situation.

“If I’m having a one-on-one conversation with somebody, it may be less useful because I wouldn’t be able to respond,” Haas said. “If you both can talk, then that’s fine, but there’s a big variety in the community.”

In these cases, Haas prefers interpreters, who can communicate in her first language, ASL, and convey the speaker’s tone through their body language and facial expressions. For conversational settings, she said, the device may be better suited to other members of the community.

Nonetheless, Haas has excitedly shared the product with her colleagues and awaits its upcoming launch. “I hope I can get one,” Haas said.

Cameron Duran '24 is a vol. 265 Arts & Life Managing Editor. Contact The Daily’s Arts & Life section at arts ‘at’ stanforddaily.com.
