EngSci 1T5s recognized by CNIB for wearable tech that assists the visually impaired

The Canadian National Institute for the Blind (CNIB) has honoured a team of U of T Engineering alumni for designing a wearable device that helps blind people read.

On September 12, Ruben Larsen, Fraser Le Ber, Chang Liu and Rustom Shareef (all EngSci 1T5) accepted the CNIB’s E. (Ben) & Mary Hochhausen Access Technology Research Award for C-HEAR, a hands-free, headphones-style device that combines optical character recognition (OCR) with text-to-speech (TTS) technology.

C-HEAR enables a user to take a photo of a page of text at a comfortable reading distance and then listen to the text played back with minimal delay. The team developed the technology last year in their fourth-year Engineering Science capstone design course.

“We feel really honoured,” said Liu. “The award was presented to us at the CNIB’s headquarters, and it wasn’t until we arrived that we realized the magnitude and impact that our little capstone school project may have in the future. It really brought a lot of meaning to the work that we did. It really put things into perspective for us.”

“I want to congratulate the team for choosing such an interesting and impactful project and for achieving such a wonderful result,” said Mark Kortschot (EngSci), professor and division chair. “The mission of Engineering Science at the University of Toronto is to help our bright young students acquire the knowledge and skills needed to make a significant difference in the world, and this award shows that they are doing just that.”

C-HEAR’s “brain,” according to Liu, is a $30 single-board Raspberry Pi computer. The team built all of their code and input/output handling for the device on the credit card-sized computer. The device features play/pause and rewind functionality and a rechargeable battery with up to five hours of life.

Liu and Shareef focused on turning the picture into a text file, while Larsen and Le Ber dedicated their time to converting the text to a sound file. All four collaborated on integrating the two technologies.
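The article does not describe the team's actual code, but the division of labour it outlines — an OCR stage that turns a photo into text, a TTS stage that turns text into sound, and playback controls on top — can be sketched roughly as follows. The `ocr` and `tts` callables here are hypothetical stand-ins for whichever OCR and speech libraries the team used; the sentence-by-sentence player is one plausible way to implement the play/pause and rewind features mentioned above.

```python
import re

def split_sentences(text):
    """Split recognized text into sentences so playback can pause and rewind."""
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

class Reader:
    """Hypothetical capture-to-speech pipeline in the spirit of C-HEAR.

    ocr: callable mapping a captured image to recognized text
    tts: callable that speaks (or records) one sentence of text
    Both are injected stand-ins, not the team's actual components.
    """

    def __init__(self, ocr, tts):
        self.ocr = ocr
        self.tts = tts
        self.sentences = []
        self.pos = 0       # index of the next sentence to speak
        self.paused = False

    def capture(self, image):
        """Run OCR on a photo and queue the result for playback."""
        self.sentences = split_sentences(self.ocr(image))
        self.pos = 0

    def play(self):
        """Speak sentences from the current position until done or paused."""
        self.paused = False
        while self.pos < len(self.sentences) and not self.paused:
            self.tts(self.sentences[self.pos])
            self.pos += 1

    def pause(self):
        self.paused = True

    def rewind(self):
        """Step back one sentence, as with the device's rewind button."""
        self.pos = max(0, self.pos - 1)
```

With a real OCR backend and speech synthesizer plugged in, `capture` followed by `play` would give the photo-to-audio flow the article describes; the sentence index is what makes rewinding to "re-hear" a passage straightforward.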

The few products currently available on the market that combine OCR/TTS capabilities — G.P. Imports’ Image to Speech mobile application, for example — fall short of what Liu feels those with vision loss require.

“They are all smartphone-based apps,” he said. “We felt that a smartphone may not always be available to someone who is visually impaired. The other problem with those products is that they weren’t properly developed or maintained.”