Deep Learning Reinvents the Hearing Aid
From: IEEE Spectrum - 12/06/2016
By: DeLiang Wang

Finally, wearers of hearing aids can pick out a voice in a crowded room.

Fewer than 25 percent of people who need a hearing aid actually use one. The greatest frustration among potential users is that a hearing aid cannot distinguish between, for example, a voice and the sound of a passing car when the two occur at the same time. The device cranks up the volume on both, creating an incoherent din. It's time to solve this problem.

To produce a better experience for hearing aid wearers, my lab at Ohio State University, in Columbus, recently applied machine learning based on deep neural networks to the task of segregating sounds. We have tested multiple versions of a digital filter that not only amplifies sound but also isolates speech from background noise and automatically adjusts the volume of each separately.

Read the entire article at:
http://spectrum.ieee.org/consumer-electronics/audiovideo/deep-learning-reinvents-the-hearing-aid

Also on this webpage: Hearing Aids through the Years

Link: Perception and Neurodynamics Laboratory (PNL)
http://web.cse.ohio-state.edu/~dwang/pnl
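
The segregation described above is commonly framed as time-frequency masking: classify each time-frequency unit of the noisy signal as speech-dominant or noise-dominant, then suppress the noise-dominant units. The sketch below is only an illustration of that idea, not the lab's actual system; it computes an "ideal" binary mask from a toy mixture where the clean speech and noise are known (in a real hearing aid, a trained neural network would have to estimate this mask from the mixture alone). All names and parameter values here are assumptions for the demo.

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Short-time Fourier transform: windowed, overlapping FFT frames."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)  # shape: (n_frames, n_bins)

def ideal_binary_mask(speech_spec, noise_spec, threshold_db=0.0):
    """1 where local speech power exceeds noise power by threshold_db, else 0.

    This 'oracle' mask uses the known clean signals; a DNN-based system
    would instead learn to predict such a mask from the mixture.
    """
    local_snr = np.abs(speech_spec) ** 2 / (np.abs(noise_spec) ** 2 + 1e-12)
    return (10 * np.log10(local_snr + 1e-12) > threshold_db).astype(float)

# Toy mixture: a 440 Hz tone standing in for "speech", plus white noise.
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs                      # one second of audio
speech = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * rng.standard_normal(len(t))
mixture = speech + noise

S, N, M = stft(speech), stft(noise), stft(mixture)
mask = ideal_binary_mask(S, N)
masked = M * mask  # keep speech-dominant units, zero out the rest
```

Applying the mask leaves the mixture's spectrogram much closer to the clean speech spectrogram than the unprocessed mixture is, which is exactly the effect the article describes: the speech is retained while the background din is suppressed.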