Google’s Pixel Buds wireless earbuds have offered a fantastic real-time translation feature for a while now. Over the past few years, brands such as Timekettle have offered similar earbuds for business customers. However, all these solutions can only handle one audio stream at a time for translation.
The folks over at the University of Washington (UW) have developed something truly remarkable: AI-driven headphones that can translate the voices of multiple speakers at once. Think of it as a polyglot in a crowded bar, able to follow the people around them speaking different languages, all at once.
The team calls its innovation Spatial Speech Translation, and it comes to life courtesy of binaural headphones. For the unaware, binaural audio aims to reproduce sound just the way human ears perceive it naturally. To record it, microphones are placed on a dummy head, spaced apart at the same distance as human ears.
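To get a feel for how binaural audio encodes direction, here is a minimal Python sketch that pans a mono signal by delaying and slightly attenuating the far-ear channel. It is an illustration only, not the UW system: the ear spacing, sample rate, and the simple delay-plus-attenuation model are all assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air
EAR_SPACING = 0.18      # m, assumed ear-to-ear distance
SAMPLE_RATE = 16_000    # Hz, assumed

def render_binaural(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Pan a mono signal by delaying and attenuating the far ear.

    Positive azimuth places the source to the listener's right,
    so the left channel arrives later and slightly quieter.
    """
    az = np.radians(azimuth_deg)
    # Interaural time difference: extra travel distance to the far ear.
    itd_seconds = EAR_SPACING * np.sin(az) / SPEED_OF_SOUND
    lag = int(round(abs(itd_seconds) * SAMPLE_RATE))
    delayed = np.concatenate([np.zeros(lag), mono])[: len(mono)]
    # Crude interaural level difference: the head shadows the far ear.
    gain = 1.0 - 0.3 * abs(np.sin(az))
    if itd_seconds >= 0:   # source on the right
        left, right = delayed * gain, mono
    else:                  # source on the left
        left, right = mono, delayed * gain
    return np.stack([left, right], axis=1)

# Example: a 440 Hz tone placed 45 degrees to the listener's right.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
stereo = render_binaural(np.sin(2 * np.pi * 440 * t), 45.0)
```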
This approach is crucial because our ears don’t just hear sound; they also help us gauge the direction it comes from. The overarching goal is to produce a natural soundstage with a stereo effect that delivers a live concert-like feel, or, in the modern context, spatial listening.
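Running that logic in reverse is the basis of localization: the tiny delay between what the two ears hear reveals where a sound came from. Below is a hedged sketch of that inverse step, estimating a source's horizontal angle by cross-correlating the left and right channels. Again, the constants and the simplified geometry are assumptions, not details of the team's pipeline.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air
EAR_SPACING = 0.18      # m, assumed ear-to-ear distance
SAMPLE_RATE = 16_000    # Hz, assumed

def estimate_azimuth(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate a source's horizontal angle from the inter-ear delay.

    Cross-correlates the two channels over physically plausible lags,
    picks the lag where they align best, and converts that interaural
    time difference back into an angle.
    """
    # The delay can never exceed the time sound takes to cross the head.
    max_lag = int(EAR_SPACING / SPEED_OF_SOUND * SAMPLE_RATE) + 1
    lags = np.arange(-max_lag, max_lag + 1)
    # Trim the edges so np.roll's wrap-around never contaminates the dot product.
    corr = [np.dot(left[max_lag:-max_lag],
                   np.roll(right, k)[max_lag:-max_lag]) for k in lags]
    itd = lags[int(np.argmax(corr))] / SAMPLE_RATE
    # Invert itd = EAR_SPACING * sin(azimuth) / SPEED_OF_SOUND.
    sin_az = np.clip(itd * SPEED_OF_SOUND / EAR_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_az)))
```

Fed the stereo tone from the rendering sketch above, this recovers a positive angle on the right side, which is the core cue a spatial system needs before it can separate and translate individual speakers.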
The work comes courtesy of a team led by Professor Shyam Gollakota, whose prolific repertoire includes putting underwater GPS on smartwatches, turning beetles into photographers, building brain implants that can interact with electronics, creating a mobile app that can detect ear infections, and more.