Hear What Matters — The Future of Sound with AI-Boosted Earbuds
What it means to have selective hearing
Smart sound filters. An AI-generated image with Dall-E
What if you had a personal sound engineer, fine-tuning the world’s noise, right in your ear?
Unlike traditional noise-canceling tech, this one lets you choose what you want to hear and what you don’t.
Sound is all around us, but we can’t shut it out the way we can shut our eyes.
Want to avoid seeing something?
Just close your eyes. It works like a charm for avoiding eye contact with your ex at a party.
Want to avoid hearing something?
Too bad, keep smiling and nodding along to your partner’s story.
So what if we could program our acoustic environment?
Real-time filter of environmental sounds
Researchers at the University of Washington have taken traditional earbuds to a new level.
By embedding a mini-brain (i.e. a neural network), they’ve trained the earbuds to recognize different types of sound, like speech, car horns, or birds chirping, and to filter each one out whenever we choose.
These earbuds have become smart enough to pick out the sounds we want to hear from the noise we don’t.
And it happens in real time!
The earbuds capture all the sounds around us and process them quickly enough to isolate the ones we want to hear. The rest is dialed down or muted. If the processing weren’t that fast, what you see and what you hear would fall out of sync. Just imagine watching someone talk but only hearing their voice three seconds later. Weird, no?
In a way, we’re getting a specific volume knob for each sound in our environment.
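The “volume knob per sound” idea can be sketched in a few lines. Assume a separation model has already split the microphone signal into per-class tracks (faked here with toy arrays), and the listener supplies a gain between 0 (mute) and 1 (keep) for each class. This is an illustrative sketch, not the researchers’ actual pipeline; the class names and the `remix` helper are hypothetical.

```python
import numpy as np

def remix(tracks: dict, gains: dict) -> np.ndarray:
    """Weight each separated per-class track by its user-chosen gain and sum.

    tracks: class name -> mono signal (all the same length), as a
            separation model would produce.
    gains:  class name -> 0.0 (mute) .. 1.0 (keep).
    Classes missing from `gains` default to fully muted.
    """
    out = np.zeros_like(next(iter(tracks.values())), dtype=float)
    for name, signal in tracks.items():
        out += gains.get(name, 0.0) * signal
    return out

# Toy example: keep speech, fade the car horn, mute the jackhammer.
tracks = {
    "speech":     np.array([0.5, 0.5, 0.5, 0.5]),
    "car_horn":   np.array([0.8, 0.0, 0.8, 0.0]),
    "jackhammer": np.array([0.9, 0.9, 0.9, 0.9]),
}
gains = {"speech": 1.0, "car_horn": 0.1}
mixed = remix(tracks, gains)  # speech intact, horn faint, jackhammer gone
```

The hard part in practice is the separation model itself; once per-class tracks exist, remixing is just this weighted sum.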
For now, the researchers can isolate 20 different sound classes and filter them within milliseconds. And they don’t just isolate sounds; the system also preserves the spatial feel of the environment. So, if a car is honking to your left, you’ll still hear it from the left, just perhaps as a faint background noise.
However, the system still struggles to distinguish sounds that resemble each other, like vocal music and human speech.
When the acoustic properties are that similar, the best way to improve the tech is to feed the system more real-world data, which will make it more accurate at filtering out undesired sounds.
There’s still a long way to go.
What does this mean for the future?
The scientists conducted real-world tests, having people walk around with these earbuds in various settings, such as streets or parks. They discovered that the system could accurately pick out desired sounds while keeping the listener aware of their surroundings.
It’s not just a lab-proven device.
And from here on, there are at least 3 ways this tech could redefine different industries.
Hearing aids
While traditional hearing aids help people hear more than they could before, they don’t discriminate between sounds. They amplify everything. With this tech, people could curate the sounds they need. For instance, if you walked along a busy street, your hearing aid would amplify only the important stuff (e.g. a friend’s voice, a car horn).
With this tech, people would get so much more clarity in a world of noise.
Virtual meetings
Imagine being at work and blocking out everything but your colleague’s voice. That dog barking, those kids screaming… all gone.
This tech can filter out irrelevant noise, making sure you catch every critical detail. Meetings could run shorter and communication would improve.
Here’s to a mute button for life’s distractions!
Movie and video game experiences
While immersing yourself in entertainment at home, you could mute the sound of your neighbor’s lawn mower or those annoying kid noises down the street.
You could focus on the footsteps of your virtual enemy while drowning out the mundane sounds of the real world. Or while watching a movie, you could hear the dialogue crystal clear over the sound of your popcorn munching.
This would be an immersive audio experience like no other.
But watch out!
The lines between reality and the digital world might be blurred too much.
Final thoughts
We’ve all wanted at some point to have more control over our sound environment.
This tech is a way to make it happen.
It’s not just some fancy earbuds.
It’s a way to reshape our relationship with sound all around us.
But there’s a dark side to it as well.
Somebody could take it to an extreme and isolate themselves from every bit of unwanted noise or be forced to do so.
Remember the episode of Black Mirror called “White Christmas”? That society had a technology known as “Z-Eyes” that let users block certain individuals from their sight and hearing. When someone was blocked, they appeared as a static-filled silhouette, and their voice became an unintelligible mumble.
Could we be witnessing the early stages of such a reality?
I hope not but it makes you wonder.
And then there’s one more issue, atrophy.
Muscles that are not used waste away.
Think about the time when you or someone you know had a limb immobilized, like an arm cast or a leg splint. Once the doctor removed it, did you notice how awkward it felt to use that limb again?
Similarly, couldn’t depriving people of the practice of distinguishing between sounds atrophy their hearing? Like a muscle, a sense that isn’t exercised can dull.
Many questions to ponder, but for now let’s marvel at the blend of AI and acoustics. It could make our lives a lot more pleasant.
Sound interactions won’t be the same after this, wouldn’t you agree?