Your own voice could be your biggest privacy threat. How can we stop AI technologies exploiting it?
Voices contain countless cues about their owners, and new research suggests that computers might use them to facilitate a range of bad behaviors.
If you know what to listen for, a person's voice can tell you about their education level, emotional state and even profession and finances, more than you might imagine. Now, scientists warn that voice-processing technology could be used for price gouging, unfair profiling, harassment or stalking.
Humans are attuned to obvious cues such as fatigue, nervousness and happiness. Computers can pick up the same signals, but with far more information and much faster. A new study claims that intonation patterns or your choice of words can reveal everything from your personal politics to the presence of health conditions.
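As a rough illustration of the kind of signal a computer can pull from raw audio, here is a minimal sketch (assuming NumPy, with a synthetic tone standing in for a voiced frame) that estimates the fundamental frequency, one of the intonation cues the study discusses, by autocorrelation:

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=60, fmax=400):
    """Estimate the fundamental frequency (pitch) of a voiced frame
    by finding the strongest peak in its autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)  # shortest plausible vocal period
    lag_max = int(sr / fmin)  # longest plausible vocal period
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / best_lag

sr = 16_000
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 220 * t)  # synthetic 220 Hz "voice"
print(f"{estimate_pitch(frame, sr):.0f} Hz")  # roughly 220 Hz
```

Real systems go far beyond pitch, of course, combining hundreds of such features with machine learning, but the point stands: these measurements take milliseconds on commodity hardware.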
The research, published Nov. 19, 2025 in the journal Proceedings of the IEEE, highlights grave concerns about the technology's potential for privacy violations and unfair profiling.
While voice processing and recognition technology presents opportunities, Tom Bäckström, an associate professor of speech and language technology at Aalto University and lead author of the study, sees the potential for serious risks and harms. If a corporation can infer your economic situation or needs from your voice, for instance, it opens the door to price gouging or discriminatory insurance premiums.
And when voices reveal details like emotional vulnerability, gender and other personal information, cybercriminals or stalkers can identify and track victims across platforms and expose them to extortion or harassment. These are details we transmit subconsciously when we speak, and that listeners respond to before anything else.
Jennalyn Ponraj, founder of Delaire and a futurist working on human nervous system regulation amid emerging technologies, told Live Science: "Very little attention is paid to the physiology of listening. In a crisis, people don't primarily process language. They respond to tone, cadence, prosody, and breath, often before cognition has a chance to engage."
Watch your tone
While Bäckström told Live Science that the technology isn't in use yet, the seeds have been sown.
"Automatic detection of anger and toxicity in online gaming and call centers is openly talked about. Those are useful and ethically robust objectives," he said. "But the increasing adaptation of speech interfaces towards customers, for example — so the speaking style of the automated response would be similar to the customer's style — tells me more ethically suspect or malevolent objectives are achievable."
He added that although he hasn't heard of anyone caught doing something inappropriate with the technology, he doesn't know whether it's because nobody has, or because we just haven't been looking.
We must also remember that our voices are everywhere. Every voicemail we leave, and every call recorded "for training and quality," adds to a digital record of our voices that rivals the rest of our digital footprint of posts, purchases and other online activity.
If, or when, a major insurer realizes they can increase profits by selectively pricing cover according to information about us gleaned from our voices using AI, what will stop them?
Bäckström said even talking about this issue might be opening Pandora's box, making both the public and "adversaries" aware of the new technology. "The reason for me talking about it is because I see that many of the machine learning tools for privacy-infringing analysis are already available, and their nefarious use isn't far-fetched," he said. "If somebody has already caught on, they could have a large head start."
As such, he's emphatic that the public needs to be aware of the potential dangers. If not, then "big corporations and surveillance states have already won," he added. "That sounds very gloomy but I choose to be hopeful I can do something about it."
Safeguarding your voice
Thankfully, there are potential engineering approaches that can help protect us. The first step is measuring exactly what our voices give away. As Bäckström said in a statement, it's hard to build tools when you don't know what you're protecting.
That idea has led to the creation of the Security And Privacy In Speech Communication Interest Group, which provides an interdisciplinary forum for research and a framework for quantifying information contained in speech.
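The group's goal of quantifying what speech reveals can be framed in information-theoretic terms. As a toy illustration (the distribution below is made up, not from the paper), Shannon entropy measures how many bits an inferred attribute carries on average:

```python
import math

def bits_of_information(probabilities):
    """Shannon entropy: the average number of bits an attribute
    reveals, given the probability of each possible value."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical example: four income brackets inferred from a caller's
# voice, each roughly equally likely across the population.
print(bits_of_information([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
```

Summed across many such attributes (age bracket, health markers, emotional state), the bits add up, which is one way to make "what our voices give away" a measurable quantity rather than a vague worry.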
From there, it's possible to transmit only the information strictly necessary for the intended transaction. Imagine a system converting your speech to text to capture just the raw information needed: either the operator at your provider types the information into their system without recording the actual call, or your phone converts your words to a text stream for transmission.
As Bäckström said in an interview with Live Science: "The information transmitted to the service would be the smallest amount to fulfill the desired task."
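That data-minimization idea can be sketched in a few lines: transcribe on-device, then transmit only the fields the task needs. The field names and patterns here are hypothetical, purely for illustration:

```python
import re

def minimize(transcript: str) -> dict:
    """Extract only the fields a customer-service task needs from an
    on-device transcript; the raw audio never leaves the phone."""
    fields = {}
    # Hypothetical pattern: a 6-10 digit account number.
    account = re.search(r"\baccount (?:number )?(\d{6,10})\b", transcript)
    if account:
        fields["account"] = account.group(1)
    # Hypothetical closed set of intents the service recognizes.
    for intent in ("cancel", "renew", "upgrade"):
        if intent in transcript.lower():
            fields["intent"] = intent
            break
    return fields

print(minimize("Hi, I'd like to cancel the plan on account number 12345678."))
# → {'account': '12345678', 'intent': 'cancel'}
```

Everything else in the caller's voice, including their accent, mood and health cues, is simply never sent, which is the "smallest amount to fulfill the desired task" Bäckström describes.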
Beyond that, he said, if we get the ethics and guardrails of the technology right, then it shows great promise. "I'm convinced speech interfaces and speech technology can be used in very positive ways. A large part of our research is about developing speech technology that adapts to users so it's more natural to use."
"Privacy becomes a concern because such adaptation means we analyze private information — the language skills — about the users, so it isn't necessarily about removing private information, it's more about what private information is extracted and what it's used for."

Having your privacy violated is an awful feeling, whether it's being hacked or seeing social media push ads that make you think a private conversation wasn't so private. Studies like this, however, show we've barely scratched the surface of how we can be targeted, especially through something as intimate and personal as our own voice.
With AI improving and other technologies becoming far more sophisticated, we don't truly have a grasp on how this will affect us, or how it might be abused by certain forces to exploit us. Although consumer privacy has been massively undermined in the last few decades, there's plenty of room left for what we hold close to be commodified at best or, in the worst cases, weaponized against us.
Privacy in speech technology. (2025). Proceedings of the IEEE. https://ieeexplore.ieee.org/document/11261339
Drew is a freelance science and technology journalist with 20 years of experience. After growing up knowing he wanted to change the world, he realized it was easier to write about other people changing it instead. As an expert in science and technology for decades, he’s written everything from reviews of the latest smartphones to deep dives into data centers, cloud computing, security, AI, mixed reality and everything in between.