Scientists use AI to encrypt secret messages that are invisible to cybersecurity systems
Scientists say that hiding secret messages using AI chatbots could lead to a world of iron-clad encryption.
Scientists have found a way to turn ChatGPT and other AI chatbots into carriers of encrypted messages that are invisible to cybersecurity systems.
The new technique — which seamlessly places ciphers inside human-like fake messages — offers an alternative method for secure communication “in scenarios where conventional encryption mechanisms are easily detected or restricted,” according to a statement from the researchers who devised it.
The breakthrough functions as a digital version of invisible ink, with the true message only visible to those who have a password or a private key. It was designed to address the proliferation of hacks and backdoors into encrypted communications systems.
But as the researchers highlight, the new encryption framework has as much power to do harm as good. They published their findings April 11 on the preprint database arXiv, so the study has not yet been peer-reviewed.
"This research is very exciting but like every technical framework, the ethics come into the picture about the (mis)use of the system which we need to check where the framework can be applied," study coauthor Mayank Raikwar, a researcher of networks and distributed systems at the University of Oslo in Norway, told Live Science in an email.
To build their new encryption technique, the researchers created a system called EmbedderLLM, which uses an algorithm to insert secret messages into specific areas of AI-generated text, like treasure laid along a path. The system makes the AI-generated text appear to have been written by a human, and the researchers say it is undetectable by existing decryption methods. The recipient then uses another algorithm that acts as a treasure map, locating the hidden letters and recovering the message.
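To make the "treasure map" idea concrete, here is a minimal toy sketch of keyed text steganography. This is not the authors' EmbedderLLM: a real implementation would steer an LLM's token choices so the cover text reads like natural chat, and the `FILLER` and `WORDS` lists below are hypothetical stand-ins. The sketch only shows the shared-password mechanism, where a password deterministically picks which word positions carry the secret's letters.

```python
import hashlib
import random

# Toy sketch of keyed text steganography, loosely inspired by the
# EmbedderLLM idea described above. A shared password seeds a PRNG
# that picks word positions; words at those positions start with the
# secret's letters, and everything else is filler. Secrets here must
# be lowercase ASCII letters.

FILLER = ["ok", "sure", "maybe", "later", "today", "fine", "right", "well"]
# Hypothetical wordlist: one word per starting letter.
WORDS = {c: f"{c}word" for c in "abcdefghijklmnopqrstuvwxyz"}

def _positions(password: str, n_secret: int, n_total: int) -> list[int]:
    """Derive the hidden-letter positions from the shared password."""
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_total), n_secret))

def encode(secret: str, password: str) -> str:
    n_total = 4 * len(secret)  # cover text four times as long as the secret
    slots = _positions(password, len(secret), n_total)
    words = [random.choice(FILLER) for _ in range(n_total)]
    for pos, ch in zip(slots, secret):
        words[pos] = WORDS[ch]  # a word starting with the secret letter
    return " ".join(words)

def decode(cover: str, password: str, n_secret: int) -> str:
    words = cover.split()
    slots = _positions(password, n_secret, len(words))
    return "".join(words[p][0] for p in slots)

cover = encode("meet", "hunter2")
assert decode(cover, "hunter2", 4) == "meet"
```

Without the password, a reader cannot reconstruct the positions, so the cover text looks like ordinary chatter; the real system hides the signal inside fluent LLM output rather than filler words.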
Users can send messages made by EmbedderLLM through any texting platform — from video game chat platforms to WhatsApp and everything in between.
"The idea of using LLMs for cryptography is technically feasible, but it depends heavily on the type of cryptography," Yumin Xia, chief technology officer at Galxe, a blockchain company that uses established cryptography methods, told Live Science in an email. "While much will depend on the details, this is certainly very possible based on the types of cryptography currently available."
The method’s biggest weakness comes at the beginning of a conversation: the sender and receiver must first exchange a secure password for encoding and decoding future messages. The system can work with symmetric LLM cryptography (where the sender and receiver share a unique secret code) or public-key LLM cryptography (where only the receiver holds a private key).
Once this key is exchanged, EmbedderLLM uses cryptography that is secure from any pre- or post-quantum decryption, making the encryption method long-lasting and resilient against future advances in quantum computing and powerful decryption systems, the researchers wrote in the study.
The researchers envision journalists and citizens using this technology to circumvent the speech restrictions imposed by repressive regimes.
"We need to find the important applications of the framework," Raikwar said. "For citizens under oppression it provides a safer way to communicate critical information without detection."
It will also enable journalists and activists to communicate discreetly in regions with aggressive surveillance of the press, he added.
Yet despite the impressive advance, experts say that implementation of LLM cryptography in the wild remains a way off.
"While some countries have implemented certain restrictions, the framework’s long-term relevance will ultimately depend on real-world demand and adoption," Xia said. "Right now, the paper is an interesting experiment for a hypothetical use case."
Lisa D Sparks is a freelance journalist for Live Science and an experienced editor and marketing professional with a background in journalism, content marketing, strategic development, project management, and process automation. She specializes in artificial intelligence (AI), robotics, and electric vehicle (EV) and battery technology, and also covers trends in semiconductors and data centers.