'It won’t be so much a ghost town as a zombie apocalypse': How AI might forever change how we use the internet

AI agents could be the future of a very different internet. (Image credit: wombatzaa/Getty Images)

The rise of artificial intelligence (AI) has permeated our lives in ways that go beyond virtual assistants like Apple's Siri and Amazon's Alexa. Generative AI is not only disrupting how digital content is created but also starting to influence how the internet serves us.

Greater access to large language models (LLMs) and AI tools has further fueled the "dead internet" conspiracy theory. This idea, which emerged in the early 2020s, holds that the internet is actually dominated by AIs talking to and producing content for other AIs, with information made and spread by humans a rarity.

When Live Science explored the theory, we concluded that this phenomenon has yet to emerge in the real world. But people now increasingly intermingle with bots — and one can never assume an online interaction is with another human.

Beyond this, low-quality content created by tools like Sora, ChatGPT and others, ranging from articles and images to videos and social media posts, is driving a rise in "AI slop." This spans everything from Instagram Reels of cats playing instruments or wielding weapons to fake or fictional information presented as news or fact. The trend has been fueled, in part, by a desire for more online content to drive clicks, draw attention to websites and raise their visibility in search engines.

"The challenge is that a combination of the drive towards search engine optimization [SEO] and playing to social media algorithms has led towards more content and less quality content. Content that's placed to leverage our attention economy (serving ads, etc.) has become the primary way information is served up," Adam Nemeroff, assistant provost for Innovations in Learning, Teaching, and Technology at Quinnipiac University in Connecticut, told Live Science. "AI slop and other AI-generated content is often filling those spaces now."

Social media platforms like Instagram may often host poor-quality AI-generated content. (Image credit: Oscar Wong/Getty Images)

Mistrust of information on the internet is nothing new; false claims have long been made by people with particular agendas or simply a desire to cause disruption or outrage. But AI tools have accelerated the speed at which machine-generated information, images and data can spread.

SEO firm Graphite found in November 2024 that the number of AI-generated articles being published had surpassed the number written by humans. Although 86% of articles ranking in Google Search were still written by people, versus 14% by AI (with a similar split in the information chatbots served up), the finding still points to a rise in AI-made content. Citing a report that one in 10 of the fastest-growing YouTube channels features only AI-generated content, Nemeroff added that AI slop is starting to affect us negatively.

"AI slop is actively displacing creators who make their livelihood from online content," he explained. "Publications like Clarkesworld magazine had to stop taking submissions entirely due to the flood of AI-generated writing, and even Wikipedia is dealing with AI-generated content that strains its community moderation system, putting a key information resource at risk."

While an increase in AI content gives people more to consume, it also erodes trust in information, especially as generative AI gets better at serving up images and videos that look real, or information that seems human-made. As such, deeper mistrust of information, particularly of media brands and news, could lead to genuinely human-made content being dismissed as fake or AI-generated.

"I always recommend assuming content is AI-generated and looking for evidence that it's not. It's also a great moment to pay for the media we expect and to support creators and outlets that have clear editorial and creative guidelines," said Nemeroff.

Trust versus the attention economy

There are two sides to AI-generated content when viewed through the lens of trust.

The first is AI spreading convincing information that takes a degree of savvy to check rather than take at face value. But the open nature of the web means it has always been easy for incorrect information to spread, whether accidentally or intentionally, and there has long been a need for healthy skepticism and a willingness to cross-reference information before jumping to conclusions.

"Information literacy has always been core to the experience of using the web, and it's all the more important and nuanced now with the introduction of AI content and other misinformation," said Nemeroff.

The other side of AI-generated content is when it's deliberately used to suck in attention, even if viewers can easily tell it's fabricated. One example, flagged by Nemeroff, is a set of AI-generated images of a displaced child with a puppy in the aftermath of Hurricane Helene, which were used to spread political misinformation.

Although the images were quickly flagged as AI-made, they still provoked reactions, therefore fueling their impact. Even obviously AI-made content can be either weaponized for political motivations or used to capture the precious attention of people on the open web or within social media platforms.

"AI content that is brighter, louder and more engaging than reality, and which sucks in human attention like a vortex … creates a "Siren" effect where AI companions or entertainment feeds are more seductive than messy, friction-filled, and sometimes disappointing human interactions." Nell Watson, an IEEE member and AI ethics engineer at Singularity University, told Live Science.

There are fears that the AIs of the future will be fueled by synthetic content generated by other AIs, leading to an overall detachment from reality. (Image credit: Weiquan Lin/Getty Images)

While some AI content might look slick and engaging, it may prove a net negative for the way we use the internet, forcing us to question whether what we're viewing is real and to wade through a flood of cheap, synthetic content.

"AI slop is the digital equivalent of plastic pollution in the ocean. It clogs the ecosystem, making it harder to navigate and degrading the experience for everyone. The immediate effect is authenticity fatigue," Watson explained. "Trust is fast becoming the most expensive currency online."

There's a flipside to this. The rise of inauthentic content could be counterbalanced by people being drawn to content that's explicitly human-made; we could see better-verified information and "artisanal" content created by real people. Whether that's delivered by some form of watermark, or locked behind paywalls and in gated communities on Discord and other forums, has yet to be seen. How people react to AI slop, and their growing awareness of such content, will determine the shape of content in the future and how it ultimately affects people, Nemeroff said.

"If people find slop and communicate that slop isn't acceptable, people's consumer behaviors will also change with that," he said. "This, combined with our broader media diet, will hopefully lead people to make changes to the nutrition of what they consume and how they approach it."

Less surfing, more sifting the web

AI-made content is only one part of how AI is changing the way we use the internet. LLM-based agents already come built into the latest smartphones, for example. You'd also be hard-pressed to find anyone who hasn't indirectly experienced generative AI, whether it was suggesting information, offering to rework an email, generating an emoji or automatically editing a photo.

While Live Science’s publisher has strict rules on AI use (it certainly can't be used for writing or editing articles), some AI tools can help with mundane image-editing tasks, such as putting images on new backgrounds.

AI use, in other words, is inescapable in 2025. Depending on how we use it, it can influence how we communicate and socialize online — but more pertinently, it’s affecting how we seek and absorb information.

Google Search, for example, now serves up an AI Overview of aggregated and summarized information before the external search results, something the recently introduced AI Mode builds upon.

"We primarily used the internet via web addresses and search up to this moment. AI is the first innovation to disrupt that part of the cycle," Nemeroff added. "AI chat tools are increasingly taking up internet queries that previously directed people to websites. Search engines that once handled questions and answers are now sharing that space with search-enabled chatbots and, more recently, AI agent browsers like Comet, Atlas, Dia, and others."

On a surface level, this is changing the way people search and consume information. Even if someone types a query into a traditional search bar, it’s increasingly common that an AI-made summary will pop up rather than a list of websites from trusted sources.

(Image credit: Pop Paul-Catalin/Shutterstock)

"We are transitioning from an internet designed for human eyeballs to an internet designed for AI agents," Watson said. "There is a shift toward "Agentic workflows." Soon, you generally won't surf the web to book a flight or research a product yourself; your personal AI agent will negotiate with travel sites or summarize reviews for you. The web becomes a database for machines rather than a library for people."

There are two likely effects of this. The first is less human traffic to websites like Live Science, as AI agents scrape the information they decide a user wants, disrupting the advertising-led funding model of many websites.

"If an AI reads the website for you, you don't see the ads, which forces publishers to put up paywalls or block AI scrapers entirely, further fracturing the information ecosystem," said Watson. This fracturing could even see websites shutting down, given the already turbulent state of online media, further leading to a reduction in trusted sources of information.

The second is a situation where AI agents end up searching, ingesting and learning from AI-generated content.

"As the web fills with synthetic content — AI slop — future models train on that synthetic data, leading to a degradation of quality and a detachment from reality," Watson said. Slop or solid information, this all plays into the dead internet theory of machines interacting with other machines, rather than humans.

"Socially, this risks isolating us," Watson added. "If an AI companion is always available, always agrees with you, and never has a bad day, real human relationships feel exhausting by comparison. Information-seeking will shift from ’Googling’ — which relies on the user to filter truth from fiction — to relying on trusted AI curators. However, this centralises power; we are handing our critical thinking over to the algorithms that summarise the world for us."

It’s the end of the internet as we know it… and AI feels fine

Undoubtedly, AI has changed the ways humans use the internet and the World Wide Web it supports: it has touched every aspect of being online in 2025, from how we search for information to how content is generated and how the answers we ask for are served up. Even if you choose to search the web without any AI tools, the information you see could have been produced or handled by some form of AI.

As we’re currently in the midst of this change, it’s hard to be clear on what exactly the internet will look like as the trend continues. When asked about whether AI could turn the internet into a "ghost town," Watson countered: "It won’t be so much a ghost town as a zombie apocalypse."

It’s hard not to be concerned by this damning assessment, whether you're a content creator directly affected by AI or simply an end user who’s getting tired of questioning information.

However, Nemeroff highlighted that we can learn from the rise of social media in the late 2000s, which offers a precedent for the disruption and challenges such platforms created around how information is used and spread.

"Taking a few pages out of what we learned about social media, these technologies were not without harms, and we also did not anticipate a number of the issues that emerged at the beginning," he said. "There is a role for responsible regulation as part of that, which requires lawmakers to have an interest in regulating these tools and knowing how to regulate in an ongoing way."

When it comes to any new technology — self-driving cars being one example — regulation and lawmaking are often several steps behind the breakthroughs and adoption.

It's also worth keeping in mind that while AI poses a challenge, the agentic tools it offers can better surface information that might otherwise remain buried deep in search results or online archives, helping uncover sources that might not have thrived in the age of SEO.

The way humans react to AI content on the internet will likely govern how it evolves, with users potentially bursting an AI bubble by retreating to human-only enclaves on the web, or demanding a higher level of trust signals from both human- and AI-made content.
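
One way such trust signals could work is cryptographic provenance, in the spirit of industry standards like C2PA: a publisher signs its content at publication, and anyone downstream can verify the signature. Here is a minimal sketch, assuming the PyNaCl library; the key handling is illustrative rather than any particular standard.

    import hashlib
    from nacl.signing import SigningKey  # PyNaCl, an Ed25519 signature library

    publisher_key = SigningKey.generate()
    article = b"Human-written article body..."

    # Publisher side: sign a digest of the content at publication time.
    digest = hashlib.sha256(article).digest()
    signed = publisher_key.sign(digest)

    # Reader/platform side: verify the signature against the publisher's
    # public key, then check the digest matches the content received.
    # verify() raises BadSignatureError if the signature doesn't match.
    verified_digest = publisher_key.verify_key.verify(signed)
    assert verified_digest == hashlib.sha256(article).digest()
    print("provenance verified")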

"We find ourselves in a really challenging moment with this," concluded Nemeroff. "Being familiar with the environment and knowing its presence there is a key point to both changing the incentives around this as well as communicating what we value to the platforms that distribute it. I think we will start to see more examples of showing the provenance of higher quality content and people investing in that."

Roland Moore-Colyer

Roland Moore-Colyer is a freelance writer for Live Science and managing editor at consumer tech publication TechRadar, where he runs the Mobile Computing vertical. At TechRadar, one of the largest consumer technology websites in the U.K. and U.S., he focuses on smartphones and tablets. Beyond that, he taps into more than a decade of writing experience to bring people stories covering electric vehicles (EVs), the evolution and practical use of artificial intelligence (AI), mixed reality products and use cases, and the evolution of computing at both a macro level and from a consumer angle.
