21 August 2025

Website traffic plummets: How AI search engines are changing the internet

In this (almost) final instalment of our ‘Humans & AI’ series, we look at the big picture: what happens to our digital ecosystems when AI stops being just a tool and becomes the authority people turn to for orientation?

Traffic is falling, but rankings are still good — so why aren't users coming? What sounds like a technical error is the new reality of the internet. AI search engines and LLMs are changing everything.

AI search eats up website traffic

Good to know

Crawlers are automated programs that scan the internet to systematically capture and evaluate website content. Search engines use them to prepare pages for their index, and AI systems use them to analyze content for training or for answering queries. Website operators can control which areas of a site crawlers may read – for example via the files robots.txt or llms.txt.
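For illustration, a minimal robots.txt that shuts out two common AI training crawlers while leaving the rest of the site open – the user-agent tokens are the ones the crawler operators themselves publish:

    # robots.txt – refuse AI training crawlers, allow everyone else
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: *
    Allow: /

llms.txt, by contrast, is a still-young proposal: a plain Markdown file at the site root that points language models to a site's most relevant content rather than locking them out.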

It starts inconspicuously. The analytics dashboard shows the familiar picture: green and red bars, percentages, week-over-week trends. Rankings remain stable, indexing status looks clean – yet traffic is plummeting. No algorithm update, no penalty. The answer appears directly on the results page, often formulated by AI. What seems like convenience is changing the web at its core.

As in Fassbinder’s “Fear Eats the Soul”, a new system is quietly eating through the surface: Artificial Intelligence is stripping websites of their visibility – slowly, silently, seamlessly. The transition from link lists to answer boxes can be measured. Gartner expects a 25% decline in traditional search volume by 2026. Already today, 58.5% of Google searches in the US end without a click on an external site – a phenomenon known as the zero-click rate.

User behavior on results pages is also changing noticeably. When Google displays an AI overview, users click significantly less often on external links. The Pew Research Center documents a click rate of 8% on pages with an AI overview, compared with 15% on traditional results pages – and in only about 1% of visits do users click a source inside the overview itself.

These numbers seem abstract until they reach everyday reality. Users today formulate complete problems instead of short keywords. They expect a recommendation, not just a results list. The click loses meaning because the answer is already there.

When Clicks Disappear – Economic Consequences

For publishers and platforms, this is no side issue. Chegg reported significant drops in non-subscriber traffic and revenue after the introduction of AI overviews and took legal action.

Reddit and Stack Overflow are experiencing similar shifts. Reddit faces the loss of significant Google traffic since the introduction of AI overviews, while Stack Overflow has recorded a dramatic decline in new questions – the platform that was the heart of the developer community for years is watching its content disappear into AI answers without users ever visiting the source.

In the news sector, editorial teams describe an abrupt decline in visibility as soon as answers dominate search results. A British study reports drops of up to 80% in individual topic clusters.

More Depth Instead of More Clicks – A New Funnel Emerges

At the same time, new patterns are emerging. Visitors who reach a site via large-language-model answers convert more often because they arrive pre-qualified. One analysis puts the increase in conversion probability at 4.4-fold.

However, this new pattern changes the entire customer journey. The classic funnel with its touchpoints – Awareness, Consideration, Decision – shrinks. For brands, this means visibility must be achieved not only in search results but directly inside AI answers.

The Invisible Readers: AI Crawlers on the Web

For content to appear in answers from ChatGPT, Perplexity, or Claude, automated programs – so-called crawlers – must read the pages, and systems must process them. In 2024 and 2025, a new wave of crawlers rolled across the web: GPTBot, CCBot, PerplexityBot, and other specialized agents. Many operators first notice them in their log files.
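Spotting that wave yourself is straightforward. A minimal sketch in Python, assuming a standard web server access log at logs/access.log in which the user agent appears verbatim in each line (the path and the bot list are illustrative):

    # Count requests from known AI crawlers in a web server access log.
    from collections import Counter

    AI_BOTS = ["GPTBot", "CCBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

    counts = Counter()
    with open("logs/access.log", encoding="utf-8", errors="replace") as log:
        for line in log:
            # A substring check on the raw line is enough for a first overview.
            for bot in AI_BOTS:
                if bot in line:
                    counts[bot] += 1

    for bot, hits in counts.most_common():
        print(f"{bot}: {hits} requests")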

How to handle these crawlers remains controversial. Platforms such as Wikipedia, Reddit, and Stack Overflow rely on fee models and licensed API access to limit uncontrolled training.

Infrastructure providers are also responding. In 2025, Cloudflare introduced the concept of an “AI Labyrinth”: misbehaving bots are lured into endless, AI-generated decoy pages instead of reaching productive content.

How AI Crawlers See the Web – And What They Miss

Many AI crawlers do not execute JavaScript. Content that only appears client-side is therefore invisible to them: single-page applications without server-side rendering, infinite-scroll pages, and dynamic catalogs remain problematic.
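What such a crawler actually receives from a typical single-page application is sobering – a schematic example:

    <!-- The entire HTML response of a client-rendered SPA: -->
    <body>
      <div id="app"></div>                <!-- empty shell, no readable text -->
      <script src="/bundle.js"></script>  <!-- content is assembled here, in the browser -->
    </body>

A human with a browser sees a finished page; a crawler that does not execute JavaScript sees an empty div.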

For companies, this means: visibility in AI answers begins with cleanly delivered HTML. Server-side rendering or static site generation lays the foundation; structured data supplies the semantics.
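What “cleanly delivered HTML plus structured data” can look like, sketched with a schema.org Article block in JSON-LD (all values are placeholders):

    <head>
      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "How AI search engines are changing the internet",
        "datePublished": "2025-08-21",
        "author": { "@type": "Organization", "name": "Example Publisher" }
      }
      </script>
    </head>

Because the block is plain text inside the HTML itself, crawlers and AI systems can read it without executing anything.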


The New Power Structure

Good to know

SERPs (Search Engine Results Pages): The result pages of search engines where relevant websites, ads, and other content are displayed after a search query. Modern SERPs often also contain direct answers, images, and other elements.

Server-Side Rendering (SSR): A technique for website creation where HTML content is fully generated on the server before being sent to the browser. This makes content immediately readable for crawlers and AI systems.

Previously, publishers optimized for keywords; today, relevance in conversational context matters. Which source an answer cites is decided by models based on authority, clarity, and structure – and business agreements. Some publishers negotiate licenses, others rely on technical barriers. Both show: visibility is being renegotiated.

This creates new strategic tensions. For many websites, this means: visibility is no longer primarily a question of keywords but of prompt relevance. The context of a question can determine whether a source is mentioned or whether its content disappears into an “authorless” AI answer.

User Comfort and Its Blind Spots

For people, the change feels convenient: less tab switching, faster orientation. But the comfort comes at the cost of transparency. Only a small share of users actively seek out the original sources behind AI answer boxes. The rest trust the summarized answer – often losing the connection to journalistic work, methodological detail, and original data.

Where Is the Web Heading?

Three paths are emerging:

  • Consolidation – few large providers dominate, licenses regulate data flow.
  • Open Ecosystem – decentralized models and open corpora keep diversity alive.
  • Hybrid Models – preferred sources receive prominent links in answers.

In all cases, what matters is whether the value of journalistic and professional work remains visible.


Outlook and Perspective

Infrastructure providers are searching for a new equilibrium. Cloudflare speaks openly about pay-per-request models and warns that the old bargain of “content for traffic” is eroding.
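At the protocol level, such a pay-per-request exchange could look roughly like this – a purely illustrative sketch built on the standard HTTP status code 402; the price header is hypothetical, not Cloudflare's actual interface:

    GET /article/123 HTTP/1.1
    Host: example.com
    User-Agent: ExampleBot/1.0

    HTTP/1.1 402 Payment Required
    X-Crawl-Price: 0.002 USD

The crawler either pays and retries or walks away; either way, the content no longer leaves the site for free.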

For companies, this doesn’t result in a simple checklist but a dual mandate: deliver technically reliable content and appear as a citable authority. Those who work this way remain relevant even when the click is only the last step, not the first.

The shift from search engine to answer engine is not a technological side effect but a paradigm shift. It changes how users find information, how companies build reach – and how content circulates on the web.

The winners of this transformation have three things in common: direct customer relationships, unique content, and technical AI readiness. The tools exist, the strategies are proven. Those who act now can emerge stronger from the change. Because the question is no longer whether the internet is changing – but whether you will help shape that change or be overwhelmed by it.