
Why traditional website search is no longer fit for purpose

  • Richard Chivers

    CEO

26 January 2026

Website search is one of the most heavily used features across large digital estates – and one of the most frustrating to get right.

In sectors like higher education and local government, search performance is taken seriously. Teams invest significant time analysing queries, reviewing zero-result searches, tuning synonyms, adjusting weightings and refining metadata. It’s time-consuming, ongoing work, and for many organisations it’s essential to delivering a usable website.

So when search still falls short, it’s tempting to assume something is being missed. Another tweak. Another rule. Another round of optimisation.

In reality, the problem runs deeper.

User expectations of search have changed faster than the model most websites are built on.

Expectations shaped elsewhere

Over the past decade, people’s expectations have been shaped by experiences outside organisational websites.

Google, Amazon, Netflix and Spotify don’t just retrieve results. They infer intent, personalise responses, and surface answers with very little effort from the user. Behind the scenes, they rely on enormous datasets, sophisticated algorithms, and business models designed to support that level of investment.

Those experiences have set a high bar. One that most public-sector, academic, and enterprise websites were never designed – or funded – to meet.

That gap has widened further as AI has become part of everyday search. With features like Google’s AI Overviews, users are now encountering AI-generated answers as the default, often without consciously choosing to use “AI” at all.

The result is a growing mismatch between what users expect search to do and what traditional, keyword-based site search can realistically deliver.

Search was built for a different kind of user

Traditional site search was built around a set of assumptions that no longer hold true:

  • It assumes users understand how your information is structured.
  • It assumes they can translate their need into the right keywords.
  • It assumes that finding a relevant page is the same as getting a useful answer.

On small, tightly managed sites, those assumptions can still work. But on large estates that have grown organically over time, they become fragile very quickly.

Universities, councils, and similar organisations often manage thousands – or in the case of some of our customers, hundreds of thousands – of pages. Content is created by different teams, at different times, for different audiences. Structures evolve. Priorities change. Content accumulates.

Search is expected to hold all of that together.

Where traditional search still works

It’s important to be clear: traditional keyword search isn’t broken in every context.

Where users know what they’re looking for, where content is well structured, or where the goal is to locate a specific document or form, keyword-based search can be fast, predictable, and effective. For focused sites or clearly defined sections of larger ones, it often remains the right tool for the job.

The problem arises when that same model is expected to act as the primary discovery mechanism across large, diverse, constantly changing digital estates – especially when user behaviour and expectations have moved on.

Why keyword-based search struggles at scale

At its core, traditional site search is driven by keyword matching and ranking rules. Results are ordered based on a mix of metadata quality, recency, popularity, and manual tuning.

That creates a number of structural challenges:

  • Users have to guess the right language
    People don’t think in internal taxonomies. They ask questions in plain, conversational terms that don’t always align with page titles or metadata.
  • Relevance doesn’t always mean usefulness
    A page can rank highly because it contains the right keywords, not because it actually answers the question. Users are left to open multiple results and piece together information themselves.
  • The cognitive load is pushed onto the user
    Large result lists force users to scan, compare, and refine queries. This increases abandonment and repeat searches – clear signals that search isn’t doing its job.
  • Maintaining quality doesn’t scale
    Effective site search requires constant work – managing synonyms, tuning boosts, refining structures and reviewing results. On large estates with many content owners, this quickly becomes unmanageable.

Over time, the quality of search degrades – not because teams aren’t trying, but because the underlying model doesn’t scale with complexity.
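
To make that concrete, here is a deliberately simplified sketch of the kind of keyword scoring and manual tuning described above. Everything in it – the field weights, the boosts, the synonym map – is hypothetical, and real engines are far more sophisticated, but the underlying model is the same: match terms, apply tuned weights, rank.

```python
# Illustrative sketch of keyword scoring with manual boosts.
# Field weights, boosts, synonyms and documents are all hypothetical;
# real search engines use far more sophisticated ranking, but the
# shape of the model is similar.

from dataclasses import dataclass, field


@dataclass
class Page:
    title: str
    body: str
    tags: list[str] = field(default_factory=list)
    recency_boost: float = 1.0   # manually tuned "freshness" weight
    popularity: float = 1.0      # e.g. derived from page views


# Hand-maintained synonym map: exactly the kind of tuning that becomes
# unmanageable across thousands of pages and many content owners.
SYNONYMS = {
    "bin": ["waste", "refuse", "recycling"],
    "fees": ["tuition", "charges"],
}


def expand_query(query: str) -> list[str]:
    """Split the query into terms and expand each with known synonyms."""
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(SYNONYMS.get(term, []))
    return expanded


def score(page: Page, terms: list[str]) -> float:
    """Count keyword overlap per field, weight it, then apply manual boosts."""
    tags_lower = {tag.lower() for tag in page.tags}
    title_hits = sum(t in page.title.lower() for t in terms)
    body_hits = sum(t in page.body.lower() for t in terms)
    tag_hits = sum(t in tags_lower for t in terms)
    base = 3.0 * title_hits + 1.0 * body_hits + 2.0 * tag_hits
    return base * page.recency_boost * page.popularity


def search(pages: list[Page], query: str) -> list[Page]:
    """Rank pages by score and drop anything with no keyword overlap at all."""
    terms = expand_query(query)
    scored = [(score(p, terms), p) for p in pages]
    return [p for s, p in sorted(scored, key=lambda sp: sp[0], reverse=True) if s > 0]


# A query like "report a missed bin collection" only scores well against pages
# that happen to use those exact words (or a hand-added synonym). The model
# matches vocabulary; it never interprets intent.
```

Every part of that sketch that improves results – the weights, the boosts, the synonyms – is something a person has to notice, decide on, and maintain.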

This isn’t just a UX issue

It’s tempting to frame search purely as a usability problem. In reality, the implications are much broader.

When search fails, users draw conclusions from incomplete or outdated information. Support teams handle avoidable queries. Accessibility suffers. Trust erodes.

In regulated environments like higher education and local government, this becomes a governance issue as much as a design one. Surfacing the wrong information – or failing to surface the right information – carries real reputational and operational risk.

How this shows up in practice

These limitations are easiest to see in everyday scenarios.

A prospective student asks:

“What are the entry requirements for a mature student with a foundation year?”

The answer spans course pages, admissions policies, and departmental guidance. Keyword search returns a list of links, but it doesn’t assemble a coherent response.

In local government, citizens don’t think in terms of service taxonomies. They ask:

“How do I report a missed bin collection?”

Relevant information is scattered across service pages, policies, and PDFs. Again, the burden is on the user to connect the dots.

In both cases, the issue isn’t a lack of content. It’s the inability of traditional search to interpret intent and bring information together in a meaningful way.

Why simply “adding AI” isn’t the answer

As these gaps become harder to ignore, the temptation is to bolt on the nearest AI solution.

But not all AI-driven approaches are appropriate for organisational websites.

Systems that generate fluent answers without clear grounding introduce new risks: confident but incorrect responses, limited transparency, and little ability to audit or correct outputs. For organisations responsible for publishing accurate, authoritative information, that kind of opacity isn’t acceptable.
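
A grounded approach, by contrast, keeps any generated answer tied to published content and always carries its sources, so it can be audited and corrected. The sketch below is illustrative only: it assumes some retrieval index and a text-generation function exist, and the names `retrieve`, `index.top_passages` and `generate` are hypothetical placeholders, not any particular product’s API.

```python
# Illustrative sketch of a grounded answer: respond only from retrieved
# passages, return the sources alongside the answer, and decline when
# nothing relevant is found. All names here are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Passage:
    url: str
    text: str


def retrieve(query: str, index) -> list[Passage]:
    """Return the most relevant published passages (assumed implementation)."""
    return index.top_passages(query, limit=5)


def grounded_answer(query: str, index, generate) -> dict:
    """Answer only from retrieved passages; decline if nothing relevant exists."""
    passages = retrieve(query, index)
    if not passages:
        return {"answer": None, "sources": [],
                "note": "No relevant published content found."}

    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    answer = generate(prompt)  # any text-generation call; the constraint is in the prompt
    return {"answer": answer, "sources": [p.url for p in passages]}
```

The important property isn’t the specific implementation; it’s that every answer can be traced back to the pages it came from.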

A shift in what search is expected to do

Search is no longer just about retrieving pages. It’s about helping people reach reliable answers with minimal effort.

That shift has consequences:

  • Discovery moves from navigation to interpretation.
  • Trust becomes as important as speed.
  • Content quality and governance matter more than algorithm tuning.

Traditional site search isn’t disappearing. In many cases, it remains useful and appropriate. But its role is changing.

For large, diverse digital estates, keyword-based search increasingly works best as a supporting capability – not the primary way people expect to find information.

That expectation has moved on. And search needs to move with it.
