AI systems analysis / long read

We’re Still Judging Artificial Intelligence by Its Worst Versions

Public perception of AI remains shaped by early disappointments, even as the technology has fundamentally evolved and become embedded in everyday systems.

Ai-Si.uk · Published 16 April 2026

We use it every day, often without realising it. So why are we still judging AI by yesterday’s failures?

By any reasonable measure, artificial intelligence has advanced at extraordinary speed. Yet public perception has not kept pace. For many people, the phrase ‘AI’ still conjures images of faltering voice assistants, misheard commands, robotic replies, and the familiar frustration of repeating oneself to a machine that never quite understood.

It is here, in this gap between memory and reality, that much of today’s scepticism towards AI takes root.

The Long Shadow of Early Disappointment

The first wave of consumer AI arrived not with quiet competence, but with great promise and equally visible limitations. Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana were presented as intelligent assistants. In practice, they often behaved more like glorified command menus.

They could set timers, check the weather, or answer simple factual queries. Beyond that, they frequently failed. Context was lost. Language had to be carefully structured. Anything slightly ambiguous resulted in confusion.

For millions of users, this was their first, and in many cases only, direct experience of AI. The result was not awe, but disappointment.

That disappointment has proven remarkably durable.

A Category Mistake

What has changed since then is not merely incremental improvement, but a shift in the underlying technology. Modern AI systems are not simply better versions of those early assistants. They are fundamentally different in capability.

Where voice assistants relied on narrow command structures, newer systems can interpret open-ended language, generate responses, and maintain context across a conversation. They are not flawless, but they operate on a different level of flexibility and usefulness.

Yet much of the public discussion continues to treat all AI as if it belongs to the same category.

This is a mistake. It is akin to judging today’s smartphones by the standards of early mobile devices, or dismissing modern navigation systems because early sat navs were clumsy and unreliable.

The Persistence of First Impressions

Human beings are not quick to update their mental models. First impressions, particularly negative ones, tend to stick. Psychologists have long observed that people weigh bad experiences more heavily than good ones, a bias that is amplified when those experiences are widely shared.

Early AI failures were not only common but highly visible, often amusing, and widely circulated. They became part of the cultural narrative around AI.

By contrast, the steady improvements that followed have been quieter. They lack the same viral appeal. A system that works well does not generate headlines in the same way as one that fails spectacularly.

As a result, public perception lags behind reality.

Why Incremental Change Isn’t Enough

There is another factor at play. Gradual improvement rarely shifts opinion. A slightly more accurate voice assistant does not fundamentally alter the user experience. It remains recognisably the same product, with the same limitations.

What changes perception is discontinuity — a clear sense that something is different in kind, not just degree.

We have seen this before. Early versions of Apple Maps were widely criticised, yet over time the product improved significantly. However, it took more than incremental updates to change public opinion. It required a broader shift in reliability and trust.

AI faces a similar challenge. Until users encounter a system that clearly breaks from their past experience, many will continue to assume that little has changed.

Between Hype and Reality

None of this is to suggest that scepticism towards AI is entirely misplaced. Concerns about accuracy, bias, and misuse remain valid, and not all modern systems live up to their promise.

But it is equally misleading to judge the present solely through the lens of past shortcomings.

AI today is neither the miracle its most enthusiastic advocates claim, nor the shallow gimmick its critics often assume. It is something more complicated: a rapidly evolving set of tools whose capabilities, limitations, and impacts are still unfolding.

It is also important to recognise that not everyone views AI through the same lens. Many people are already adapting to it, often without realising it. AI is increasingly embedded in everyday systems, from search engines and recommendation algorithms to navigation, translation, and fraud detection.

As a result, it is entirely possible for someone to claim they “do not use AI” while relying on it repeatedly throughout the day. This is not dishonesty, but a misunderstanding of how deeply integrated these systems have become behind the scenes.

The Quiet Inevitability of Adoption

Unlike earlier technologies that required conscious adoption, AI is often introduced indirectly. It arrives through services people already use, rather than as a standalone product they choose to engage with.

This creates a subtle but powerful dynamic. Adoption is not always a decision; it is a process that unfolds through infrastructure.

The speed of development only accelerates this effect. As capabilities improve and integration deepens, AI becomes less visible as a distinct tool and more like an underlying layer of modern life.

In this sense, the trajectory appears less optional than it might seem. Whether welcomed or resisted, AI is becoming a structural part of how digital systems operate.

Updating the Conversation

If public debate about AI is to become more grounded, it must begin by recognising this shift. The conversation cannot remain anchored to the frustrations of a previous generation of technology.

That does not mean abandoning caution. It means applying it to the systems that exist now, rather than those that existed a decade ago.

The risk, otherwise, is not only that we misunderstand AI, but that we fail to engage with it seriously — either dismissing its potential or overreacting to its perceived dangers.

In the end, the question is not whether AI has changed. It has. The question is whether our understanding of it has kept up.

At present, the answer appears to be no.

The debate, then, is no longer about whether people will use artificial intelligence, but whether they recognise that they already do.