AI has a serious problem, but it’s probably not what you think. The usual complaints are valid—hallucinations, resource demands, logistical nightmares—all real challenges AI must overcome. But beneath these tangible, technical headaches lurks a thornier, more human issue: our collective lack of imagination.
We’ve grown lazy on a steady diet of incremental improvements. It’s not that incremental innovation is inherently bad, but it has quietly poisoned our expectations. We've lost sight of what technological advancement can truly mean. To put it bluntly: people point out LLMs' lack of imagination, forgetting that it reflects our own.
Think different?
Consider Apple. Once the rebel, it now has the corporate stench of inevitability. Apple didn’t invent smartphones or touchscreens, laptops or smartwatches. But Apple did something perhaps even more important: it made technology feel revolutionary. Computers were dull office tools until Apple made them objects of desire and creativity. Phones went from fringe geek gadgets to essential lifestyle accessories. Apple didn’t just refine products; it redefined our relationship to them. Apple made the future feel inevitable.
Yet those electrifying days seem firmly in the past. Apple still makes excellent products—products we reliably queue up for—but each iteration is now about optimisation rather than imagination. “Here’s to the crazy ones” increasingly feels like a eulogy. The recent Apple Vision Pro headset exemplifies this trend perfectly. There was hope, even quiet expectation, that Apple would finally make VR mainstream. But the magic is gone. Vision Pro didn’t spark a revolution—it landed with a polite golf clap. It’s not that the tech isn’t impressive; it’s that we no longer expect anything transformative. We hope for refinements because we are jaded against leaps. We don’t want wild, risky ideas. We want slightly better rectangles packaged as dents in the universe.
We blame Apple, but I think we also need to blame ourselves. Culturally, we’ve stopped believing the future can be radically different. We expect refinement, not revolution. We want innovation—but only in pre-approved, easily digestible forms. There's a prevailing sense of apathy, a belief that major problems are mostly solved, leaving us to tinker at the edges. We’ve convinced ourselves we’re near the pinnacle, that radical leaps are no longer possible or even desirable. It’s about wider cultural stagnation—a collective fatigue where we don’t dream big because we don’t believe anyone else will either.
Electric cars illustrate this perfectly. Common arguments against electric vehicles focus on engineering complexities and infrastructure barriers. And yes, these obstacles are enormous—but they're equally true for our existing car-centric world. Imagine if cars didn't yet exist, and you were proposing them. You’d be laughed out of the room. "Thousands of miles of roads? Fuel infrastructure? Coordinated traffic laws? Accidents, insurance, emissions? Public backlash?" It would never get off the ground—not because the technology wasn't possible, but because there would be no belief in it. Yet we live comfortably with exactly these complexities today because we suspended disbelief at the right moment.
Try this mental exercise with almost anything around you—streetlights, plumbing, coffee shops—and suddenly our world appears full of improbable wonders. It reveals how fragile our acceptance of "normal" really is. Most everyday conveniences faced countless valid objections before becoming standard. Our lives are improbable, even magical, precisely because people dared to believe.
Mental Models Matter
AI represents a generational paradigm shift. Step back for a moment from the micro-nuances and technical implementations. Modern LLMs are our first real foray into truly generalised computing. Although deterministic at their core, the scale they've reached has hit an inflection point, giving rise to emergent behaviours that feel like a whole new way of interfacing with technology.
We’ve long been used to computers delivering superhuman scale—speed, storage, repetition. But until now, that scale has always been bound to expected outputs. You can write an algorithm that generates an infinite number of numbers, but you’ll still only ever get a number. If you fed that same algorithm the ingredients of a Big Mac, you’d still get a number. It wouldn’t—couldn’t—shift context.
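The point about fixed output types can be sketched in a few lines of Python. This is a toy illustration, not a claim about any real system: a conventional function's output space is fixed in advance by its author, no matter what you feed it.

```python
# A conventional algorithm: whatever the input, the output is always a number.
def count_characters(data) -> int:
    """Classic computing: one input domain, one output type, no context shift."""
    return len(str(data))

# Feed it anything -- a sentence, a recipe, the ingredients of a Big Mac --
# and you still only ever get a number back.
print(count_characters(9000))                       # a number in, a number out
print(count_characters(["beef", "cheese", "bun"]))  # ingredients in... still a number

# An LLM-style interface, by contrast, accepts all of these through the same
# text prompt and returns output shaped by the request itself:
#   "How many letters are in 'strawberry'?"  -> an answer about counting
#   "Write a poem about a Big Mac."          -> a poem
# The output space is no longer fixed in advance by the function's author.
```

The contrast is the point: the conventional function cannot shift context, while the prompt-driven interface responds in whatever context the input establishes.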
This is the crux of what should make LLMs so important: the uncharted territory they open in our relationship with algorithms—in how, what, and why we use them. It doesn’t really matter whether they “know” how many letters are in strawberry, if they can draw convincing hands, or write objectively good poetry. What matters is that you can ask all those things—and the output you get is at least relevant to the input. It’s not just what LLMs can do. It’s that they respond in context—across contexts—through the same interface. That’s not an incremental upgrade; it’s a complete shift in the underpinnings of how we understand and use technology.
We’re so accustomed to incremental change that when something truly different comes along, we instinctively try to make it familiar. We cram it into existing workflows, then act surprised when it doesn’t fit. Our biggest failure with AI so far isn’t technical; it’s in how we choose to wield it. It’s a frantic gold rush to slap AI labels on old ideas. Rather than exploring how AI might fundamentally change our approach to problems, we're content to swap out old tech for new AI-infused versions. We're rebuilding yesterday’s monuments with tomorrow’s tools, blind to how fundamentally different the landscape could be. Perhaps the greatest tragedy is that the courage needed to dream at the scale of LLMs remains largely untapped by us, the consumers.
This isn't new—Silicon Valley has long favoured hammers desperately seeking nails. Uber and Airbnb, once disruptive innovators, quickly became stale templates: “Uber, but for groceries! Airbnb, but for cars!” Now everything is 'powered by AI', shoehorning in the same functionality rather than asking the bold questions about how this technology might fundamentally change our core approaches. And in our apathy we have been distracted by this. We’ve become so good at focusing on what’s directly in front of us, and at pointing out why things won’t work, that we’ve lost the will to ask: “What if it did?” or “How could it?” We can’t demand better because we simply can’t be bothered to imagine what better would look like. We’ve become comfortable in our stagnant relationships with technology, much like settling into a disappointing yet familiar romance. Tech giants return home with last-minute petrol station flowers—another incremental update, another minor tweak—knowing it buys them just a little more patience.
Dream bigger
To genuinely appreciate AI's potential, we must reshape our mental models, challenge our assumptions, and step into the thrilling unknown. Progress often demands unreasonable belief—and we’ve grown allergic to that. We must awaken from our slumber and reignite imagination as our essential infrastructure. Cautious pragmatism is valuable, but it can’t replace the wild ambition that brought us here in the first place.
Instead of fitting AI into our existing workflows, we need to start asking what it changes about them. What preconceptions does it challenge? If you were to start from scratch and tackle the space from the ground up, how would this shift in mental model affect your approach? Do you even need the software at all anymore? Ask the difficult questions. The answers might feel alien or uncomfortable, but they will lead to far more impactful, meaningful, and significant results.
AI invites us to reimagine what computers—and society—could become. To seize this opportunity, we must first rekindle our capacity to dream. Otherwise, we'll remain comfortably tweaking the familiar, forever missing the adventure waiting just beyond our horizon.