In one corner, we have Demis Hassabis, CEO of DeepMind, telling The Guardian in 2024 that AI will be “10 times bigger than the Industrial Revolution.” Disruption is inevitable, he says, but humanity will adapt, as it always does. The challenge is simply to manage the turbulence.
In the other corner, we have Joseph Weizenbaum, reflecting in the 1980s on his earlier work designing a banking system for processing physical cheques. It was an intricate, technically satisfying project. Only years later did it occur to him that no one had asked whether automating cheque processing at scale was socially desirable, or what knock-on effects it might have. “It never occurred to me to ask,” he admitted.
Hassabis starts from the premise that AI should happen — the question is how fast. Weizenbaum learned the hard way that asking “Should this happen?” is the question most professionals never even think to put on the table.
The seductive pull of the technical challenge
If you have ever been in a team of engineers, developers, or scientists faced with a thorny technical problem, you know the look: that glint in the eye when the constraints are tight, the system is complex, and the solution will require ingenuity. The work becomes a puzzle, a game. You get your dopamine hits from solving sub-problems, optimising processes, and making something elegant work.
Dopamine is not a metaphor here; it is a biochemical bribe. With every small breakthrough, every clever workaround, every time you finally get the blasted thing to compile without errors, your brain hands out a little reward. The more complex the problem, the more hits you get. Before long, you are not working to solve the right problem; you are working to keep that reward cycle going.
Weizenbaum’s cheque-processing system was exactly this kind of challenge. In hindsight, he realised that neither he nor his colleagues had asked whether the change would benefit the banking system’s customers or workers, or whether automating at that scale would alter economic relationships in undesirable ways. The joy of solving the problem eclipsed the question of whether it was worth solving. And once the technical fun begins, hardly anyone is willing to be the one to call a halt.
The inevitability narrative
Hassabis’s framing is a perfect example of the “inevitability” narrative. It says: the tech is coming, and it will be massive. Your only choices are how fast you adopt it, how you regulate it, and how you adapt to it. This mindset subtly removes an entire category of debate — whether we should build a technology at all, or whether we should deliberately limit its capabilities.
Once you accept inevitability, “problem-first” thinking becomes irrelevant. The machine is already in motion; your job is to avoid getting crushed by it.
Institutional and cultural barriers
The truth is, most organisations do not reward asking “Do we need this?” They reward delivery, novelty, and efficiency. Shipping a feature earns applause; stopping a project because it is solving the wrong problem is often career-limiting.
Competition intensifies this. In technology, there is always a lurking fear: “If we do not build it, someone else will.” The result is a race to ship first, with no one willing to pump the brakes long enough to re-examine the map.
And if you do ask “What problem are we solving? Who asked for this?”, the answer often comes from inside the building, from marketing decks and strategic roadmaps, not from the people who will live with the consequences.
The gap between capability and need
We have seen entire waves of technology that were more about capability than need. Blockchain-for-everything in the mid-2010s brought us endless proofs of concept for problems that were already well solved with simpler tools.
Contrast that with targeted medical devices developed in consultation with the communities they serve — devices designed not because they were “cool” but because there was a clear gap in access or outcomes. These cases are rare, but they show that problem-first thinking works when it is baked in from the start.
Weizenbaum’s warning and its modern relevance
After ELIZA, his 1966 natural language program, Weizenbaum became a prominent critic of automating human judgment. He warned of what came to be known as the “ELIZA effect”: people attributing more understanding to machines than the machines actually possess. That effect, he argued, was particularly dangerous in decision-making domains like law, medicine, and governance.
He argued that not every solvable problem should be solved by a machine, and that the first step in responsible design is to ask what a technology will displace, distort, or destroy — before you build it.
Shifting the professional mindset
If we want more professionals to start with “what is needed,” we must redesign the environment in which they work. A lone engineer pausing to ask an uncomfortable question will always be overruled by deadlines, deliverables, and the ever-present hum of competition. The incentives have to change.
That starts by putting ethicists and domain experts into design teams from day one, not as an afterthought. Their role is not to veto ideas but to make sure the problem is understood in its social, legal, and human context before a single prototype is built.
It also means recognising problem-framing as a professional skill, worthy of the same respect and reward as problem-solving. The ability to articulate why a project exists, whose needs it serves, and what its unintended consequences might be is just as valuable as the ability to make the code compile or the hardware run.
And it requires introducing formal checkpoints where teams must define the problem clearly before funding or green-lighting the work. These are not bureaucratic hurdles; they are the moments when the question “Is this the right problem to solve?” has to be answered in daylight, not buried under the glow of the build pipeline.
This is not about slowing innovation for the sake of it. It is about ensuring that the speed we prize so highly is aimed in a direction that benefits more than the people holding the patents. Because if we cannot even slow down long enough to ask what is worth building, we are not innovators at all — we are just very efficient lemmings.
The choice we keep missing
The story we keep telling ourselves — that technological revolutions are inevitable — is a comforting lie for those building them. It removes the awkward responsibility of choice. It suggests that engineers and executives are not decision-makers at all, but merely passengers on a runaway train, bravely trying to adjust the lighting in the carriages.
In reality, AI, like every technology before it, is the product of deliberate human choices, shaped by funding, incentives, and political will. “Inevitable” is simply the word we use when enough people with enough power have already decided it will happen.
The real leap forward is not in how quickly we can build, but in how deliberately we choose what to build. Hassabis may be right that the coming change is bigger than the Industrial Revolution. The question is whether, thirty years from now, today’s builders will be looking back with Weizenbaum’s regret — or with the rarer satisfaction of having asked the hard questions first, and chosen their projects like responsible adults rather than overexcited puzzle-solvers.