In 1991, The Tech at MIT published an interview with Joseph Weizenbaum, the computer scientist best known for creating ELIZA and later becoming one of the field’s sharpest internal critics. Speaking with Diana ben‑Aaron, he dissected the role of computers in education, their entanglement with the military, and the ethical evasions of scientists.

Three decades later, his words are less a time capsule and more a mirror — the issues he named have not only persisted but mutated into modern forms, from AI hype cycles to tech‑military partnerships dressed up in start‑up chic. This post is a “then/now” rendering of that interview: his points in their original spirit, and how they look in the world of 2025.


Computers in education

Then:

When asked “What’s the role of computers in education?”, Weizenbaum essentially replied: “Why are you asking about computers before you know what education is for?” He saw the question as upside down. The job of schools, he argued, is first to teach mastery of language so students can think and communicate clearly; second, to ground them in their cultural heritage through history, literature, and the arts; and third, to equip them, through mathematics and basic observational skills, for life in a scientifically complex world.

In 1991, he thought American schools were failing these priorities — especially in language. The reasons weren’t mysterious: poverty, underqualified teachers, overcrowded classes, and a national budget skewed toward the military. Computers in classrooms, he warned, were a comfortable distraction, allowing politicians to claim reform while avoiding these structural problems.

Now:

We have gone from “a few computers in classrooms” to full‑scale dependency on Chromebooks, iPads, online testing platforms, and now AI tutors. Literacy gaps remain, teacher shortages deepen, and budgets for tech swell while libraries and arts programmes are cut. Digital literacy crises — inability to discern fact from fiction, online disinformation — pile on top of the old “Johnny can’t read” problem. Weizenbaum’s fear that technology would be used as a fig leaf instead of real reform looks less like a warning now and more like the business plan.


Ethics and education

Then:

For Weizenbaum, ethics starts with the ability to think clearly, and that starts with mastery of one’s own language. Without it, moral reasoning collapses into slogans. History, literature, and cultural study give students the context to understand and evaluate ethical challenges. Schools that fail to teach language fail to prepare their students for ethical life.

But his critique did not stop there. Even if computers could demonstrably improve reading scores, he insisted the question “Why can’t Johnny read?” must still be asked — and answered. Those answers are uncomfortable: Johnny might be hungry, might live in a violent environment where reading is irrelevant to survival, might be in an overcrowded class with an underqualified teacher. Asking why means confronting poverty, inequality, and misplaced national priorities — such as funding the military while cutting school meal programmes. It is much easier to flood classrooms with shiny devices and declare progress than to face the ethics of ignoring these root causes.

Now:

“Ethics” is now a conference buzzword — with AI ethics toolkits, diversity pledges, and digital citizenship curricula — but public ethical discourse is often conducted at meme‑level depth. The ability to construct a precise moral argument is not widespread, and the ethics “modules” in AI systems are mostly PR shields. Meanwhile, the underlying social realities Weizenbaum pointed to still exist — hunger, poverty, and systemic neglect — and are still being papered over with technology. If the public cannot debate right and wrong with clarity, and if policymakers prefer gadgets over justice, the ethical veneer in our tech won’t stop much of anything.


Computers as a conservative force

Then:

Contrary to the popular myth of “digital disruption,” Weizenbaum saw computers as fundamentally conservative — tools that preserve existing institutions. His banking example was telling: without computers, banks might have had to decentralise or reorganise to cope with more transactions. With computers, they kept the same centralised model and simply processed more. He admitted that when he helped design the first computerised banking system, he didn’t think about the social consequences — he was absorbed in the technical challenge and professional pride.

Now:

The same pattern is everywhere. “Digital transformation” often means making the same broken system run faster and cheaper without changing who it serves. Algorithmic hiring reproduces old biases. Predictive policing reinforces discriminatory patterns. Gig‑economy scheduling software squeezes more labour from workers with less security. The rhetoric is revolutionary; the reality is institutional inertia.
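
As a toy illustration of the hiring claim (a hypothetical sketch, not anything from the interview, with every name and number invented), here is how a model trained on historically biased decisions can reproduce them even when the protected attribute is dropped: a correlated proxy feature carries the old pattern forward.

```python
# Illustrative sketch only: synthetic data, assumed numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups; group 1 was historically disadvantaged.
group = rng.integers(0, 2, n)

# A genuinely job-relevant skill score, identical across groups.
skill = rng.normal(0, 1, n)

# A proxy feature (say, postcode) that correlates with group membership.
proxy = group + rng.normal(0, 0.3, n)

# Historical hiring decisions: skill mattered, but group 1 was penalised.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train only on "neutral" features; the protected attribute itself is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Compare predicted hiring chances for equally skilled candidates.
test_skill = np.zeros(1000)
for g in (0, 1):
    test_proxy = g + rng.normal(0, 0.3, 1000)
    prob = model.predict_proba(np.column_stack([test_skill, test_proxy]))[:, 1].mean()
    print(f"group {g}: mean predicted hire probability {prob:.2f}")

# Expected outcome: group 1 scores far lower despite identical skill,
# because the model recovers group membership from the proxy feature.
```

The point is not the particular model; it is that a system optimised to reproduce past decisions will, by construction, preserve whatever those decisions encoded.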


Military roots and dominance

Then:

Computers were “born to the military”: the first U.S. computer calculated ballistic trajectories, British machines broke wartime codes, and Konrad Zuse’s work in Germany aided aircraft design. Since WWII, most computer R&D has been funded by the military. Modern weapons systems depend on them, and advances in AI will make “smart” weapons deadlier. Weizenbaum dismissed the euphemism “defence” — computers are involved in killing people, full stop.

Now:

The tech–military relationship is tighter than ever. AI‑powered targeting, autonomous drones, and battlefield analytics are sold with the same UX polish as consumer apps. The vocabulary has expanded — “lethality enhancement” sounds cleaner than “better at killing” — but the substance is the same. If anything, the pipeline from university research to military deployment is shorter and smoother than it was in Weizenbaum’s day.


Rationalisations scientists use

Then:

Weizenbaum catalogued three common excuses for working on harmful technology. First: “If I don’t do it, someone else will” — a blanket moral waiver. Second: “Technology is neutral” — the comforting fantasy that tools have no inherent tendency toward certain uses. Third: “Spin‑off benefits” — justifying weapons work by the consumer products it might accidentally produce. He called these out as false in the real world, where military applications are predictable given the political and economic system. His analogy: asking doctors to develop bioweapons because they might yield new medicines. Most would refuse — and engineers should too.

Now:

The excuses persist, just re‑skinned for AI. “If we don’t build it, China will.” “The model is neutral; only the data is biased.” “Yes, it’s for the military, but look — now we can generate cat videos!” The pattern is identical: ethical responsibility outsourced to a hypothetical future and wrapped in spin.


Fears for the future

Then:

Weizenbaum’s greatest fear was that the younger generation would never grow old, annihilated by the technologies their societies cultivated — particularly military ones. This was not a vague doomsday scenario; it was grounded in the real presence of nuclear weapons on hair‑trigger alert.

Now:

The fear has diversified. The threat of sudden nuclear annihilation still exists, but it now sits alongside climate collapse, synthetic pandemics, destabilising cyber‑attacks, and autonomous weapons escalation. The common thread is the same: capabilities racing ahead of political maturity. The menu of potential endings is longer; the appetite for slowing down hasn’t grown.


Reflection in 2025

Looking across these “nows,” the pattern is painfully consistent: technology is still deployed as a shortcut past hard social problems rather than as a tool to confront them. In education, it replaces investment in teachers and literacy with subscriptions and screens. In ethics, it is used to signal virtue while ignoring the structural realities that make justice difficult. In industry, it preserves the power structures it was supposed to disrupt. In the military, it accelerates our ability to kill faster than our willingness to ask why we are killing. And in the lab, the same rationalisations still absolve people of responsibility, just dressed up in new jargon.

The uncomfortable truth in 2025 is that Weizenbaum’s warnings were not just prophetic — they have been normalised. What was once cautionary has become background noise. We still chase technological capability without matching it with political maturity, and we still congratulate ourselves for progress that leaves the roots of the problem untouched. The real test is whether we are willing to reverse the order of the questions — to decide what kind of society we want before deciding which machines to build. If we do not, then Weizenbaum’s greatest fear for the future may not just come true — it may arrive looking exactly like the world we have chosen.