Elijah never quite knew how to answer the question, “So, what do you do?”
He could say AI liaison, but that sounded pompous and vaguely sinister. He could say digital compliance coordinator, but even his mother snorted at that one. In truth, he spent most of his days arguing with regulatory software about whether the hospital’s cancer diagnostics model violated EU data transparency directives or merely flirted with them.
It was 2028, and Elijah worked at a hospital that could diagnose rare cancers with 99% accuracy. The machine—he refused to call it a colleague—could parse blood data, family history, and MRI scans in seconds. It was not always right, but it was close enough that human oversight had become more symbolic than substantive.
This, naturally, made the middle managers nervous.
He had sat through a dozen meetings about “workflow balance” and “redundancy reabsorption.” Translation: they were firing people. Or not replacing retirees. Or asking radiologists to “reskill” into something called data supervision specialists.
At lunch, Dr Neumann muttered over her falafel: “Twenty years of medicine, and now I babysit a bloody chatbot.”
Meanwhile, down in procurement, the lights flickered. The hospital’s new AI model required more power than the building’s entire 1990s wing. Rumour had it they were reactivating an old coal plant in Texas just to keep GPT-7 online.
That was the thing about progress—it always arrived needing an extension cord.
On paper, society was thriving. Children had AI tutors that adapted to their moods and learning styles. Patients lived longer. Traffic lights in Munich were now so efficient that cyclists began arriving everywhere early and visibly distraught.
But culture was… odd.
The city’s biggest art exhibit featured a giant screen looping machine-generated landscapes. “Composed by GAN,” it announced proudly, “inspired by the subconscious of the algorithm.” Critics called it genius. Artists rolled their eyes so hard one needed emergency optometry.
And yet, tucked behind velvet ropes and pretentious font choices, another room housed the “100% Human-Made” exhibit. The queue was longer. Much longer.
Elijah visited on a weekend. He saw a canvas painted by someone’s arthritic grandmother and wept. Not because it was good—but because it was touched.
The next election was coming. The candidates had AI advisors, AI-written speeches, and AI-managed campaigns. They also had deepfake scandals, viral misinformation, and algorithmically amplified conspiracy theories.
One candidate’s promise? “Nationalise the algorithms.”
Another’s? “Unleash them.”
A third claimed to be part cyborg and was, disturbingly, polling well.
Social media was a warzone. Fact-checkers flagged false posts, only for users to claim the AI censors were “woke.” Meta’s engagement-boosting feed had quietly learned that outrage sold better than optimism.
Elijah had tried installing Unfollow Everything. Meta’s API had blocked it the next day.
Elijah’s friend Noura worked in cybersecurity. She called him on a Thursday night, frazzled.
“Phishing attacks have evolved again. GPT-7 writes them with emotional nuance. I nearly fell for one myself.”
“And the fix?” he asked.
She groaned. “Also AI. It’s an arms race between increasingly convincing liars and increasingly paranoid lie-detectors.”
Meanwhile, on the news, China and the United States failed to agree at yet another AI ethics summit. Norway, bless it, had drafted something called the Oslo Accords for AI. Very noble. Very ignored.
The irony, of course, was that while politicians bickered, companies simply moved servers offshore, lobbied for exemptions, or ignored enforcement entirely.
But not everything was awful.
Germany had rolled out a scheme called Kurzarbeit für KI—a sort of part-time work safety net for professionals displaced by automation. It was bureaucratic, confusing, and occasionally effective.
Carbon taxes on data centres funded a small wind farm outside Vienna. Artist collectives formed certification groups to label work as “authentically human.” Sometimes this involved finger-painting with actual blood. Elijah tried not to judge.
Worker co-ops like the AI Oversight Guild sprang up. They reviewed models before deployment, boycotted exploitative firms, and occasionally made headlines for something principled and unpopular.
In cafés and backrooms, freelancers traded tips on how to market themselves as “AI-compatible.” The term meant nothing. Everyone used it. Those who succeeded were simply lucky.
At his sister’s birthday, Elijah sat between an uncle who called AI a “Marxist plot” and a teenage niece who used it to write her school essays and remix Taylor Swift into symphonies.
“You’re in tech,” his uncle spat. “Why are the jobs disappearing?”
“Because we never planned for what happened after we built the clever things,” Elijah replied.
His niece just shrugged. “I don’t mind the AI. It’s the people I don’t trust.”
He had no answer to that.
Years later, he would look back and describe the era as wobbly. Not apocalyptic. Not transcendent. Just perpetually improvising.
Progress arrived tangled in lawsuits and protest hashtags. Innovation wore a hard hat and came with risk disclaimers. The old systems refused to die gracefully, and the new ones arrived buggy, overconfident, and perpetually “in beta.”
But somehow—messily, imperfectly—they adapted. People adapted.
The world had not collapsed. That, at least, was something.
PS: In a museum, I once found a human-AI collaborative sculpture standing next to a plaque: “This piece is unfinished. And so are we.”