As AI systems gallop into the mainstream, humanity is left staggering somewhere behind, still deciding whether it is riding a horse or being trampled by one. The outcomes are not fixed. But the trajectories are becoming harder to ignore.
We explored some speculative futures within the box—Human-AI symbiosis, Turbulent coexistence, and Dystopian displacement—as thought experiments and cautionary tales. Each story reflects plausible developments rooted in today’s technological, political, and economic fault lines. None are entirely fiction. And none are inevitable.
We also explored an out-of-the-box scenario, the oblivious/congruent kind that arises when the self, the other, or the context is ignored. It looks like a rusting humanoid robot being used as a coat rack in a village pub, or an architecture demonstrating that AI progress is evitable after all.
Scenario: Human-AI symbiosis (in-the-box best case)
Humanity does not disappear. It upgrades.
In this best-case scenario in-the-box, AI becomes a collaborative tool rather than a replacement. Artists co-create with algorithms. Decision-making systems support—rather than override—human judgement. Policies prioritise human well-being, equity, and the right to disconnect. Work becomes more meaningful, not less, as automation offloads drudgery and amplifies human insight.
This future assumes global cooperation on ethical AI development, major public investments in education and reskilling, and robust legal frameworks that prevent data monopolies and algorithmic discrimination. Think the EU’s GDPR meets MIT’s Media Lab, with a dash of open-source utopianism.
Its likelihood is moderately low because it requires sustained global collaboration and democratic governance in a geopolitical moment allergic to both. It is plausible in pockets (the EU, some cooperatives), but systemic global alignment is a long shot unless crisis forces unity.
Scenario: Turbulent coexistence (in-the-box likely case)
AI is neither salvation nor doom. It is an erratic roommate with too much control over the thermostat.
In this likely in-the-box scenario, AI integrates into society unevenly. Some sectors flourish—like biotech, logistics, and education—while others crumble or hollow out. Legal frameworks lag behind innovation. Disinformation blooms. Labour markets swing unpredictably between boom and bust. Surveillance becomes normalised but not total. Culture becomes fragmented, attention spans shrivel, and trust erodes.
This is a world of polarised techno-realities: Scandinavian-style digital democracies exist side-by-side with algorithmic autocracies. AI is used both for oppression and liberation, often simultaneously.
Its likelihood is high. This scenario seems to be the messy middle path we are currently hobbling down: innovation outpaces regulation, platforms consolidate power, and governments react rather than lead with knowledge, foresight, and responses grounded in both. Expect friction, backlash, and instability, especially as energy demands and ecological impacts mount.
Scenario: Dystopian displacement (in-the-box worst case)
You will be replaced by a chatbot. Please do not resist. It only makes the quarterly reports worse.
This is an in-the-box nightmare scenario: A future where AI is weaponised by monopolies and states to maximise profit and control. Human creativity is marginalised. Gig work becomes the new serfdom. Surveillance capitalism merges with behavioural nudging, and “democracy” is reduced to UX design. AI-generated content floods the media sphere, erasing provenance and truth alike. Education becomes adaptive—and hollow. Reskilling fails. UBI never arrives.
Those without digital literacy or elite connections fall through the cracks. In extreme cases, the social contract itself collapses under the weight of unemployment, inequality, and cognitive overload.
Its likelihood is rising. If current trends in platform monopolisation, weak governance, and climate inertia continue, this is not just plausible; it is probable. It does not take a Skynet. Just apathy, profit incentives, and poor foresight.
Key cross-cutting issues
These three futures differ in tone and trajectory, but they hinge on the same unresolved dilemmas:
                      Inclusive
                      Reskilled
                      Protected
                          ↑
                          │
Decentralised             │             Centralised
Democratic     ◄──────────┼──────────►  Technocratic
Human-centred             │             Extractive
                          │
                          ↓
                      Disrupted
                      Polarised
                      Neglected
Labour markets
No matter the scenario, work as we know it is upended. In Symbiosis, jobs evolve. In Coexistence, they fragment. In Displacement, they vanish. The pace of reskilling and the deployment of meaningful safety nets, such as UBI or community-based economies, will define who adapts and who gets left behind.
Creativity vs. automation
The line between tool and replacement is thinning. In Symbiosis, AI enhances expression. In Turbulence, authenticity battles virality. In Displacement, machine-generated culture floods the zone. Copyright, attribution, and societal appetite for “real” human creativity will shape whether artists thrive or starve.
Democracy
AI can inform public policy or erode it entirely. Think China’s algorithmic control versus the EU’s human-centric model. Governance structures will determine whether we build an informed electorate or an emotionally manipulated user base.
Key variables shaping these scenarios
- Governance: Will AI be guided by democratic values or corporate-state alliances?
- Energy innovation: Can AI evolve without cooking the planet?
- Labour adaptation: Will people be supported to transition or abandoned to fend for themselves?
- Public agency: Will citizens retain the ability—and the right—to refuse harmful uses of AI?
We are not locked into these futures. But the longer we leave these questions to be answered by profit margins and proprietary algorithms, the narrower our range of futures becomes. It is not just a matter of what AI can do. It is what we are willing to let it do—to us, for us, and with us.
Scenario: The Great Pullback (out-of-the-box case)
AI promised to save the world, but forgot to factor in the electricity bill. Now we’re politely, yet firmly, showing it the door.
This is an “enough is enough” scenario, where society collectively raises an eyebrow at AI’s excesses and stages the most British of rebellions: an orderly yet devastating retreat. Having witnessed data centres guzzling power like a student at an all-you-can-drink brunch, job markets collapsing faster than a supermarket shelf in a panic-buying spree, and culture becoming as synthetic as a Wetherspoons carpet, the masses revolt with startling civility.
Its likelihood is medium and rising.
The seeds are visible in educators resisting ChatGPT, artists fighting data scraping, and EU-style AI audits. If energy crises or mass unemployment accelerate, this could become the dominant path—a messy but deliberate unwinding of AI’s worst excesses.
And the irony? The pullback itself is enabled by AI. Decentralised open-source models (like Europe’s Mistral) expose corporate monopolies, while activists use AI to track and boycott high-emission data centres.
Just scenarios, these. All of them. Just hat-hangers for thinking things through. The real question is: which direction are we encouraging, consciously or not? Pick your story. Write another. Do not wait for algorithms to write the ending.