Pau was late, again, thanks to the AI-run tram that insisted on pausing for precisely 12.4 seconds at each station “for optimal urban harmony.”
“Urban harmony my arse,” he muttered, stepping out into Barcelona’s midday sun, which was now neatly moderated by micro-reflective paint and smart algae rooftops. Somewhere, a city-wide AI had just nudged the temperature down a degree using a predictive cloud-seeding protocol. It was not magic. It just felt that way.
He arrived at the Digital Democracy Hall just in time to join the citizen jury’s latest review: the deployment of a new language model in Catalan public schools. Not an upgrade to GPT-7, mind you. This was a locally trained model called Flor. Designed, built, and trained in Catalunya. Built by people who knew the region, the language, the kids, and the parents—not just their metadata.
Pau tapped in his vote on the jury’s decentralised panel: yes, but with open-source oversight and a six-month ethics audit.
He was not anti-AI. He just preferred it to be house-trained.
Later, he wandered through the plaza to meet Noor, his partner, who was busy installing solar sensors on an AI-coordinated food garden. Once, these had been scrappy community projects. Now, with the help of open-source planning AIs and a very chatty local bot named Germinet, they produced 40% of the neighbourhood’s food.
The water use was down. The carbon footprint? Negligible. And no one had had to fill out a spreadsheet in three years.
“Germinet says your tomatoes are emotionally resilient,” Noor smirked.
“Tell Germinet I aspire to be the same.”
Symbiosis, people called it now. Pau remembered when that word was reserved for TED Talks and marketing fluff. But something had shifted around 2029—after the third global protest wave, the second deepfake war scare, and the quiet collapse of two megaplatforms under regulatory siege.
The European Union passed the AI Act with Teeth. The OECD coordinated a global AI Carbon Index. And towns across the world—from Oslo to Dhaka—began designing their own local models for weather, education, translation, and transport.
AI decentralised. Not because Big Tech wanted it to, but because people stopped asking for permission.
At home, Pau queued up a new musical collaboration by Holly Herndon + Voiceform Collective. It was part-human, part-AI, part-haunting. He liked that. There were still “pure” human-only artists—now with premium pricing and little holographic “Certified Human Touch” badges. But most people had made peace with collaboration.
Noor was teaching a course on AI literacy at the local cooperative. Not just prompt engineering nonsense, but the real stuff: What is a training set? What rights do you have over your data? How do you audit an algorithm that decides your mortgage?
The children were frighteningly fluent in these things. “Code-switching” now included toggling between human speech and vector embeddings.
Not everything was fixed, of course. Some governments still flirted with surveillance AI. Others pretended fusion was just around the corner (it was not). But carbon capture models were making gains, and traffic AI had genuinely reduced emissions in dozens of cities.
Even the United Nations had surprised everyone by launching the AI Peace Dividend Fund—redistributing tax revenue from high-emission models to fund education and clean water in under-resourced regions. It had not ended war. But it had made starting one more awkward.
Palantir, of all companies, had repurposed its predictive analytics to flag humanitarian crises two weeks before they hit the headlines. Pau still did not trust them. But trust, these days, was layered.
Critics remained. Some worried that decentralisation slowed innovation. Others argued it kept power in the hands of those with infrastructure.
They were not wrong. The debates were messy, democratic, and blessedly public. One could still shout down an AI system in a town hall and expect a policy revision by the following quarter. It was not fast, but it was fair-ish.
Companies had adapted too. The smart ones leaned into co-design, transparency, and what Pau called “nutritional labelling for models.” They disclosed energy use, training sources, and potential biases—like calories for code.
The dumb ones? They were still being sued.
Pau stood on the roof that evening, watching the city below shimmer in blues and greens. Drones zipped by, not as spies or couriers, but as part of the environmental health network—scanning air quality and checking pollen density.
He remembered the panic years. The job cuts. The misinformation wars. The long, aching disillusionment with “progress.” And yet here they were. Not in paradise. Not even close.
But surviving. Evolving. Learning to work with machines that did not understand love, but could sort recycling properly and write haikus about cheese.
That counted for something.
PS: “The machines are clever. But we are still in charge of wonder.” A scenario for futures of AI.