Let’s talk about the time the United Nations—guardian of international human rights, global peacekeeper, moral compass for the post-war world—shared biometric data of Rohingya refugees with the government they were fleeing from. Yes, you read that right.
Without informed consent.
The very people who fled genocidal violence in Myanmar, who put their trust in the UN for protection, were quietly catalogued and handed back—data-first—under the noble banner of “registration”.
Cue the press statement: “Statement on refugee registration and data collection in Bangladesh.” It reads like a lesson in passive voice and bureaucratic shoulder-shrugging. There’s talk of “ensuring safeguards” and “technical protocols” and “cooperation with the host government”. What’s not mentioned: how collecting biometric data without proper, informed consent, then sharing it with the regime accused of ethnic cleansing, might be—how do we put this gently—a catastrophic breach of trust and human rights.
Damage control by way of data science
In the wake of the revelation, the internet explodes. Articles bloom like mould. Think pieces. Expert panels. Hastily convened webinars. LinkedIn posts from data scientists wringing their hands while simultaneously polishing their CVs.
The collective mood? “We must do better. We need ethical frameworks. We need responsible AI. We need transparent systems for consent.”
A charming effort, really. Everyone rushing to solve the symptoms, to redesign the Band-Aid, while the gaping wound festers quietly in the corner.
The promises fly in:
- “We’ll improve our protocols!”
- “We’ll consult communities next time!”
- “We’ll draft another 70-page ethical guideline nobody reads!”
- “We’ve created a consent checkbox—translated this time!”
But let’s not kid ourselves. This isn’t about a missing consent form or a faulty translation. It’s about a system that incentivises data extraction, institutional compliance, and glossy impact reports over the actual safety of human beings.
The system is fine, apparently
Despite the public relations firestorm, the system remains serenely intact. The aid-industrial complex rolls on, well-lubricated by grants, targets, KPIs, and the deep, soothing belief that good intentions cover all sins.
Accountability? Rarely more than a footnote.
Structural change? That would require asking unpleasant questions about the real power dynamics in humanitarian work—who decides, who benefits, who’s rendered voiceless. Can’t have that.
Instead, we get more workshops, more framework white papers, more advisory boards full of people who went to the same conferences and agreed to be “deeply concerned”.
And yes—thank you, Michael—for actually pointing out that the system itself isn’t being addressed. You’d think that would be the starting point, not a radical position.
If it's not about power, it's PR
Let’s be brutally honest. The problem here isn’t a rogue data scientist or a sloppy consent form. It’s the fact that humanitarian institutions operate on a model where data is power, consent is conditional, and accountability is optional—particularly if the victims are stateless, displaced, or otherwise voiceless.
And when called out, the reflex is not to restructure—but to rebrand.
No, a better consent form won’t save the next Rohingya family whose biometric profile ends up in the wrong hands. No, another UN statement won’t stop a government from using “humanitarian data” to justify surveillance or persecution.
If you’re serious about fixing the problem, stop papering over it with toolkits and pledges. Start with this: people are not data points. Refugees are not datasets. And you don’t get to experiment with consent when the stakes are people’s lives.