Ten years ago, Nonprofit AF, in Weaponized data: How the obsession with data has been hurting marginalized communities, warned that nonprofits’ data obsession could dehumanise and harm marginalised communities: flattening lived experience into reductive metrics, ignoring power dynamics, and prioritising funder dashboards. Now it’s 2025. Technology, AI, data regulation, and global politics have transformed, but many of those threats have only intensified. What changed?


What has improved

Stronger legal protections

Since GDPR (2018), Europe and other jurisdictions have enacted data protections that at least nominally strengthen consent and individuals’ rights. Data-sharing rules and transparency mandates force nonprofits to be somewhat more accountable.

Ethics discourse and accountability

Discourse on data justice, algorithmic accountability, and community-led evaluation has gained traction. Groups like Algorithmic Justice League and Data for Black Lives have pushed back publicly against biased AI systems.

Indigenous data sovereignty

The global Indigenous Data Sovereignty (IDSov) movement has accelerated. The 2025 Global Indigenous Data Sovereignty Conference - ALIGN in Canberra united over 240 Indigenous delegates around the CARE principles (Collective benefit; Authority to control; Responsibility; Ethics), moving them from rhetoric into action.

UNESCO’s December 2023 Report and guidelines for indigenous data sovereignty in artificial intelligence developments laid out participatory AI guidelines centring Indigenous priorities in Latin America.

Emerging data governance structures

European initiatives like Gaia‑X and the International Data Spaces Association (IDSA) offer federated, standards-based data-sharing frameworks that promise greater technical control and sovereignty over cloud infrastructure. But …

Early in the Gaia-X project, open source software advocates warned against corporate capture of Gaia-X by large companies. They referred to Gaia-X as a possible Trojan horse of big tech in Europe, and made comparisons to the French State-sponsored cloud project Andromeda that had been launched 10 years earlier and which resulted in public funds benefiting large non-European industry players. - Wikipedia: Gaia-X


What has worsened

AI‑driven decision-making with no transparency

Predictive AI tools in policing, welfare, child services, and housing now routinely assign risk scores without explainability. In Allegheny County, Pennsylvania, an AI child welfare screening tool is under DOJ investigation for flagging parents with developmental disabilities, allegedly contributing to wrongful removals.

In Chicago, community organisations are combating predictive policing algorithms that reinforce systemic bias under the guise of data‑driven reform.

Dashboards over human stories

Funders continue to demand dashboard‑ready KPIs and real‑time metrics. Nonprofits strain to feed these systems, often at the expense of nuanced qualitative evidence, eroding local insight and prioritising what can be easily measured.

Big Tech-owned infrastructure dominates

Platforms like Salesforce, Google Workspace, and Microsoft 365, along with the cloud providers beneath them, have become integral to nonprofit operations. This reliance increases the risk of surveillance, vendor lock‑in, and exposure of sensitive beneficiary data via terms of service no one reads.

Cross-border data exposure

In many regions, data hosts are subject to foreign surveillance regimes. Ukraine’s data-localisation requirement, for example, was relaxed just before the Russian invasion, and government data was moved into European data centres. While intended as sovereign protection, the move arguably also diluted control.


What is new since 2015

Generative AI, deepfakes and synthetic manipulation

Biased or poisoned synthetic datasets now influence AI outputs. Algorithms trained on skewed or fabricated data can invisibly reinforce harmful stereotypes and decisions—with little oversight.

Behavioural and biometric tracking

Under the guise of efficiency, aid agencies increasingly require biometric data (fingerprints, iris scans, face scans) from refugees and migrants. In Bangladesh’s Rohingya camps, for example, refusing these scans has resulted in suspension of food and fuel aid. UNHCR‑supported programs in Jordan and Egypt use iris recognition to authenticate cash-assistance withdrawals. Human Rights Watch and affiliated critics highlight that this data collection curtails meaningful consent and exposes vulnerable populations to heightened risk when access to basic goods depends on it. At a broader level, biometric systems are also embedded in African migrant-tracking schemes tied to European border enforcement, effectively facilitating surveillance under humanitarian pretence.

Emotion‑recognition AI, flawed, biased, and built on weak scientific foundations, has made classrooms and protest spaces the latest sites of digital profiling. In China and India, systems that ‘read’ emotions during lessons push students to perform for the camera, while in U.S. schools, tools like Gaggle flag emotional states and LGBTQ‑related content as red alerts, without consent or recourse.

At protests, masked demonstrators have been identified using facial and emotion analysis software, stripping anonymity and chilling participation.

Commercial data brokers targeting NGOs

Despite their benevolent missions, NGOs often rely on external service providers to manage beneficiary databases. This exposes sensitive information (names, health status, political affiliation, biometric identifiers) to third‑party data brokers, who package and sell it to advertisers, government agencies, or law enforcement, often violating the original consent agreements and disproportionately harming marginalised communities.

Hybrid warfare & disinformation ecosystems

Data narratives have become weapons: in conflict zones or disaster areas, “official” data can be manipulated in disinformation campaigns, and governments and other actors weaponise harm statistics or census data to suppress communities or justify intervention. During the crisis in Bangladesh’s Chittagong Hill Tracts, for example, coordinated social media and political amplification inflated Indigenous death counts from four to one hundred, turning community violence into a tool of political suppression. Similarly, in Gaza, conflicting casualty figures published by the Hamas‑run Health Ministry and critiqued by outside analysts have been weaponised by various parties, influencing global narratives and policy positions. More broadly, humanitarian data in conflict contexts are often distorted, whether in Somalia, Afghanistan, Kosovo or Ukraine, to justify intervention or discredit affected populations.

Climate‑data grab zones

Crisis zones, whether hit by hurricanes, floods, or drought, see external actors seizing local data under the guise of aid or research, with little accountability or benefit-sharing. A 2025 study on North‑Eastern Nigeria and South Sudan showed that humanitarian agencies often collect extensive data from internally displaced persons, yet it is extracted without feedback, consent, or clarity on usage, undermining agency and trust in affected communities. Critics say this mirrors data colonialism and parachute science: data flows outward while benefits remain external.

For instance, after the September 2023 floods in Derna, Libya, triggered by Storm Daniel, local humanitarian data (health records, locations, damage assessments) were rapidly transferred to international platforms for analytics, with minimal transparency to affected communities. See the ISS analysis and Human Rights Watch account.

Likewise, in the aftermath of Cyclone Freddy in Malawi (March 2023), massive datasets—including displacement maps and vulnerability assessments—were curated primarily by external data entities (e.g. the mwBTFreddy AI dataset), with local communities largely excluded from decision‑making or sharing in the insights derived.


Case studies and recent initiatives

AI for rural mapping—good or bad?

“Bridges to Prosperity” used an AI tool called WaterNet to map 77 million miles of waterways in underserved regions, tripling previously known mapping coverage. The tool helped plan bridges in Rwanda, Ethiopia, Uganda and Zambia, addressing data inequity in infrastructure planning.

Ethical data labour in AI

Indian startup Karya offers a model that pays rural speakers of marginalised languages fairly and provides royalties when their voice data is resold. It’s an emerging fair-labour model for AI data production.

Indigenous-led frameworks & biodiversity

GBIF’s Indigenous data governance pilot projects integrate Local Contexts’ Traditional Knowledge Labels into biodiversity records—ensuring provenance metadata and CARE-aligned governance in global data repositories.
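
To make that concrete, here is a hedged sketch of what a CARE-aligned biodiversity record might carry. The field names and values below are illustrative assumptions, not GBIF’s or Local Contexts’ actual schema; the point is that provenance and community governance terms travel with the record itself.

```python
# Illustrative sketch only: field names are hypothetical, not the real
# GBIF / Local Contexts schema. The idea: the record carries a
# machine-readable pointer back to the originating community's terms.
occurrence_record = {
    "scientificName": "Acacia aneura",
    "locality": "generalised at community request",  # precision withheld deliberately
    "tkLabels": [
        {
            "label": "TK Attribution",        # one of Local Contexts' label types
            "community": "Example Nation",    # hypothetical community name
            "projectUrl": "https://localcontextshub.org/projects/<project-id>",  # placeholder
        }
    ],
    "careNotice": "Reuse requires engagement with the originating community.",
}
```

Because the labels live in the record’s metadata rather than in a side agreement, they follow the data into downstream aggregations, which is exactly what provenance-first governance requires.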

Community data justice in rural contexts

Recent academic work describes “community data cooperatives” and combined data‑pool models to enable rural communities to co‑control and benefit from local data—and build equitable AI ecosystems.


Where things stand

  • Who defines impact in 2025? Still mostly funders, tech vendors, and governments, but increasingly mediated by opaque algorithmic impact scores.
  • Informed consent is a myth in many contexts: few of the people whose data is collected fully understand what it may power, share, or enable.
  • Data sovereignty works, but rarely scales. Indigenous communities are ahead: frameworks and infrastructure rooted in the CARE principles and local governance models are emerging, though mainstream adoption lags.

Ways forward

  1. Adopt data minimalism: collect only essential data, hold it briefly, and destroy it when no longer needed (see the sketch after this list).
  2. Support community-led infrastructure: data cooperatives, federated clouds, local hosting, encryption‑first tools that embed consent and control.
  3. Centre qualitative narratives: human stories are valid forms of evidence—don’t defer to dashboards alone.
  4. Demand algorithmic transparency clauses in tech vendor contracts and opt‑out rights for disruptive technologies.
  5. Build cross‑movement alliances: privacy advocates, AI ethicists, humanitarian workers, Indigenous and marginalised groups working together to reclaim control.
  6. Push for algorithmic reparations: design tools explicitly to reduce harm and redistribute resources to historically oppressed groups.
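
As a concrete illustration of data minimalism (point 1 above), here is a minimal sketch of a retention policy enforced in code. The categories, retention periods, and record format are assumptions for illustration; the principle is that every data category has an explicit purpose and expiry, and destruction is the default.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: every data category an organisation
# collects gets an explicit time-to-live. Anything without an entry,
# or past its deadline, is destroyed rather than archived "just in case".
RETENTION = {
    "case_notes": timedelta(days=365),
    "contact_details": timedelta(days=90),
    "survey_responses": timedelta(days=30),
}

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records still inside their retention window.

    Each record is a dict with a 'category' key and a timezone-aware
    'collected_at' datetime. Unknown categories are treated as expired.
    """
    now = datetime.now(timezone.utc)
    kept = []
    for record in records:
        ttl = RETENTION.get(record["category"])
        if ttl is None or now - record["collected_at"] >= ttl:
            continue  # destroy: no policy for it, or its window has closed
        kept.append(record)
    return kept
```

Run on a schedule, a routine like this turns “hold it briefly” from a policy document into an enforced behaviour; the hard part is agreeing on the schedule, not writing the code.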

2025

The core 2015 critique still stands: data obsession remains a proxy for impact. But in 2025, data has grown more powerful—and more dangerous. If nonprofits don’t seize back control over data practices—who collects it, how it’s used, who benefits—then the weapon once wielded by funders and platforms will soon be aimed at the communities they’re meant to serve.