How do we stop AI-generated 'poverty porn' fake images?

2025-10-23 16:33 | doi.org

There is an important and necessary conversation happening right now about the use of generative artificial intelligence in global health and humanitarian communications.

Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering – the very tropes many of us have worked for decades to banish.

The alarms are valid.

The images are harmful.

But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.

The problem is not the tool.

The problem is the user.

Generative artificial intelligence is not the cause of poverty porn.

The root cause is the deep-seated racism and colonial mindset that have defined the humanitarian aid and global health sectors since their inception.

This is not a new phenomenon.

It is a long-standing pattern.

In my private conversations with colleagues and researchers like Alenichev, I find we often agree on this point.

Yet, the public-facing writing and research seem to stop short, focusing on the technological symptom rather than the systemic illness.

It is vital we correct this focus before we implement the wrong solutions.

Long before Midjourney, large organizations and their communications teams were propagating the worst kinds of caricatures.

I know this.

Many of us know this.

We remember the history of award-winning photographers being sent from the Global North to “find… miserable kids” and stage images to meet the needs of funders. Organizations have always been willing to manufacture narratives that “show… people on the receiving end of aid as victims”.

These working cultures, which demand images of suffering, which view Black and Brown bodies as instruments for fundraising, and which prioritize the “western gaze”, existed decades before artificial intelligence.

Artificial intelligence did not create this impulse.

It just made it cheaper, faster, and easier to execute.

It is an enabler, not an originator.

If an organization’s communications philosophy is rooted in colonial stereotypes, it will produce colonial stereotypes, whether it is using a 1000-dollar-a-day photographer or a 30-dollar-a-month software subscription.

If we incorrectly identify artificial intelligence as the cause of this problem, our “solution” will be to ban the technology.

This would be a catastrophic mistake.

First, it is a superficial fix.

It allows the very organizations producing this content to performatively cleanse themselves by banning a tool, all while evading the fundamental, painful work of challenging their own underlying racism and colonial impulses.

The problem will not be solved. It will simply revert to being expressed through traditional (and often staged) photography.

Second, it punishes the wrong people.

For local actors and other small organizations, generative artificial intelligence is not necessarily a tool for creating poverty porn.

It is a tactical advantage in a fight for survival.

Such organizations may lack the resources for a full communications team.

They are then “punished by algorithms” that demand a constant stream of visuals, burying stories of organizations that cannot provide them.

Furthermore, some organizations committed to dignity in representation are also using artificial intelligence to solve other deep ethical problems.

They use it to create dignified portraits for stories without having to navigate the complex and often extractive issues of child protection and consent.

They use it to avoid exploiting real people.

A blanket ban on artificial intelligence in our sector would disarm small, local organizations.

It would silence those of us trying to use the tool ethically, while allowing the large, wealthy organizations to continue their old, harmful practices unchanged.

This is why I must insist we reframe the debate.

The question is not if we should use artificial intelligence.

The question is, and has always been, how we challenge the racist systems that demand these images in the first place.

My Algerian ancestors fought colonialism.

I cannot separate my work at The Geneva Learning Foundation from the struggle against racism and fighting for the right to tell our own stories.

That philosophy guides how I use any tool, whether it is a word processor or an image generator.

The tool is not the ethic.

We need to demand accountability from organizations like the World Health Organization, Plan International, and even the United Nations.

We must challenge the working cultures that green-light these campaigns.

We should also, as Arsenii rightly points out, support local photographers and artists.

But we must not let organizations off the hook by allowing them to blame a piece of software for their own lack of imagination and their deep, unaddressed colonial legacies.

Artificial intelligence is not the problem.

Our sector’s colonial mindset is.

Image: The Geneva Learning Foundation Collection © 2025

As the primary source for this original work, this article is permanently archived with a DOI to meet rigorous standards of verification in the scholarly record. Please cite this stable reference to ensure ethical attribution of the theoretical concepts to their origin.

Reda Sadki (2025). How do we stop AI-generated ‘poverty porn’ fake images? Reda Sadki: Learning to make a difference. https://doi.org/10.59350/03c4y-r2d18



Comments

  • By droptablemain 2025-10-23 17:16 (2 replies)

    You can't police people into not being racist. People have always been racist/xenophobic to some extent and always will be. It's cultural conflict and tribal in nature.

    • By krapp 2025-10-23 17:21 (2 replies)

      You can police the execution of people's racist intent, and we often do. Freedom of speech and freedom of association mean racists aren't guaranteed a platform. Many countries (not the US, notably) police "hate speech" on the premise that such speech inevitably leads to hateful actions.

      Arguing from human nature isn't compelling. Rape and murder are part of human nature as well, and people have always done both, yet it isn't controversial to police such behaviors. Racism is no different. We aren't mere animals entirely beholden to our base instincts, after all.

      • By droptablemain 2025-10-23 18:10 (1 reply)

        I would much rather live in a society that tolerates and shakes off a bit of racism than one that jails people for offensive memes.

        • By krapp 2025-10-23 18:14

          Of course I wasn't talking about or advocating jailing people for offensive memes, but I understand this is one of those subjects Hacker News can't approach in good faith and I take the downvotes and shit-eating snark in stride.

      • By _9ptr 2025-10-23 17:53 (1 reply)

        The practice shows that hate speech is just speech they hate.

        • By krapp 2025-10-23 18:15

          Most reasonable people do hate racism, yes.

  • By advisedwang 2025-10-23 17:58

  • By redasadki 2025-10-23 16:33 (2 replies)

    Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering—the very tropes many of us have worked for decades to banish.

    The alarms are valid.

    The images are harmful.

    But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.

    The problem is not the tool.

    The problem is the user.

      • By redasadki 2025-10-23 16:57 (2 replies)

        Yeah, of course. But that's the imperfect best we've been able to do as societies to respond to the needs of the most vulnerable. Unless you think we should just let people die when there is a disaster or a catastrophe that is overwhelming?

        • By PaulHoule 2025-10-23 18:09

          We've seen a hollowing out of the state in the core under neoliberalism, which on one hand is out-and-out austerity and on the other is the inability to execute that Ezra Klein talks about.

          In the same time period we've seen donor organizations like the Gates Foundation pursue a model where NGOs pick and choose a few state functions that they'd like to take over in the periphery. This bypassing of the state gets things done in the short term but in the long term it doesn't help countries develop the state capacity to do things themselves.

          My radical proposal is that third world countries develop and tax their economies to provide the services that their people want, and that those governments should be accountable to those people. However, the NGO-industrial complex is part of the same tendency that erodes state capacity in both the core and periphery.

          Structurally, the problem at hand won't go away unless NGOs get past the model of showing people poverty porn to make them donate or believe in the legitimacy of the NGO. In the end, they could send a photographer out to a refugee camp to make very similar images that are real, and if you think the fake images are harmful, the real images are too.

        • By psunavy03 2025-10-23 17:12

          [flagged]

    • By Retric 2025-10-23 17:16 (2 replies)

      The problem is the tool.

      To suggest otherwise is to suggest anyone should be able to buy nuclear weapons which on their own do nothing.

      Bad actors can only leverage what exists. All the benefits and harms come from the existence of those tools, so it's a good idea to consider whether making such things makes the world better or worse.

      • By redasadki 2025-10-23 17:52 (1 reply)

        This assumes 'we' (ie societies) are in a position to stop it - whether that's nuclear weapons or AI. If we are not, then what can be usefully done is going to shift… by a lot.

        • By Retric 2025-10-23 18:18

          > This assumes 'we' (ie societies) are in a position to stop it

          There are major advantages to understanding the world as it is, independent of anything else. People make tradeoffs around harm all the time; pretending the harm doesn't exist is pointless.

          We can mitigate harm from earthquakes and blizzards independently from our ability to prevent such events. That comes from understanding such events as more than just acts of gods who would happily use other means should we try to mitigate the harm from earthquakes etc.

      • By jmull 2025-10-23 17:33 (1 reply)

        We might want to treat two things differently when for one of them, its only function is unimaginably massive destruction and for the other it’s to produce words and images.

        • By Retric 2025-10-23 18:11 (1 reply)

          Treating them differently based on the harm they cause is still judging them based on the harm they cause, rather than treating them as neutral entities.

          • By jmull 2025-10-23 19:03

            I don’t think it makes any sense to ignore the immediate consequences of using/abusing a tool when trying to determine the nature of any regulations or other curbs around that tool.

HackerNews