The refugee tents stretch out across a vast sandy desert — tens of thousands lined up in neat rows, some organized in the center to spell out: “All Eyes on Rafah.”
It’s a viral post shared by millions of people across the globe — including Nobel Peace Prize winner Malala Yousafzai, “Bridgerton” actress Nicola Coughlan, and model Bella Hadid — in response to Israel’s missile strike Sunday that killed dozens of civilians in a camp for displaced Palestinians in Rafah, a city in the southern Gaza Strip.
But the image, which features blurring, unusual shadows and pattern repetition typical of artificial intelligence, appears to be fake. Experts say it represents a new form of AI-generated activist imagery.
“It’s one of the first major examples of AI being used in viral activism,” said Matt Navarra, a social media consultant based in the United Kingdom. “This is an evolution of what we’ve seen before in using social media platforms to generate a message that potentially can go viral and draw the media and politicians’ attention to a particular cause.”
Generating an image from AI, Navarra said, can allow activists to avoid breaching copyright or circumvent social media platform rules on violence and incitement. Over the course of the Israel-Hamas war, many activists, journalists and human rights organizations have complained that social media companies such as Meta, which owns Instagram and Facebook, have removed images and videos of graphic content and violence in Gaza, including photos of injured and killed Palestinians.
“The average person isn’t particularly good at using Photoshop or sourcing content, which is not going to breach copyright, or shows horrible things that people don’t like or Meta wouldn’t want posted on the platform,” Navarra said. “Certainly, AI opens up that opportunity to create viral activist posts.”
But just as the advent of generative AI offers activists an easier, cheaper way to create powerful images without relying on stock photos or breaching Meta’s rules, it also opens the door to abuse. Experts worry that some content creators will use highly engaging AI-generated images to spread false information or seed links to spam.
“The goal may well be to get lots of viral attention, get lots of likes, get lots of shares, get lots of comments,” Navarra said. “And then once it’s gone viral to flip that page and add links to spam content or phishing content or to use it in other negative or spammy ways.”
Others say that AI images offer a sanitized version of the Rafah refugee camp at a time when journalists and activists are struggling to share real photos from Gaza on social media.
“People have been posting really graphic and disturbing content to raise awareness, and that gets censored while a piece of synthetic media goes viral; it is disturbing,” said Deborah Brown, a senior researcher and advocate on digital rights at Human Rights Watch, who co-authored a report on Meta’s moderation of Palestine-related content published last December.
The phrase “All Eyes on Rafah” was coined before Israel launched its assault against Hamas in Rafah on May 6, causing more than 1 million people to flee the southern Gaza city near the Egyptian border.
Rik Peeperkorn, who leads the World Health Organization’s office for Gaza and the West Bank, used the phrase in February when he spoke out against Israeli plans to enter the area around Rafah in an attempt to destroy Hamas strongholds.
“All eyes are on Rafah,” Peeperkorn said at a WHO press briefing, noting that Rafah was one of the last remaining refuges for displaced Palestinians.
The slogan — adopted by pro-Palestinian and humanitarian groups in recent months and scrawled on placards across university campuses — trended on social media after Sunday, when 45 people were killed and 200 people were injured by an Israeli strike, according to the Gaza Health Ministry. Israeli Prime Minister Benjamin Netanyahu called the strike, the single deadliest attack on the city since Israel launched its offensive three weeks ago, a “tragic mistake.”
Instagram has yet to label the viral “All Eyes on Rafah” graphic, which has been shared more than 40 million times on the platform, to let users know it was generated by AI.
“If content that’s synthetic isn’t being labeled and is confusing or muddying the waters in terms of what’s actually happening on the ground, that’s a problem,” Brown said.
Some photos of Rafah’s refugee encampment after Israel’s missile strike show a grimmer, more chaotic reality than the neat camp under blue skies shown by AI: in the hours after the strike, the camp was strewn with debris, and all that was left of some tents were smoldering orange embers billowing smoke into a gray sky.
Since Israel’s Sunday attack on Rafah, journalists and activists have said they have not been able to share photos on social media of the human cost of the strikes. On Monday, Lila Hassan, an independent investigative journalist based in New York, posted that she had not been able to engage or post anything on Instagram since she tried to share an image of a beheaded child in Rafah.
“I haven’t gotten a warning, and I can’t even submit a complaint,” Hassan wrote on the social media platform X. “This is censorship I can’t bypass.”
According to Human Rights Watch, social media companies such as Meta have repeatedly removed images and videos of graphic content and violence in Gaza, including photos of injured and killed Palestinians.
Last December, the New York-based nongovernmental organization published a 51-page report documenting Meta’s political restrictions during the Israel-Hamas war: “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook.” Between October and November 2023, the report said, it documented over 1,050 “takedowns and other suppression of content” on Instagram and Facebook posted by Palestinians and their supporters.
Among the censored posts were a video of Israelis urinating on Palestinians and footage of a Palestinian child shouting “Where are the Arabs?” after his sister was killed. Meta cited both as violating its policy on violence and incitement.
“In these cases,” the report argued, “the news value of the shared material was such that it is hard to justify a decision to block this content on the basis of a policy on violence and incitement.”
After “All Eyes on Rafah” went viral, some pro-Israel activists responded with counterposts: “Where were your eyes on October 7?” said one featuring a depiction of a Hamas soldier standing over an infant with red hair, which was later removed by Instagram, according to The Times of Israel. “If your eyes are on Rafah, help us find our hostages,” said another Instagram story created by Bring Them Home Now.
Other Jewish critics condemned “All Eyes on Rafah” as encouraging “lazy” and “unproductive” activism.
“What does sharing an AI image that looks nothing like Gaza actually do?” wrote Josh Kaplan, head of digital for the London-based Jewish Chronicle.
“The All Eyes on Rafah post is another vapid, lazy way to say ‘I care,’” Kaplan said, “not ‘I care about bringing the conflict to an end with as little human suffering as possible,’ not even ‘I care about all civilians killed.’ It says nothing productive.”