
Commenters reacted negatively to the images, with many expressing unease over Amnesty’s use of a technology most often associated with oddball art and memes to depict human rights abuses. Amnesty pushed back, telling Gizmodo it opted to use AI in order to depict the events “without endangering anyone who was present.” Amnesty claims it consulted with partner organizations in Colombia and ultimately decided to use the tech as a privacy-preserving alternative to showing real protesters’ faces.

“Many people who participated in the National Strike covered their faces because they were afraid of being subjected to repression and stigmatization by state security forces,” an Amnesty spokesperson said in an email. “Those who did show their faces are still at risk and some are being criminalized by the Colombian authorities.”


Amnesty went on to say the AI-generated images were a necessary substitute to illustrate the events, since many of the cited rights abuses allegedly occurred under the cover of darkness after Colombian security forces cut off electricity access. The spokesperson said the organization added a disclaimer at the bottom of the images noting they were created using AI in an attempt to avoid misleading anyone.

“We believe that if Amnesty International had used the real faces of those who took part in the protests it would have put them at risk of reprisal,” the spokesperson added.


Critics say rights abusers could use AI images to discredit authentic claims

Human rights experts who spoke with Gizmodo fired back at Amnesty, arguing that the use of generative AI could set a troubling precedent and further undermine the credibility of human rights advocates. Sam Gregory, who leads WITNESS, a global human rights network focused on the use of video, said the Amnesty AI images did more harm than good.


“We’ve spent the last five years talking to hundreds of activists and journalists and others globally who already face delegitimization of their images and videos under claims that they are faked,” Gregory told Gizmodo. Increasingly, Gregory said, authoritarian leaders try to bury a piece of audio or video footage depicting a human rights violation by immediately claiming it’s deepfaked.

“This puts all the pressure on the journalists and human rights defenders to ‘prove real,’” Gregory said. “This can occur preemptively too, with governments priming it so that if a piece of compromising footage comes out, they can claim they said there was going to be ‘fake footage.’”


Gregory acknowledged the importance of anonymizing individuals depicted in human rights media but said there are many other ways to effectively present abuses without resorting to AI image generators or “tapping into media hype cycles.” Media scholar and author Roland Meyer agreed and said Amnesty’s use of AI could actually “devalue” the work done by reporters and photographers who have documented abuses in Colombia.


A potentially dangerous precedent

Amnesty told Gizmodo it doesn’t currently have any policies for or against using AI-generated images, though a spokesperson said the organization’s leaders are aware of the possibility of misuse and try to use the tech sparingly.


“We currently only use it when it is in the interest of protecting human rights defenders,” the spokesperson said. “Amnesty International is aware of the risk of misinformation if this tool is used in the wrong way.”

Gregory said any rule or policy Amnesty does implement regarding the use of AI could prove significant, because it could quickly set a precedent that others will follow.


“It’s important to think about the role of big global human rights organizations in terms of setting standards and using tools in this way that doesn’t have collateral harms to smaller, local groups who face much more extreme pressures and are targeted repeatedly by their governments to discredit them,” Gregory said.