WhatsApp’s sticker creator generates image of armed child for prompts including ‘Palestine’: The Guardian

Meta was accused of bias after users reported having posts supportive of Palestinians removed. (Guardian/Sourced)
Updated 03 November 2023

  • Prompts like “Israeli boy” generated drawings of children playing football and reading
  • A Meta spokesperson said the company was aware of the issue and addressing it

LONDON: An artificial intelligence image generator for WhatsApp returns pictures of boys bearing arms when prompted with the terms “Palestine,” “Palestinian,” or “Muslim boy Palestinian,” The Guardian found on Thursday.

The WhatsApp feature allows users to create their own stickers using typed prompts.

Another test of the prompt “Palestine” by the British newspaper yielded an image of a man carrying what appears to be an AK-47 rifle.

However, for prompts like “Israeli boy,” the same Meta-owned AI sticker creator generated drawings of children playing football and reading.

Prompted with “Israel army,” the feature created illustrations of soldiers smiling and praying, while the prompt “Israel” returned a dancer wearing blue and a man holding the Israeli flag. 

Meta employees have reported the issue to the company, according to The Guardian.

Meta spokesperson Kevin McAlister said the company was aware of the issue and taking steps to resolve it.

“As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems,” he told The Guardian.

“We’ll continue to improve these features as they evolve and more people share their feedback.”

The discovery came after Meta drew scrutiny over complaints from Instagram and Facebook users, who said the Meta-owned platforms were censoring pro-Palestinian posts amid the ongoing Israeli violence in the Gaza Strip.

Meta was accused of bias after users reported having posts supportive of Palestinians removed or shadow banned.

Human Rights Watch on Tuesday urged social media users to report incidents of censorship, particularly on Meta’s Instagram and Facebook, regarding the Israel-Palestine conflict.

Instagram users also reported that the platform translated an Arabic phrase meaning “Palestinian, praise be to Allah” as “Palestinian terrorist.” The company apologized and blamed the issue on a “glitch.”

This is not the first time Meta has received criticism from Palestinian activists, creators and journalists. A study commissioned by Meta in September 2022 concluded that Facebook and Instagram’s content policies during Israeli attacks on Gaza in May 2021 violated Palestinian human rights, including “freedom of expression, freedom of assembly, political participation, and non-discrimination.”