LONDON: WhatsApp is allegedly being used to target Palestinians through Israel’s contentious artificial intelligence system, Lavender, which has been linked to the deaths of Palestinian civilians in Gaza, recent reports have revealed.
Earlier this month, Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call published a report by journalist Yuval Abraham, exposing the Israeli army’s use of an AI system capable of identifying targets associated with Hamas or Palestinian Islamic Jihad.
This revelation, corroborated by six Israeli intelligence officers involved in the project, has sparked international outrage, as it suggests the military has used Lavender to target and eliminate suspected militants, often resulting in civilian casualties.
In a recent blog post, software engineer and activist Paul Biggar highlighted Lavender’s reliance on WhatsApp.
He noted that membership in a WhatsApp group containing a suspected militant can feed into Lavender’s identification process, underscoring the pivotal role messaging platforms play in supporting AI targeting systems like Lavender.
“A little-discussed detail in the Lavender AI article is that Israel is killing people based on being in the same WhatsApp group as a suspected militant,” Biggar wrote. “There’s a lot wrong with this.”
He explained that users often find themselves in groups with strangers or acquaintances.
“A lot of difficult questions for Meta before that trust can be rebuilt, and I don’t honestly believe that Meta can or will answer them,” Biggar wrote on X on April 16, 2024.
Biggar also suggested that WhatsApp’s parent company, Meta, may be complicit, whether knowingly or unknowingly, in these operations.
He accused Meta of potentially violating international humanitarian law and its own commitments to human rights, raising questions about the privacy and encryption claims of WhatsApp’s messaging service.
The revelation is only the latest in a series of what critics describe as Meta’s attempts to silence pro-Palestinian voices.
Even before the conflict began, the Menlo Park giant faced accusations of double standards favoring Israel.
In February, the Guardian revealed that Meta was considering expanding its hate speech policy to cover the term “Zionist.”
More recently, Meta quietly introduced a new feature on Instagram that automatically limits users’ exposure to what it deems “political” content, a decision criticized by experts as a means of systematically censoring pro-Palestinian content.
Responding to requests for comment, a WhatsApp spokesperson said that the company could not verify the accuracy of the report but assured that “WhatsApp has no backdoors and does not provide bulk information to any government.”