LONDON: A new investigative report on Monday revealed that Facebook failed to curb extremist content on the platform, with its own tools tagging photos of beheadings and violent hate speech from Daesh and the Taliban as “insightful” and “engaging.”
According to the report by the Institute for Strategic Dialogue, a think tank that tracks online extremism, extremists have turned the social media platform into a weapon “to promote their hate-filled agenda and rally supporters” in hundreds of groups of varying size.
These groups were discovered by Moustafa Ayad, an executive director at the institute.
“It’s just too easy for me to find this stuff online,” he said. “What happens in real life happens in the Facebook world.
“It’s essentially trolling — it annoys the group members and similarly gets someone in moderation to take note, but the groups often don’t get taken down. That’s what happens when there’s a lack of content moderation,” Ayad added.
These groups have popped up across Facebook over the past 18 months. Some of the posts were tagged as “insightful” and “engaging” by a new Facebook tool released in November.
The findings of the report were shared with Politico, which notified Meta about the presence of these groups. Meta subsequently removed the Facebook groups promoting Islamic extremist content.
“We have removed the Groups brought to our attention,” a Meta spokesperson said. “We don’t allow terrorists on our platform and remove content that praises, represents or supports them whenever we find it.
“We know that our enforcement isn’t always perfect, which is why we are continuing to invest in people and technology to remove this type of activity faster, and to work with experts in terrorism, violent extremism and cyber intelligence to disrupt misuse of our platform,” the statement concluded.
In October, documents leaked by whistleblower Frances Haugen revealed that Facebook’s automated systems, designed to identify hate speech and extremist content, struggle when it comes to non-English languages.
Arabic poses particular challenges to these automated systems and to human moderators, both of which can struggle with its many dialects.
The documents reveal that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short of moderators who speak local languages and understand cultural contexts.