https://arab.news/wvj2k
- Accessing political content now requires users to go into their settings and actively opt in via their preferences
- “Social media is an essential platform for people to bear witness and speak out against abuses,” HRW says
LONDON: Meta has come under scrutiny again after quietly rolling out a new Instagram feature that automatically limits users’ exposure to what it considers “political” content.
The tech giant has been accused of censorship during a global election year, with rights groups telling Arab News that the move risks fueling the systematic suppression of pro-Palestinian content.
Instagram users discovered that the feature, first announced on Feb. 9, had been implemented on Friday without any direct notification.
Accessing political content now requires users to go into their settings and actively opt in via their preferences.
Meta’s definition of political content is ambiguous, describing it as likely to mention “government, elections, or social topics that affect a group of people or society at large.”
Meta referred Arab News to a little-noticed statement from February without providing further detail. In explaining the decision, the company said that it wanted to make its platforms “a great experience for everyone.”
“If you decide to follow accounts that post political content, we don’t want to get between you and their posts, but we also don’t want to proactively recommend political content from accounts you don’t follow,” it said.
“Under the United Nations Guiding Principles on Business and Human Rights (UNGPs), companies have a responsibility to avoid infringing on human rights, identify and address the human rights impacts of their operations, and provide meaningful access to a remedy to those whose rights they abused,” Rasha Younes of Human Rights Watch told Arab News.
“For social media companies, including Meta, this responsibility includes aligning their content moderation policies and practices with international human rights standards, ensuring that decisions to take down content are transparent and not overly broad or biased, and enforcing their policies consistently,” Younes said.
The update applies to Explore, Reels, in-feed recommendations, and the suggested users that Instagram shows to users.
Meta said that users would still be able to see political content from the accounts they currently followed.
It also stated that accounts flagged by Meta for posting political content could appeal the decision preventing them from being recommended into feeds if they believe it was applied incorrectly.
The announcement of the policy change was also posted on Threads by Adam Mosseri, Meta’s head of Instagram.
Explaining the company’s decision, the American-Israeli businessman said: “Our goal is to preserve the ability for people to choose to interact with political content, while respecting each person’s appetite for it.”
The recent policy is part of Meta’s broader strategy to distance its services from political and news content, signaling a significant shift in how the company views its role in the information ecosystem.
The company plans to remove the news tab from Facebook in Australia and the US by early April.
“One of the top pieces of feedback we’re hearing from our community right now is that people don’t want politics and fighting to take over their experience on our services,” Meta CEO Mark Zuckerberg said during Facebook’s earnings call in January 2021.
However, the implementation of this recent policy has sparked outrage, particularly in light of the war in Gaza.
“Instagram’s move to limit ‘political content’ on the platform risks fueling censorship of content in support of Palestine, at a time of unspeakable atrocities and repression already stifling Palestinians’ expression. Social media is an essential platform for people to bear witness and speak out against abuses,” Younes said.
In December, Human Rights Watch accused Meta of participating in a wider wave of online censorship, specifically targeting content in support of Palestine and Palestinian human rights, against the backdrop of the war.
The report documented 1,049 cases in which peaceful pro-Palestine content was taken down or suppressed.
Younes recommended that Meta “improve transparency around requests by governments’ Internet referral units, including Israel’s Cyber Unit, to remove content ‘voluntarily’ — that is, without a court or administrative order to do so — and about its use of automation and machine learning algorithms to moderate or translate Palestine-related content.
“It should carry out due diligence on the human rights impact of temporary changes to its recommendation algorithms that it introduced in response to the hostilities between Israel and Hamas since Oct. 7.”