Zuckerberg says the White House pressured Facebook over some COVID-19 content during the pandemic

Mark Zuckerberg, CEO of Meta, vowed that the social media giant would push back if it faced such demands again. (AFP)
Updated 27 August 2024

  • Officials, including those from the White House, ‘repeatedly pressured’ Facebook for months to take down ‘certain COVID-19 content including humor and satire’
  • The letter is the latest repudiation by Zuckerberg of efforts to target misinformation around the coronavirus pandemic during and after the 2020 presidential election

WASHINGTON: Meta CEO Mark Zuckerberg says senior Biden administration officials pressured Facebook to “censor” some COVID-19 content during the pandemic and vowed that the social media giant would push back if it faced such demands again.
In a letter to Rep. Jim Jordan, the Republican chair of the House Judiciary Committee, Zuckerberg alleges that the officials, including those from the White House, “repeatedly pressured” Facebook for months to take down “certain COVID-19 content including humor and satire.”
The officials “expressed a lot of frustration” when the company didn’t agree, he said in the letter.
“I believe the government pressure was wrong and I regret that we were not more outspoken about it,” Zuckerberg wrote in the letter dated Aug. 26 and posted on the committee’s Facebook page and to its account on X.
The letter is the latest repudiation by Zuckerberg of efforts to target misinformation around the coronavirus pandemic during and after the 2020 presidential election, particularly as allegations have emerged that some posts were deleted or restricted wrongly.
“I also think we made some choices that, with the benefit of hindsight and new information, we wouldn’t make today,” he said, without elaborating. “We’re ready to push back if something like this happens again.”
In response, the White House said in a statement that, “When confronted with a deadly pandemic, this Administration encouraged responsible actions to protect public health and safety. Our position has been clear and consistent: we believe tech companies and other private actors should take into account the effects their actions have on the American people, while making independent choices about the information they present.”
Experts warn this year’s US election could be swamped by misinformation on social media with the proliferation of artificial intelligence and other tools to produce false news stories and content that could mislead voters.
Facebook in early 2021 appended what Zuckerberg called labels with “credible information” to posts about COVID-19 vaccines. That’s after it moved in April 2020 — just as the virus had led to global shutdowns and radical changes in everyday life — to warn users who shared misinformation about COVID-19.
Conservatives have long derided Facebook and other major tech companies as favoring liberal priorities and accused them of censorship.
Zuckerberg has tried to change the company’s perception on the right, going on podcaster Joe Rogan’s show in 2022 and complimenting Republican nominee Donald Trump’s response to an assassination attempt as “badass.” He sent Monday’s letter to the House Judiciary Committee, whose chairman, Jordan, is a longtime Trump ally.
Zuckerberg also said he would no longer donate money to widen election access for voters through the Chan Zuckerberg Initiative, the company that runs the philanthropy for him and his wife, Priscilla Chan.
The couple previously donated $400 million to help local election offices prepare for voters in the 2020 presidential election, with funds used for protective equipment to prevent the spread of the coronavirus at polling sites, drive-thru voting locations and equipment to process mail ballots.
“I know that some people believe this work benefited one party over the other” despite analyses showing otherwise, he said. “My goal is to be neutral and not play a role one way or another — or to even appear to be playing a role. So I don’t plan on making a similar contribution this cycle.”


Sky News drops anchor following controversial interview with Israeli official

Updated 12 September 2024

  • In January interview with Israel’s UN Ambassador Danny Danon, presenter Belle Donati compared Israel’s military actions in Gaza to the Holocaust

LONDON: Sky News has not renewed the contract of anchor Belle Donati following backlash over a heated interview with Israel’s UN Ambassador Danny Danon in January.

During the live broadcast, Donati compared Israel’s military actions in Gaza to the Holocaust, sparking widespread criticism. Sky News later issued an on-air apology for her remarks, though Donati did not do so herself.

According to entertainment outlet Deadline on Tuesday, the network chose not to renew Donati’s contract, which expired in early September.

She has not appeared on the channel since the incident, and her social media accounts have been inactive since the interview. Sky News declined to comment further on the matter.

The controversy arose when Donati questioned an op-ed by Danon in the Wall Street Journal that, she alleged, advocated for “ethnic cleansing” in Gaza.

“I will not allow it. Ethnic cleansing, that’s a word you used. If you read my article, I spoke about voluntary immigration,” Danon replied.

Donati said: “The sort of voluntary relocation of many Jewish people during the Holocaust, I imagine.”

The remarks sparked an immediate backlash with Danon, a member of Prime Minister Benjamin Netanyahu’s Likud party, accusing the presenter of antisemitism.

“Shame on you for that comparison,” Danon said. “You should apologize for what you just said.”

Following the broadcast, Danon wrote to Sky News management, calling for Donati’s resignation.

Sky News quickly distanced itself from her comments, labeling them “completely inappropriate” and offering an “unreserved apology” to both Danon and viewers.


Australia threatens fines for social media giants enabling misinformation

Updated 12 September 2024

  • Breaches face fines up to 5 percent of global revenue
  • Bill seeks to prevent election, public health disinformation

SYDNEY: Australia said it will fine Internet platforms up to 5 percent of their global revenue for failing to prevent the spread of misinformation online, joining a worldwide push to rein in borderless tech giants but angering free speech advocates.
The government said it would make tech platforms set codes of conduct governing how they stop dangerous falsehoods spreading, to be approved by a regulator. The regulator would set its own standard if a platform failed to do so, then fine companies for non-compliance.
The legislation, to be introduced in parliament on Thursday, targets false content that hurts election integrity or public health, calls for denouncing a group or injuring a person, or risks disrupting key infrastructure or emergency services.
The bill is part of a wide-ranging regulatory crackdown by Australia, where leaders have complained that foreign-domiciled tech platforms are overriding the country’s sovereignty, and comes ahead of a federal election due within a year.
Already Facebook owner Meta has said it may block professional news content if it is forced to pay royalties, while X, formerly Twitter, has removed most content moderation since being bought by billionaire Elon Musk in 2022.
“Misinformation and disinformation pose a serious threat to the safety and wellbeing of Australians, as well as to our democracy, society and economy,” said Communications Minister Michelle Rowland in a statement.
“Doing nothing and allowing this problem to fester is not an option.”
An initial version of the bill was criticized in 2023 for giving the Australian Communications and Media Authority too much power to determine what constituted misinformation and disinformation, the term for intentionally spreading lies.
Rowland said the new bill specified the media regulator would not have power to force the takedown of individual pieces of content or user accounts. The new version of the bill protected professional news, artistic and religious content, while it did not protect government-authorized content.
Some four-fifths of Australians wanted the spread of misinformation addressed, the minister said, citing the Australian Media Literacy Alliance.
Meta, which counts nearly nine in 10 Australians as Facebook users, declined to comment. Industry body DIGI, of which Meta is a member, said the new regime reinforced an anti-misinformation code it last updated in 2022, but many questions remained.
X was not immediately available for comment.
Opposition home affairs spokesman James Paterson said that while he had yet to examine the revised bill, “Australians’ legitimately-held political beliefs should not be censored by either the government, or by foreign social media platforms.”
The Australian Communications and Media Authority said it welcomed “legislation to provide it with a formal regulatory role to combat misinformation and disinformation on digital platforms.”


AI must reflect human values for successful future job market, industry experts say



RIYADH: Embedding human values into AI is vital if the job market is to balance the need for automation with the need for human input, experts at the Global AI Summit in Riyadh said on Wednesday.

During a panel discussion, titled “Job Disruption: Is it All Lost?,” Ray Wang, chairman and CEO of Constellation Research Inc, addressed this concern.

“We have to … make sure that we actually continue to operate at a machine-level scale and at a human scale, bringing those two areas together,” he said.

“When we think about the Internet age, it was open, it was decentralized — things were cheaper, we had a lot of players. This is closed, this is centralized. This is more expensive, and only a few will win … We have to work double as hard to make sure that jobs are going to be there.”

Wang said that jobs would not be “all lost” if the industry ensured a balance between the jobs that were replaced and the jobs that were created.

He said that it was the education system’s responsibility to teach children the right sets of skills to prepare them for future positions.

Mohamed Elhoseiny, associate professor of computer science at KAUST, echoed Wang’s view. He added that AI models needed to be developed to complement schooling rather than misusing AI to plagiarize work.

Elhoseiny also spoke about the importance of inserting human goals into AI designs and emphasized that humans were more powerful when working with AI than alone.

“A big problem right now in our schools is kids can use ChatGPT, for example, to solve problems. But this does not contribute to the very goal of developing the skills of the children, so how can we … help children do more and gain the skillsets, and how do we do that in a way that aligns with our (human) goals,” he said.

Nancy Giordano, author and founder of Play Big Inc., said she wants to embrace the new job market that will be created hand-in-hand with AI.

“Are we trying to hold on to jobs so that we can protect an economic system that we may have outgrown?” she said.

But for that future model to succeed, there was a need to rethink the approach to AI application, she said.

“How do we prepare economically for that kind of world?” Giordano asked. “We have not built the scaffolding for this new era that we’re heading into.”

Wang said that the PESTLE model, a framework for analyzing external political, economic, social, technological, legal and environmental factors, was “perfect for the scaffolding” of AI.

“And now is the time to actually do that,” he said.


The future tech helping to uncover hidden secrets of Saudi Arabia’s past

Updated 11 September 2024

  • Researchers at KAUST are developing AI models to help archaeologists and researchers in many other academic fields

RIYADH: Far from fearing a future powered by AI, researchers at King Abdullah University of Science and Technology (KAUST) are using it to uncover long-hidden secrets about Saudi Arabia’s past.

Prof. Bernard Ghanem, a specialist in computer vision and machine learning, said that in particular, AI is helping to discover archaeological sites that have yet to be unearthed.

“AI has applications in every part of our lives: analyzing the present, the future as well as the past,” Ghanem told Arab News.

His team at KAUST has trained AI models, using satellite data and images of known historical sites, to assist them in the identification of undiscovered sites across the country, he said. The resultant findings have fueled further archaeological research and are helping to preserve the Kingdom's rich cultural heritage.

However, archaeology is just one of the many areas of study in which Ghanem’s team is exploring the potential benefits of AI technology.

At the Image and Video Understanding Lab, for example, researchers are focusing on four main applications of AI, mostly rooted in machine learning, a branch of AI in which systems use existing data to help them solve problems using statistics and algorithms.

The first involves building machine-learning models specifically for use with video to harness the popularity and power of streaming.

“Video is the biggest big data out there; more than 80 percent of the internet traffic that we see is because of video,” said Ghanem, whose team is developing tools to analyze, retrieve, and even create videos, thereby leveraging the ubiquity of video in new AI applications.

The second application, which uses machine learning and deep learning to aid automation, is investigating the ways in which two-dimensional simulation data can be translated into the 3D world, with potential applications in gaming, robotics and other real-world scenarios.

“How do you, for example, play a game in the simulated world and then have that … work in the real world?” Ghanem said.

The third is exploring the foundations of machine learning, with a focus on identifying weaknesses in generative AI models and finding ways to improve them and prevent failures.

Ghanem compared this process to building immunity, whereby the AI models are deliberately “broken” to help understand their vulnerabilities so that these can be addressed and the models strengthened.

The fourth application involved the use of AI for science, specifically its use in efforts to advance chemical research.

Ghanem said his team is developing AI models able to act as virtual chemistry assistants by predicting the properties of molecules and perhaps discovering new compounds. Such innovations, he added, could play a critical role in the study and research of topics such as catalysis and direct air capture, thereby boosting efforts to combat climate change.

Ghanem also highlighted the environmental potential of AI, and the new Center of Excellence for Generative AI at KAUST, which he chairs. The center, which is due to open on Sunday, will explore four key pillars of research relating to: health and wellness; sustainability; energy and industrial leadership; and future economies.

“That’s where we’re going to focus on GenAI methods for sustainability,” Ghanem said.


Company transparency in the spotlight at Global AI Summit 

Updated 11 September 2024

  • ‘Decision-making points must remain human,’ says PwC executive

RIYADH: The importance of transparency and responsibility when using artificial intelligence came under scrutiny at the Global AI Summit.

Ali Hosseini, chief technology officer for PwC Middle East, told Arab News the consultancy had created “Responsible AI” — an approach to managing risks associated with AI-based solutions.

The initiative gave customers a clear picture of how the company used their data, he said.

“We take this to customers and we actually share the experience in terms of how we’re using it internally. So there are a number of areas in terms of general education for the employees and (in) what kind of cases they can use AI and in what kind of context they can depend on the output,” he explained.

Hosseini said companies thinking of implementing AI must ensure “the empowerment of the employees, self-responsibility, and AI use.” 

“We give (employees) the right tools coming from the right kind of credible sources to use on the day-to-day automation of tasks or augmenting their knowledge,” he told Arab News, concluding the interview with a key takeaway.

“(There is) a level of self-responsibility that people need to basically take an education in order to use AI … We advise the organization to use (AI) as giving you a basically augmented decision, but not the full decision … The decision-making point is always the human, not AI.” 


In a panel discussion at the summit, Priya Nagpurkar, vice president of the hybrid cloud and AI platform at IBM Research, said AI was created to “enhance and support human capacity, intelligence and expertise and not to replace it, and do so in a very transparent, explainable and responsible way.”

IBM has created watsonx.governance, an AI and data platform which monitors, directs and manages organizations’ AI activities.

IBM also creates factsheets, similar to a “nutrition label,” which document AI model metadata across the model’s lifecycle.

“These factsheets are a way of extracting the key facts that went into the data curation in that part of the lifecycle,” she explained. “(A) concrete example is, let’s say you are building an AI application to look at loan applications. The type of facts you want to know about the model you want to use are if there was bias in the data that went to training that model, was it evaluated? And was there a range of variation?”  

The GAIN Summit, organized by the Saudi Data and AI Authority, takes place from Sept. 10-12 at the King Abdulaziz International Conference Center, under the patronage of Crown Prince Mohammed bin Salman.