Google agreed to pay millions for California news. Journalists call it a bad deal

Google logo is seen at Google’s Bay View campus in Mountain View, California. (AFP)
Updated 23 August 2024

SACRAMENTO, California: Google will soon give California millions of dollars to help pay for local journalism jobs in a first-in-the-nation deal, but journalists and other media industry experts are calling it a disappointing agreement that mostly benefits the tech giant.
The agreement, which was hashed out behind closed doors and announced this week, will direct tens of millions of public and private dollars toward keeping local news organizations afloat. Critics say it is a textbook political maneuver by tech giants to avoid a fee under what could have been groundbreaking legislation: in exchange for Google’s financial commitment, California lawmakers agreed to kill a bill that would have required tech companies to support the news outlets they profit from.
By shelving the bill, the state effectively gave up on an avenue that could have required Google and social media platforms to make ongoing payments to publishers for linking news content, said Victor Pickard, professor of media policy and political economy at the University of Pennsylvania. California also left behind a much bigger amount of funding that could have been secured under the legislation, he said.
“Google got off easy,” Pickard said.
Google said the deal will help both journalism and the artificial intelligence sector in California.
“This public-private partnership builds on our long history of working with journalism and the local news ecosystem in our home state, while developing a national center of excellence on AI policy,” Kent Walker, president of global affairs and chief legal officer for Google’s parent company Alphabet, said in a statement.
State governments across the US have been working to help boost struggling news organizations. The US newspaper industry has been in a long decline, with traditional business models collapsing and advertising revenues drying up in the digital era.
As news organizations have moved from primarily print to mostly digital, they have increasingly relied on Google and Facebook to distribute their content. While publishers saw their advertising revenues nosedive over the last few decades, Google’s search engine has become the hub of a digital advertising empire that generates more than $200 billion annually.
The Los Angeles Times was losing up to $40 million a year, the newspaper’s owner said in justifying a layoff of more than 100 people earlier this year.
More than 2,500 newspapers have closed since 2005, and about 200 counties across the US do not have any local news outlets, according to a report from Northwestern University’s Medill School of Journalism.
California and New Mexico are funding local news fellowship programs. New York this year became the first state to offer a tax credit program for news outlets to hire and retain journalists. Illinois is considering a bill similar to the one that died in California.
Here’s a closer look into the deal California made with Google this week:
What does the deal entail?
The deal, totaling $250 million, will fund two efforts: journalism initiatives and a new AI research program. The agreement guarantees funding for only five years.
Roughly $110 million will come from Google and $70 million from the state budget to boost journalism jobs. The fund will be managed by UC Berkeley’s Graduate School of Journalism. Google will also kick in $70 million to fund the AI research program, which would build tools to help solve “real world problems,” said Assemblymember Buffy Wicks, who brokered the deal.
The deal is not a tax, which is a stark departure from a bill Wicks authored that would have imposed a “link tax” requiring companies like Google, Facebook and Microsoft to pay a certain percentage of advertising revenue to media companies for linking to their content. The bill was modelled after a policy passed in Canada that requires Google to pay roughly $74 million per year to fund journalism.
Why are tech companies agreeing to this now?
Tech companies spent the last two years fighting Wicks’ bill, launching expensive opposition campaigns and running ads attacking the legislation. Google threatened in April to temporarily block news websites from some California users’ search results. The bill had continued to advance with bipartisan support — until this week.
Wicks told The Associated Press on Thursday that she saw no path forward for her bill and that the funding secured through the deal “is better than zero.”
“This represents ‘politics is the art of the possible,’” she said.
Industry experts see the deal as a playbook move Google has used across the world to avoid regulations.
“Google cannot exit from news because they need it,” said Anya Schiffrin, a Columbia University professor who studies global media and co-authored a working paper on how much Google and Meta owe news publishers. “So what they are doing is using a whole lot of different tactics to kill bills that will require them to compensate publishers fairly.”
She estimates that Google owes $1.4 billion per year to California publishers.
Why do journalists and labor unions oppose the agreement?
The Media Guild of the West, a union representing journalists in Southern California, Nevada and Texas, said journalists were locked out of the conversation. The union was a champion of Wicks’ bill but wasn’t included in the negotiations with Google.
“The future of journalism should not be decided in backroom deals,” a letter by the union sent to lawmakers reads. “The Legislature embarked on an effort to regulate monopolies and failed terribly. Now we question whether the state has done more harm than good.”
The agreement results in a much smaller amount of funding compared to what Google gives to newsrooms in Canada and goes against the goal to rebalance Google’s dominance over local news organizations, according to a letter from the union to Wicks earlier this week.
Others also questioned why the deal included funding to build new AI tools, which they see as another way for tech companies to eventually replace journalists. Wicks’ original bill did not include AI provisions.
The deal has the support of some journalism groups, including the California News Publishers Association, Local Independent Online News Publishers and California Black Media.
What’s next?
The agreement is scheduled to take effect next year, starting with $100 million to kickstart the efforts.
Wicks said details of the agreement are still being ironed out. California Gov. Gavin Newsom has promised to include the journalism funding in his January budget, Wicks said, but concerns from other Democratic leaders could throw a wrench in the plan.


Arabic language AI models will improve output of developers in region, says executive

  • Saudi-developed ALLaM model will be hosted on Microsoft’s Azure platform

RIYADH: ALLaM, the Saudi-developed Arabic large language model, will boost the regional capabilities of artificial intelligence and improve productivity for app developers, according to a Microsoft executive.

His comments came after the announcement that the Saudi-developed ALLaM would be hosted on Microsoft’s Azure platform.

“For Arab developers and people who are developing applications in the Arabic(-speaking) world, there will be a fidelity and an improvement of the operational output that would not come from using some of the other language models,” Anthony Cook, deputy general counsel at Microsoft, told Arab News on the sidelines of the Global AI Summit in Riyadh on Wednesday.

Localized language models like ALLaM are “really the way to release the opportunity of AI much more broadly,” Cook explained.

“I think one of the things we’re focused on as a company is making sure that there is a range of models that are available on the Azure platform that really then meet the different social and business opportunities that exist.”

ALLaM was developed by the Saudi Data and Artificial Intelligence Authority with the intention of enhancing Arabic language AI services and inspiring innovation within the field across Saudi Arabia and internationally.

ALLaM secured first place in its category on the Arabic Massive Multitask Language Understanding benchmark, a standardized test for assessing AI performance.

The language model was developed within the National Center for AI and is built and trained on Microsoft Azure’s robust infrastructure.

The decision to have ALLaM available on Azure emphasizes its advanced capabilities in understanding and generating Arabic content across multiple channels, according to the announcement.

Cook went on to describe the “tremendous work that was put into developing the ALLaM large language model,” saying that “it will have a fidelity that will enable services to be delivered and applications to be built leveraging the large language model, which we’re very excited about.”

Dr. Mishari Al-Mishari, deputy director of SDAIA, said in a statement: “ALLaM represents a significant milestone in our journey towards AI excellence.

“With the general availability on Azure, we are not only expanding access to this powerful language model and advancing AI innovation, but also ensuring that the Arabic language and culture are deeply embedded in this technological evolution.

“Our collaboration with Microsoft marks a significant step forward in our journey to empower government institutions in the Kingdom to effectively leverage the latest advancements in generative AI to enhance public services and improve the quality of life for all.”

Turki Badhris, president of Microsoft Arabia, said that this is a landmark moment in the region and that they are “thrilled to be working alongside our partners at SDAIA to provide a robust platform that supports the development and deployment of advanced AI models tailored to the Arabic language and culture.

“Together, we are paving the way for a new era of AI advancements, collaborations and empowerment in the Kingdom and beyond.”

Badhris also said the AI transformation will help people, nongovernmental organizations, and businesses in all industries to unlock their full potential.

The collaboration between SDAIA and Microsoft also includes the establishment of a center of excellence to expedite the development of AI solutions and the launch of a Microsoft AI academy aimed at harnessing national talent and broadening expertise in the AI sector.

“I think the part that the Kingdom is doing very well is that marriage of aspiration, having a body that can actually orchestrate and implement that across government, and then at the same time, learning from what is going on elsewhere, but adapting that very specifically to what is most important and most relevant in Saudi,” Cook said.

“When I look at AI, one of the parts that is really important is to build confidence that the technology is being used in responsible ways.

“That’s something at Microsoft that we’ve focused on really from the very start of AI and have accelerated our work as generative AI became so prevalent.

“The Kingdom also has done a great job in this. You know, they’ve set out, through SDAIA’s work, the work around ethical principles.

“And the ethical principles underline the way in which the true ethical considerations can be then actually implemented into the practices that are responsible for the development of the technology.”

The GAIN Summit, currently in its third edition, is running from Sept. 10-12 at Riyadh’s King Abdulaziz International Conference Center.


Sudanese rebel fighters post war crime videos on social media

Updated 11 September 2024

  • Videos show Rapid Support Forces members glorifying destruction, torturing captives
  • Footage could provide evidence for future accountability, says expert

LONDON: Rebel fighters from the Sudanese Rapid Support Forces have posted videos on social media that document their involvement in war crimes, according to a recent report by UK-based newspaper The Guardian.

The footage, which has been verified by the Centre for Information Resilience, an independent non-profit organization, shows fighters destroying property, burning homes and torturing prisoners.

The films could serve as key evidence in potential war crime prosecutions by international courts.

Alexa Koenig, co-developer of the Berkeley Protocol, which sets standards for the use of social media in war crime investigations, told The Guardian: “It’s someone condemning themselves. It’s not the same as a guilty plea but in some ways, it is a big piece of the puzzle that war crimes investigators have to put together.”

The RSF has been locked in conflict with the Sudanese military since April 2023, bringing the country to the brink of collapse.

Some estimates suggest there have been up to 150,000 civilian casualties, with 12 million people displaced. This would make Sudan the country with the highest internal displacement rate in the world, according to the UN.

In Darfur’s El Geneina, more than 10,000 people — mostly Masalit — were killed in 2023 during intense fighting. Mass graves, allegedly dug by RSF fighters, were discovered by a UN investigation.

One video posted on X by a pro-RSF account showed a fighter in front of the Masalit sultan’s house declaring: “There are no more Masalit … Arabs only.”

Other footage features fighters walking through streets lined with bodies, which they call “roadblocks,” and scenes of captives being abused and mocked. Some even took selfies with their victims.

The videos offer rare glimpses into the atrocities happening in Sudan, a country largely inaccessible to journalists and NGOs.

In August, Human Rights Watch accused both sides in Sudan’s ongoing conflict of committing war crimes, including summary executions and torture, after analyzing similar social media content.


Australia considering banning children from using social media

Updated 11 September 2024

  • Australia is the latest country to take action against these platforms
  • Experts voiced concerns the ban could fuel underground online activity

LONDON: The Australian government announced Tuesday it is considering banning children from using social media, in a move aimed at protecting young people from harmful online content.

The legislation is expected to pass by the end of the year, though the exact age limit has yet to be determined; Prime Minister Anthony Albanese suggested it could be between 14 and 16 years.

“I want to see kids off their devices and onto the footy fields and the swimming pools and the tennis courts,” Albanese told the Australian Broadcasting Corp.

“We want them to have real experiences with real people because we know that social media is causing social harm,” he added, calling the impact a “scourge.”

Several countries in the Asia-Pacific region, including Malaysia, Singapore, and Pakistan, have recently taken action against social media platforms, citing concerns over addictive behavior, bullying, gambling, and cybercrime.

Introducing this legislation has been a key priority for the current Australian government. Albanese highlighted the need for a reliable age verification system before a final decision is made.

The proposal has sparked debate, with digital rights advocates warning that such restrictions might push younger users toward more dangerous, hidden online activity.

Experts voiced concerns during a Parliamentary hearing that the ban could inadvertently harm children by encouraging them to conceal their internet usage.

Meta, the parent company of Facebook and Instagram, which currently enforces a self-imposed minimum age of 13, said it aims to empower young people to benefit from its platforms while providing parents with the necessary tools to support them, rather than “just cutting off access.”


Rapid advancement in AI requires comprehensive reevaluation, careful use, say panelists at GAIN Summit

Panelists at GAIN Summit discuss the transformative impact of AI on education. (Supplied)
Updated 10 September 2024

  • KAUST’s president speaks of ‘amazing young talents’ 

RIYADH: The rapid advancement in artificial intelligence requires a comprehensive reevaluation of traditional educational practices and methodologies and careful use of the technology, said panelists at the Global AI Summit, also known as GAIN, which opened in Riyadh on Tuesday.

During the session “Paper Overdue: Rethinking Schooling for Gen AI,” the panelists delved into the transformative impact of AI on education, from automated essay generation to personalized learning algorithms. They encouraged a rethink of the essence of teaching and learning, arguing for an education system that integrates seamlessly with advances in AI.

Edward Byrne, president of King Abdullah University of Science and Technology, said the next decade would be interesting with advanced AI enterprises.

He added: “We now have a program to individualize assessment and, as a result, we have amazing young talents. AI will revolutionize the education system.”

Byrne, however, advised proceeding with caution, advocating the need for a “carefully designed AI system” while stressing the “careful use” of AI for “assessment.”

Alain Le Couedic, senior partner at venture firm Artificial Intelligence Quartermaster, echoed the sentiment, saying: “AI should be used carefully in learning and assessment. It’s good when fairly used to gain knowledge and skills.”

Whether at school or university, students were embracing AI, said David Yarowsky, professor of computer science at Johns Hopkins University.

He added: “So, careful use is important as it’s important to enhance skills and not just use AI to leave traditional methods and be less productive. It (AI) should ensure comprehensive evaluation and fair assessment.”

Manal Abdullah Alohali, dean of the College of Computer and Information Science at Princess Nourah bint Abdulrahman University, underlined that AI was a necessity and not a luxury. 

She said the university had recently introduced programs to leverage AI and was planning to launch a “massive AI program next year.”

She explained that the university encouraged its students to “use AI in an ethical way” and “critically examine themselves” while doing so.

In another session, titled “Elevating Spiritual Intelligence and Personal Well-being,” Deepak Chopra, founder of the Chopra Foundation and Chopra Global, explored how AI could revolutionize well-being and open new horizons for personal development.

He said AI had the potential to help create a more peaceful, just, sustainable, healthy, and joyful world as it could provide teachings from different schools of thought and stimulate ethical and moral values.

While AI could not duplicate human intelligence, it could vastly enhance personal and spiritual growth and intelligence through technologies such as augmented reality, virtual reality, and the metaverse, he added.

The GAIN Summit, which is organized by the Saudi Data and AI Authority, is taking place until Sept. 12 at the King Abdulaziz International Conference Center, under the patronage of Crown Prince Mohammed bin Salman.

The summit is focusing on one of today’s most pressing global issues — AI technology — and aims to find solutions that maximize the potential of these transformative technologies for the benefit of humanity.


Older generations more likely to fall for AI-generated fake news, Global AI Summit hears

Updated 10 September 2024

  • Semafor co-founder Ben Smith says he is ‘much more worried about Gen X and older people’ falling for misinformation than younger generations

RIYADH: Media experts are concerned that older generations are more susceptible to AI-generated deepfakes and misinformation than younger people, the audience at the Global AI Summit in Riyadh heard on Tuesday.

“I am so much more worried about Gen X (those born between 1965 and 1980) and older people,” Semafor co-founder and editor-in-chief Ben Smith said during a panel titled “AI and the Future of Media: Threats and Opportunities.”

He added: “I think that young people, for better and for worse, really have learned to be skeptical, and to immediately be skeptical, of anything they’re presented with — of images, of videos, of claims — and to try to figure out where they’re getting it.”

Smith was joined during the discussion, moderated by Arab News Editor-in-Chief Faisal Abbas, by the vice president and editor-in-chief of CNN Arabic, Caroline Faraj, and Anthony Nakache, the managing director of Google MENA.

They said that AI, as a tool, is too important not to be properly regulated. In particular they highlighted its potential for verification of facts and content creation in the media industry, but said educating people about its uses is crucial.

“We have always been looking at how we can build AI in a very safe and responsible way,” said Nakache, who added that Google is working with governments and agencies to figure out the best way to go about this.

The integration of AI into journalism requires full transparency, the panelists agreed. Faraj said the technology offers a multifunctional tool that can be used for several purposes, including data verification, transcription and translation. But to ensure a report contains the full and balanced truth, a journalist will still always be needed to confirm the facts using their professional judgment.

The panelists also agreed that AI would not take important jobs from humans in the industry, as it is designed to complete repetitive manual tasks, freeing up more of a journalist’s time to interact with people and their environment.

“Are you really going to use AI to go to a war zone and to the front line to cover stories? Of course not,” said Faraj.

Smith, who has written a book on news sites and viral content, warned about the unethical ways in which some media outlets knowingly use AI-generated content because they “get addicted” to the traffic such content can generate.

All of the panelists said that educating people is the key to finding the best way forward regarding the role of AI in the media. Nakache said Google has so far trained 20,000 journalists in the region to better equip them with knowledge of how to use digital tools, and funds organizations in the region making innovative use of technology.

“It is a collective effort and we are taking our responsibility,” he added.

The panelists also highlighted some of the methods that can be used to combat confusion and prevent misinformation related to the use of AI, including the use of digital watermarks and programs that can analyze content and inform users if it was AI-generated.

Asked how traditional media organizations can best teach their audiences to navigate the flood of deepfakes and misinformation, while still delivering the kind of content they want, Faraj said: “You listen to them. We listen to our audience and we hear exactly what they wanted to do and how we can enable them.

“We enable them and equip them with the knowledge. Sometimes we offer training, sometimes we offer listening; but listening is a must before taking any action.”