Elon Musk should face sanctions, arrest for inciting UK rioters, ex-Twitter chief says

Elon Musk purchased the X platform (formerly Twitter) in 2022. (File/Reuters)
Updated 13 August 2024
  • Musk said that “civil war is inevitable” in the UK

LONDON: Elon Musk should face “personal sanctions” and potentially even the threat of an “arrest warrant” if he is found to have encouraged public disorder on his social media platform X, according to a former Twitter executive.

Bruce Daisley, who previously served as Twitter’s vice president for Europe, the Middle East, and Africa, argued on Monday that it was unacceptable for the billionaire owner of X, formerly Twitter, to sow discord without facing personal consequences.

Daisley’s comments come in the aftermath of week-long far-right riots across England and Northern Ireland, during which asylum hotels and mosques were attacked, The Guardian reported on Monday.

The unrest was stirred up and inflamed by online posts, including one from Musk stating that “civil war is inevitable” in the UK — a remark that Justice Minister Heidi Alexander condemned as “unacceptable.”

Musk also criticized Prime Minister Keir Starmer, labeling him “two-tier Keir” and a “hypocrite” for his handling of policing, and shared, then deleted, a false post claiming Starmer planned to establish “detainment camps” in the Falkland Islands.

Daisley suggested that Starmer should “beef up” online safety laws and reconsider whether the media regulator, Ofcom, is adequately equipped to manage the rapid actions of individuals like Musk.

He argued that the threat of personal sanctions could be more effective than corporate fines, particularly in influencing the lifestyles of tech billionaires.

Daisley, who worked at Twitter from 2012 to 2020, described Musk as someone who “has taken on the aura of a teenager on the bus with no headphones, creating lots of noise,” The Guardian reported.

“Were Musk to continue stirring up unrest, an arrest warrant for him might produce fireworks from his fingertips, but as an international jet-setter it would have the effect of focusing his mind.

“Musk’s actions should be a wake-up call for Starmer’s government to quietly legislate to take back control of what we collectively agree is permissible on social media.”

He argued: “In my experience, that threat of personal sanction is much more effective on executives than the risk of corporate fines.

“The question we are presented with is whether we’re willing to allow a billionaire oligarch to camp off the UK coastline and take potshots at our society. The idea that a boycott — whether by high-profile users or advertisers — should be our only sanction is clearly not meaningful.”

Daisley also pointed out that under existing laws, Musk and other executives could be held criminally liable for their actions.

He called for Britain’s Online Safety Act 2023 to be strengthened as a matter of urgency and added: “Musk might force his angry tweets to the top of your timeline, but the will of a democratically elected government should mean more than the fury of a tech oligarch — even him.”

Following the fatal stabbing of three young girls at a Taylor Swift-themed holiday dance class in Southport last month, the UK government has urged social media platforms to act responsibly, accusing them of enabling the spread of false claims about the attacker being an asylum-seeker. Police are already increasingly targeting individuals suspected of using online posts to incite violence, The Guardian reported.


Arabic language AI models will improve output of developers in region, says executive

  • Saudi-developed ALLaM model will be hosted on Microsoft’s Azure platform

RIYADH: The Arabic large language model ALLaM will boost the regional capabilities of artificial intelligence and improve productivity for app developers, according to a Microsoft executive.

His comments came after the announcement that the Saudi-developed ALLaM would be hosted on Microsoft’s Azure platform.

“For Arab developers and people who are developing applications in the Arabic(-speaking) world, there will be a fidelity and an improvement of the operational output that would not come from using some of the other language models,” Anthony Cook, deputy general counsel at Microsoft, told Arab News on the sidelines of the Global AI Summit in Riyadh on Wednesday.

Localized language models like ALLaM are “really the way to release the opportunity of AI much more broadly,” Cook explained.

“I think one of the things we’re focused on as a company is making sure that there is a range of models that are available on the Azure platform that really then meet the different social and business opportunities that exist.”

ALLaM was developed by the Saudi Data and Artificial Intelligence Authority with the intention of enhancing Arabic language AI services and inspiring innovation within the field across Saudi Arabia and internationally.

On the Arabic Massive Multitask Language Understanding benchmark — a standardized test used to assess AI performance — ALLaM secured first place in its category.

The language model was developed within the National Center for AI and is built and trained on Microsoft Azure’s robust infrastructure.

The decision to have ALLaM available on Azure emphasizes its advanced capabilities in understanding and generating Arabic content across multiple channels, according to the announcement.

Cook went on to describe the “tremendous work that was put into developing the ALLaM large language model,” saying that “it will have a fidelity that will enable services to be delivered and applications to be built leveraging the large language model, which we’re very excited about.”

Dr. Mishari Al-Mishari, deputy director of SDAIA, said in a statement: “ALLaM represents a significant milestone in our journey towards AI excellence.

“With the general availability on Azure, we are not only expanding access to this powerful language model and advancing AI innovation, but also ensuring that the Arabic language and culture are deeply embedded in this technological evolution.

“Our collaboration with Microsoft marks a significant step forward in our journey to empower government institutions in the Kingdom to effectively leverage the latest advancements in generative AI to enhance public services and improve the quality of life for all.”

Turki Badhris, president of Microsoft Arabia, said that this is a landmark moment in the region and that they are “thrilled to be working alongside our partners at SDAIA to provide a robust platform that supports the development and deployment of advanced AI models tailored to the Arabic language and culture.

“Together, we are paving the way for a new era of AI advancements, collaborations and empowerment in the Kingdom and beyond.”

Badhris also said the AI transformation will help people, nongovernmental organizations, and businesses in all industries to unlock their full potential.

The collaboration between SDAIA and Microsoft also includes the establishment of a center of excellence to expedite the development of AI solutions and the launch of a Microsoft AI academy aimed at harnessing national talent and broadening expertise in the AI sector.

“I think the part that the Kingdom is doing very well is that marriage of aspiration, having a body that can actually orchestrate and implement that across government, and then at the same time, learning from what is going on elsewhere, but adapting that very specifically to what is most important and most relevant in Saudi,” Cook said.

“When I look at AI, one of the parts that is really important is to build confidence that the technology is being used in responsible ways.

“That’s something at Microsoft that we’ve focused on really from the very start of AI and have accelerated our work as generative AI became so prevalent.

“The Kingdom also has done a great job in this. You know, they’ve set out, through SDAIA’s work, the work around ethical principles.

“And the ethical principles underline the way in which the true ethical considerations can be then actually implemented into the practices that are responsible for the development of the technology.”

The GAIN Summit, currently in its third edition, is running from Sept. 10-12 at Riyadh’s King Abdulaziz International Conference Center.


Sudanese rebel fighters post war crime videos on social media

Updated 11 September 2024
  • Videos show Rapid Support Forces members glorifying destruction, torturing captives
  • Footage could provide evidence for future accountability, says expert

LONDON: Rebel fighters from the Sudanese Rapid Support Forces have posted videos on social media that document their involvement in war crimes, according to a recent report by UK-based newspaper The Guardian.

The footage, verified by the Centre for Information Resilience, an independent non-profit organization, shows fighters destroying properties, burning homes and torturing prisoners.

The films could serve as key evidence in potential war crime prosecutions by international courts.

Alexa Koenig, co-developer of the Berkeley Protocol, which sets standards for the use of social media in war crimes investigations, told The Guardian: “It’s someone condemning themselves. It’s not the same as a guilty plea but in some ways, it is a big piece of the puzzle that war crimes investigators have to put together.”

The RSF has been locked in conflict with the Sudanese military since April 2023, bringing the country to the brink of collapse.

Some estimates suggest there have been up to 150,000 civilian casualties, with 12 million people displaced. This would make Sudan the country with the highest internal displacement rate in the world, according to the UN.

In Darfur’s El Geneina, more than 10,000 people — mostly Masalit — were killed in 2023 during intense fighting. Mass graves, allegedly dug by RSF fighters, were discovered by a UN investigation.

One video posted on X by a pro-RSF account showed a fighter in front of the Masalit sultan’s house declaring: “There are no more Masalit … Arabs only.”

Other footage features fighters walking through streets lined with bodies, which they call “roadblocks,” and scenes of captives being abused and mocked. Some even took selfies with their victims.

The videos offer rare glimpses into the atrocities happening in Sudan, a region largely inaccessible to journalists and NGOs.

In August, Human Rights Watch accused both sides in Sudan’s ongoing conflict of committing war crimes, including summary executions and torture, after analyzing similar social media content.


Australia considering banning children from using social media

Updated 11 September 2024
  • Australia is the latest country to take action against social media platforms
  • Experts voiced concerns that a ban could fuel underground online activity

LONDON: The Australian government announced Tuesday it is considering banning children from using social media, in a move aimed at protecting young people from harmful online content.

The legislation is expected to pass by the end of the year, though the exact age limit has yet to be determined; Prime Minister Anthony Albanese suggested it could be between 14 and 16 years.

“I want to see kids off their devices and onto the footy fields and the swimming pools and the tennis courts,” Albanese told the Australian Broadcasting Corp.

“We want them to have real experiences with real people because we know that social media is causing social harm,” he added, calling the impact a “scourge.”

Several countries in the Asia-Pacific region, including Malaysia, Singapore, and Pakistan, have recently taken action against social media platforms, citing concerns over addictive behavior, bullying, gambling, and cybercrime.

Introducing this legislation has been a key priority for the current Australian government. Albanese highlighted the need for a reliable age verification system before a final decision is made.

The proposal has sparked debate, with digital rights advocates warning that such restrictions might push younger users toward more dangerous, hidden online activity.

Experts voiced concerns during a Parliamentary hearing that the ban could inadvertently harm children by encouraging them to conceal their internet usage.

Meta, the parent company of Facebook and Instagram, which currently enforces a self-imposed minimum age of 13, said it aims to empower young people to benefit from its platforms while providing parents with the necessary tools to support them, rather than “just cutting off access.”


Rapid advancement in AI requires comprehensive reevaluation, careful use, say panelists at GAIN Summit

Panelists at GAIN Summit discuss the transformative impact of AI on education. (Supplied)
Updated 10 September 2024
  • KAUST’s president speaks of ‘amazing young talents’ 

RIYADH: The rapid advancement in artificial intelligence requires a comprehensive reevaluation of traditional educational practices and methodologies and careful use of the technology, said panelists at the Global AI Summit, also known as GAIN, which opened in Riyadh on Tuesday.

During the session “Paper Overdue: Rethinking Schooling for Gen AI,” the panelists delved into the transformative impact of AI on education — from automated essay generation to personalized learning algorithms — and encouraged a rethink of the essence of teaching and learning, stressing the need for an education system that integrates seamlessly with AI advances.

Edward Byrne, president of King Abdullah University of Science and Technology, said the next decade would be an interesting one for advanced AI enterprises.

He added: “We now have a program to individualize assessment and, as a result, we have amazing young talents. AI will revolutionize the education system.”

Byrne, however, advised proceeding with caution, advocating the need for a “carefully designed AI system” while stressing the “careful use” of AI for “assessment.”

Alain Le Couedic, senior partner at venture firm Artificial Intelligence Quartermaster, echoed the sentiment, saying: “AI should be used carefully in learning and assessment. It’s good when fairly used to gain knowledge and skills.”

Whether at school or university, students were embracing AI, said David Yarowsky, professor of computer science at Johns Hopkins University.

He added: “So, careful use is important as it’s important to enhance skills and not just use AI to leave traditional methods and be less productive. It (AI) should ensure comprehensive evaluation and fair assessment.”

Manal Abdullah Alohali, dean of the College of Computer and Information Science at Princess Nourah bint Abdulrahman University, underlined that AI was a necessity and not a luxury. 

She said the university had recently introduced programs to leverage AI and was planning to launch a “massive AI program next year.”

She explained that the university encouraged its students to “use AI in an ethical way” and “critically examine themselves” while doing so.

In another session, titled “Elevating Spiritual Intelligence and Personal Well-being,” Deepak Chopra, founder of the Chopra Foundation and Chopra Global, explored how AI could revolutionize well-being and open new horizons for personal development.

He said AI had the potential to help create a more peaceful, just, sustainable, healthy, and joyful world as it could provide teachings from different schools of thought and stimulate ethical and moral values.

While AI could not duplicate human intelligence, it could vastly enhance personal and spiritual growth and intelligence through technologies such as augmented reality, virtual reality, and the metaverse, he added.

The GAIN Summit, which is organized by the Saudi Data and AI Authority, is taking place until Sept. 12 at the King Abdulaziz International Conference Center, under the patronage of Crown Prince Mohammed bin Salman.

The summit is focusing on one of today’s most pressing global issues — AI technology — and aims to find solutions that maximize the potential of these transformative technologies for the benefit of humanity.


Older generations more likely to fall for AI-generated fake news, Global AI Summit hears

Updated 10 September 2024
  • Semafor co-founder Ben Smith says he is ‘much more worried about Gen X and older people’ falling for misinformation than younger generations

RIYADH: Media experts are concerned that older generations are more susceptible to AI-generated deep fakes and misinformation than younger people, the audience at the Global AI Summit in Riyadh heard on Tuesday.

“I am so much more worried about Gen X (those born between 1965 and 1980) and older people,” Semafor co-founder and editor-in-chief Ben Smith said during a panel titled “AI and the Future of Media: Threats and Opportunities.”

He added: “I think that young people, for better and for worse, really have learned to be skeptical, and to immediately be skeptical, of anything they’re presented with — of images, of videos, of claims — and to try to figure out where they’re getting it.”

Smith was joined during the discussion, moderated by Arab News Editor-in-Chief Faisal Abbas, by the vice president and editor-in-chief of CNN Arabic, Caroline Faraj, and Anthony Nakache, the managing director of Google MENA.

They said that AI, as a tool, is too important not to be properly regulated. In particular they highlighted its potential for verification of facts and content creation in the media industry, but said educating people about its uses is crucial.

“We have always been looking at how we can build AI in a very safe and responsible way,” said Nakache, who added that Google is working with governments and agencies to figure out the best way to go about this.

The integration of AI into journalism requires full transparency, the panelists agreed. Faraj said the technology offers a multifunctional tool that can be used for several purposes, including data verification, transcription and translation. But to ensure a report contains the full and balanced truth, a journalist will still always be needed to confirm the facts using their professional judgment.

The panelists also agreed that AI would not take important jobs from humans in the industry, as it is designed to complete repetitive manual tasks, freeing up more of a journalist’s time to interact with people and their environment.

“Are you really going to use AI to go to a war zone and to the front line to cover stories? Of course not,” said Faraj.

Smith, who has written a book on news sites and viral content, warned about the unethical ways in which some media outlets knowingly use AI-generated content because they “get addicted” to the traffic such content can generate.

All of the panelists said that educating people is the key to finding the best way forward regarding the role of AI in the media. Nakache said Google has so far trained 20,000 journalists in the region to better equip them with knowledge of how to use digital tools, and funds organizations in the region making innovative use of technology.

“It is a collective effort and we are taking our responsibility,” he added.

The panelists also highlighted some of the methods that can be used to combat confusion and prevent misinformation related to the use of AI, including the use of digital watermarks and programs that can analyze content and inform users if it was AI-generated.

Asked how traditional media organizations can best teach their audiences how to navigate the flood of deep fakes and misinformation, while still delivering the kind of content they want, Faraj said: “You listen to them. We listen to our audience and we hear exactly what they wanted to do and how we can enable them.

“We enable them and equip them with the knowledge. Sometimes we offer training, sometimes we offer listening; but listening is a must before taking any action.”