Meta says Iranian group tried to target Trump, Biden officials’ WhatsApp accounts

Republican presidential nominee former President Donald Trump is seen through a window in a door before speaking at a campaign event at Il Toro E La Capra, Friday, Aug. 23, 2024, in Las Vegas. (AP)
  • Meta attributed the activity to APT42, a hacking group widely believed to be associated with an intelligence division inside Iran’s military that is known for placing surveillance software on the mobile phones of its victims

WASHINGTON: Meta said on Friday it had identified possible hacking attempts on the WhatsApp accounts of US officials from the administrations of both President Joe Biden and former President Donald Trump, blaming the same Iranian hacker group revealed earlier this month to have compromised the Trump campaign.
In a blog post, the parent company of Facebook, Instagram and WhatsApp described the attempt as a “small cluster of likely social engineering activity on WhatsApp” involving accounts posing as technical support for AOL, Google, Yahoo and Microsoft.
Meta said it blocked the accounts after users reported the activity as suspicious, and that it had seen no evidence the targeted WhatsApp accounts were compromised.
Meta attributed the activity to APT42, a hacking group widely believed to be associated with an intelligence division inside Iran’s military that is known for placing surveillance software on the mobile phones of its victims. The software enables the team to record calls, steal text messages and silently turn on cameras and microphones, according to researchers who follow the group.
It linked the group’s activity to efforts to breach US presidential campaigns reported by Microsoft and Google earlier this month, ahead of the US presidential election in November.
The company’s blog post did not name the individuals targeted, saying only that the hackers “appeared to have focused on political and diplomatic officials, business and other public figures, including some associated with administrations of President Biden and former President Trump.”
Those figures were based in Israel, the Palestinian territories, Iran, the United States and the United Kingdom, it added.

 


The future tech helping to uncover hidden secrets of Saudi Arabia’s past

  • Researchers at KAUST are developing AI models to help archaeologists and researchers in many other academic fields

RIYADH: Far from fearing a future powered by AI, researchers at King Abdullah University of Science and Technology (KAUST) are using it to uncover long-hidden secrets about Saudi Arabia’s past.

Prof. Bernard Ghanem, a specialist in computer vision and machine learning, said that in particular, AI is helping to discover archaeological sites that have yet to be unearthed.

“AI has applications in every part of our lives: analyzing the present, the future as well as the past,” Ghanem told Arab News.

His team at KAUST has trained AI models on satellite data and images of known historical sites to help identify undiscovered sites across the country, he said. The findings have fueled further archaeological research and are helping to preserve the Kingdom’s rich cultural heritage.

However, archaeology is just one of the many areas of study in which Ghanem’s team is exploring the potential benefits of AI technology.

At the Image and Video Understanding Lab, for example, researchers are focusing on four main applications of AI, mostly rooted in machine learning, a branch of AI in which systems learn from existing data, using statistics and algorithms to solve problems.

The first involves building machine-learning models specifically for use with video to harness the popularity and power of streaming.

“Video is the biggest big data out there; more than 80 percent of the internet traffic that we see is because of video,” said Ghanem, whose team is developing tools to analyze, retrieve, and even create videos, thereby leveraging the ubiquity of video in new AI applications.

The second application, which uses machine learning and deep learning to aid automation, is investigating the ways in which two-dimensional simulation data can be translated into the 3D world, with potential applications in gaming, robotics and other real-world scenarios.

“How do you, for example, play a game in the simulated world and then have that … work in the real world?” Ghanem said.

The third is exploring the foundations of machine learning, with a focus on identifying weaknesses in generative AI models and finding ways to improve them and prevent failures.

Ghanem compared this process to building immunity, whereby the AI models are deliberately “broken” to expose their vulnerabilities so that these can be addressed and the models strengthened.

The fourth application involves the use of AI for science, specifically efforts to advance chemical research.

Ghanem said his team is developing AI models able to act as virtual chemistry assistants by predicting the properties of molecules and perhaps discovering new compounds. Such innovations, he added, could play a critical role in the study and research of topics such as catalysis and direct air capture, thereby boosting efforts to combat climate change.

Ghanem also highlighted the environmental potential of AI, and the new Center of Excellence for Generative AI at KAUST, which he chairs. The center, which is due to open on Sunday, will explore four key pillars of research relating to: health and wellness; sustainability; energy and industrial leadership; and future economies.

“That’s where we’re going to focus on GenAI methods for sustainability,” Ghanem said.


Company transparency in the spotlight at Global AI Summit 

  • ‘Decision-making points must remain human,’ says PwC executive

RIYADH: The importance of transparency and responsibility when using artificial intelligence came under scrutiny at the Global AI Summit.

Ali Hosseini, chief technology officer at PwC Middle East, told Arab News the consultancy had created “Responsible AI,” an approach to managing the risks associated with AI-based solutions.

The initiative gave customers a clear picture of how the company used their data, he said.

“We take this to customers and we actually share the experience in terms of how we’re using it internally. So there are a number of areas in terms of general education for the employees and (in) what kind of cases they can use AI and in what kind of context they can depend on the output,” he explained.

Hosseini said companies thinking of implementing AI must ensure “the empowerment of the employees, self-responsibility, and AI use.” 

“We give (employees) the right tools coming from the right kind of credible sources to use on the day-to-day automation of tasks or augmenting their knowledge,” he told Arab News, concluding the interview with a key takeaway.

“(There is) a level of self-responsibility that people need to basically take an education in order to use AI … We advise the organization to use (AI) as giving you a basically augmented decision, but not the full decision … The decision-making point is always the human, not AI.” 


In a panel discussion at the summit, Priya Nagpurkar, vice president of the Hybrid Cloud and AI Platform at IBM Research, said AI was created to “enhance and support human capacity, intelligence and expertise and not to replace it, and do so in a very transparent, explainable and responsible way.”

IBM has created watsonx.governance, an AI and data platform which monitors, directs and manages organizations’ AI activities.

IBM also creates factsheets, similar to a “nutrition label,” that document an AI model’s metadata across its lifecycle.

“These factsheets are a way of extracting the key facts that went into the data curation in that part of the lifecycle,” she explained. “(A) concrete example is, let’s say you are building an AI application to look at loan applications. The type of facts you want to know about the model you want to use are if there was bias in the data that went to training that model, was it evaluated? And was there a range of variation?”  

The GAIN Summit, organized by the Saudi Data and AI Authority, takes place from Sept. 10-12 at the King Abdulaziz International Conference Center, under the patronage of Crown Prince Mohammed bin Salman. 


Arabic language AI models will improve output of developers in region, says executive

  • Saudi-developed ALLaM model will be hosted on Microsoft’s Azure platform

RIYADH: ALLaM, an Arabic large language model, will boost the region’s artificial intelligence capabilities and improve productivity for app developers, according to a Microsoft executive.

His comments came after the announcement that the Saudi-developed ALLaM would be hosted on Microsoft’s Azure platform.

“For Arab developers and people who are developing applications in the Arabic(-speaking) world, there will be a fidelity and an improvement of the operational output that would not come from using some of the other language models,” Anthony Cook, deputy general counsel at Microsoft, told Arab News on the sidelines of the Global AI Summit in Riyadh on Wednesday.

Localized language models like ALLaM are “really the way to release the opportunity of AI much more broadly,” Cook explained.

“I think one of the things we’re focused on as a company is making sure that there is a range of models that are available on the Azure platform that really then meet the different social and business opportunities that exist.”

ALLaM was developed by the Saudi Data and Artificial Intelligence Authority with the intention of enhancing Arabic language AI services and inspiring innovation within the field across Saudi Arabia and internationally.

ALLaM secured first place in its category on the Arabic Massive Multitask Language Understanding benchmark, a standardized test used to assess AI performance.

The language model was developed within the National Center for AI and is built and trained on Microsoft Azure’s robust infrastructure.

The decision to have ALLaM available on Azure emphasizes its advanced capabilities in understanding and generating Arabic content across multiple channels, according to the announcement.

Cook went on to describe the “tremendous work that was put into developing the ALLaM large language model,” saying that “it will have a fidelity that will enable services to be delivered and applications to be built leveraging the large language model, which we’re very excited about.”

Dr. Mishari Al-Mishari, deputy director of SDAIA, said in a statement: “ALLaM represents a significant milestone in our journey towards AI excellence.

“With the general availability on Azure, we are not only expanding access to this powerful language model and advancing AI innovation, but also ensuring that the Arabic language and culture are deeply embedded in this technological evolution.

“Our collaboration with Microsoft marks a significant step forward in our journey to empower government institutions in the Kingdom to effectively leverage the latest advancements in generative AI to enhance public services and improve the quality of life for all.”

Turki Badhris, president of Microsoft Arabia, said that this is a landmark moment in the region and that they are “thrilled to be working alongside our partners at SDAIA to provide a robust platform that supports the development and deployment of advanced AI models tailored to the Arabic language and culture.

“Together, we are paving the way for a new era of AI advancements, collaborations and empowerment in the Kingdom and beyond.”

Badhris also said the AI transformation will help people, nongovernmental organizations, and businesses in all industries to unlock their full potential.

The collaboration between SDAIA and Microsoft also includes the establishment of a center of excellence to expedite the development of AI solutions and the launch of a Microsoft AI academy aimed at harnessing national talent and broadening expertise in the AI sector.

“I think the part that the Kingdom is doing very well is that marriage of aspiration, having a body that can actually orchestrate and implement that across government, and then at the same time, learning from what is going on elsewhere, but adapting that very specifically to what is most important and most relevant in Saudi,” Cook said.

“When I look at AI, one of the parts that is really important is to build confidence that the technology is being used in responsible ways.

“That’s something at Microsoft that we’ve focused on really from the very start of AI and have accelerated our work as generative AI became so prevalent.

“The Kingdom also has done a great job in this. You know, they’ve set out, through SDAIA’s work, the work around ethical principles.

“And the ethical principles underline the way in which the true ethical considerations can be then actually implemented into the practices that are responsible for the development of the technology.”

The GAIN Summit, currently in its third edition, is running from Sept. 10-12 at Riyadh’s King Abdulaziz International Conference Center.


Sudanese rebel fighters post war crime videos on social media

  • Videos show Rapid Support Forces members glorifying destruction, torturing captives
  • Footage could provide evidence for future accountability, says expert

LONDON: Rebel fighters from the Sudanese Rapid Support Forces have posted videos on social media that document their involvement in war crimes, according to a recent report by UK-based newspaper The Guardian.

The footage, which has been verified by the Centre for Information Resilience, an independent non-profit organization, shows fighters destroying properties, burning homes and torturing prisoners.

The films could serve as key evidence in potential war crime prosecutions by international courts.

Alexa Koenig, co-developer of the Berkeley Protocol, which sets standards for the use of social media in war crime investigations, told The Guardian: “It’s someone condemning themselves. It’s not the same as a guilty plea but in some ways, it is a big piece of the puzzle that war crimes investigators have to put together.”

The RSF has been locked in conflict with the Sudanese military since April 2023, bringing the country to the brink of collapse.

Some estimates suggest there have been up to 150,000 civilian casualties, with 12 million people displaced. This would make Sudan the country with the highest internal displacement rate in the world, according to the UN.

In Darfur’s El Geneina, more than 10,000 people — mostly Masalit — were killed in 2023 during intense fighting. Mass graves, allegedly dug by RSF fighters, were discovered by a UN investigation.

One video posted on X by a pro-RSF account showed a fighter in front of the Masalit sultan’s house declaring: “There are no more Masalit … Arabs only.”

Other footage features fighters walking through streets lined with bodies, which they call “roadblocks,” and scenes of captives being abused and mocked. Some even took selfies with their victims.

The videos offer rare glimpses into the atrocities happening in Sudan, a country largely inaccessible to journalists and NGOs.

In August, Human Rights Watch accused both sides in Sudan’s ongoing conflict of committing war crimes, including summary executions and torture, after analyzing similar social media content.


Australia considering banning children from using social media

  • Australia is the latest country to take action against social media platforms
  • Experts voiced concerns ban could fuel underground online activity

LONDON: The Australian government announced Tuesday it is considering banning children from using social media, in a move aimed at protecting young people from harmful online content.

The legislation, expected to pass by the end of the year, does not yet specify an exact age limit, though Prime Minister Anthony Albanese suggested it could be between 14 and 16 years.

“I want to see kids off their devices and onto the footy fields and the swimming pools and the tennis courts,” Albanese told the Australian Broadcasting Corp.

“We want them to have real experiences with real people because we know that social media is causing social harm,” he added, calling the impact a “scourge.”

Several countries in the Asia-Pacific region, including Malaysia, Singapore, and Pakistan, have recently taken action against social media platforms, citing concerns over addictive behavior, bullying, gambling, and cybercrime.

Introducing this legislation has been a key priority for the current Australian government. Albanese highlighted the need for a reliable age verification system before a final decision is made.

The proposal has sparked debate, with digital rights advocates warning that such restrictions might push younger users toward more dangerous, hidden online activity.

Experts voiced concerns during a Parliamentary hearing that the ban could inadvertently harm children by encouraging them to conceal their internet usage.

Meta, the parent company of Facebook and Instagram, which currently enforces a self-imposed minimum age of 13, said it aims to empower young people to benefit from its platforms while providing parents with the necessary tools to support them, rather than “just cutting off access.”