Google claims AI helped block 99% of malicious ads before release

The issue is particularly relevant in the Middle East, where fast-growing digital markets and heavy reliance on mobile advertising make users and businesses vulnerable to scam campaigns that can spread quickly across borders. (Supplied)
  • Company says it used combination of human review, AI to prevent fraudulent, policy-violating ads from being seen by users
  • Gemini-powered tools part of efforts to detect fraud, enforce advertising rules at scale

LONDON: Google said on Thursday that it blocked 99 percent of malicious ads before they were seen by users, as it steps up the use of Gemini-powered tools to detect fraud and enforce advertising rules at scale.

The company said in its 2025 Ads Safety Report that it used a combination of human review and artificial intelligence to prevent fraudulent and policy-violating ads from reaching its platforms.

The tech giant said: “Of the 8.3 billion ads we blocked or removed, we stopped over 99 percent before they were ever seen by anyone.

“Gemini also helped us act on four times as many user reports as the year before, helping us address remaining threats faster.”

Of the ads removed, 15.5 percent, or 1.29 billion, were tied to abuse of the ad network. Other removals included 755 million ads for personalization violations, 646.7 million for legal compliance failures and 421.5 million for misrepresentation, a category that covers deceptive or misleading ads.

For users, the threat can be immediate. A fake investment offer, a spoofed brand campaign or a fraudulent shopping promotion can reach people in seconds, often before they have any reason to doubt what they are seeing.

Google said its AI systems were designed to catch those ads before they appeared, rather than after the damage had already been done.

The company said it had suspended nearly 24.9 million advertiser accounts over the past year and restricted or blocked 480 million web pages, while taking action against 245,000 publisher sites.

Most publisher policy violations — about 85 percent — involved sexual content, underscoring the challenge of policing harmful material online as generative AI makes deceptive content easier to produce.

Keerat Sharma, vice president and general manager of ads privacy and safety at Google, said: “Bad actors are using generative AI to create deceptive ads at scale, and Gemini helps us detect and block them in real time.

“Our teams have long used advanced AI to identify and stop scammers, and Gemini takes that work even further. Our models analyze hundreds of billions of signals — including account age, behavioral cues and campaign patterns — to stop threats before they reach people.”

Google said its newer models go beyond keyword-based systems by better understanding intent, allowing them to flag malicious content even when it is designed to evade detection.

Human review remained central, the company added, even as AI now handled the bulk of enforcement.

Google updated its policies 35 times in 2025 as it sought to stay ahead of scammers and protect legitimate advertisers on its platforms.

The issue is particularly relevant in the Middle East, where fast-growing digital markets and heavy reliance on mobile advertising make users and businesses vulnerable to scam campaigns that can spread quickly across borders.

A cross-border scam in 2021 spread across 16 Arabic-speaking countries, including Saudi Arabia, using fake prize draws, celebrity bait and bogus job offers, with investigators finding more than 4,300 malicious web pages behind the campaign.

More recently, cybersecurity firm Group-IB said it had uncovered a coordinated wave of fake online job advertisements that promised easy income for simple online tasks, targeting countries across the Arab region.

Google said embedding Gemini into its ad-safety systems had led to faster handling of user feedback and an 80 percent reduction in incorrect advertiser suspensions, and that it planned to expand the capability to more ad formats in 2026.