Google leverages AI to combat threats in digital advertising

As the world increasingly relies on digital platforms for information and commerce, Google is doubling down on efforts to maintain a safe and trustworthy online advertising ecosystem. In its recently released 2023 Ads Safety Report, the tech giant outlined how it is harnessing the power of artificial intelligence, particularly large language models (LLMs), to tackle emerging threats and bad actors.

“The key trend in 2023 was the impact of generative AI,” stated Duncan Lennox, VP and GM of Ads Privacy and Safety at Google. “This new technology introduced significant and exciting changes to the digital advertising industry, from performance optimisation to image editing.”

While acknowledging the challenges posed by generative AI, Lennox emphasised Google’s commitment to addressing them head-on. “Our teams are embracing this transformative technology, specifically Large Language Models (LLMs), so that we can better keep people safe online,” he said.

Traditional machine learning models have proven effective in detecting and blocking billions of bad ads before they reach users. However, these models often require extensive training on vast datasets. LLMs, on the other hand, can rapidly review and interpret high volumes of content while capturing nuances that traditional models may miss.

According to the report, LLMs have already enabled larger-scale and more precise enforcement decisions on complex policies, such as those targeting unreliable financial claims and get-rich-quick schemes. “LLMs are more capable of quickly recognizing new trends in financial services, identifying the patterns of bad actors who are abusing those trends and distinguishing a legitimate business from a get-rich-quick scam,” the report stated.
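The report does not describe how these LLM-based checks are built, but the general pattern it alludes to, prompting a language model to assess ad copy against a written policy, can be illustrated with a short sketch. Everything below is hypothetical: the policy wording, the classify_ad and call_llm functions, and the stubbed model response are illustrative stand-ins, not Google's actual systems or APIs.

```python
# Hypothetical sketch: using an LLM as a policy classifier for ad text.
# Nothing here reflects Google's internal tooling; call_llm() is a stub
# standing in for whatever model endpoint a real system would use.

POLICY = (
    "Unreliable financial claims: ads must not promise guaranteed returns, "
    "'get rich quick' outcomes, or risk-free investment profits."
)

def build_prompt(ad_text: str) -> str:
    """Combine the written policy and the ad copy into one classification prompt."""
    return (
        f"Policy:\n{POLICY}\n\n"
        f"Ad text:\n{ad_text}\n\n"
        "Does this ad violate the policy? Answer VIOLATION or OK, "
        "followed by a one-sentence reason."
    )

def call_llm(prompt: str) -> str:
    """Stub for an LLM call. A real system would send the prompt to a model
    endpoint; here we return a canned answer so the sketch runs standalone."""
    return "VIOLATION: the ad promises guaranteed, risk-free returns."

def classify_ad(ad_text: str) -> dict:
    """Return a simple enforcement decision for one piece of ad text."""
    verdict = call_llm(build_prompt(ad_text))
    return {
        "ad_text": ad_text,
        "violation": verdict.upper().startswith("VIOLATION"),
        "reason": verdict,
    }

if __name__ == "__main__":
    print(classify_ad("Double your money in 7 days, guaranteed, zero risk!"))
```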

In its ongoing battle against fraud and scams, Google introduced the Limited Ads Serving policy in November 2023. This policy aims to protect users by limiting the reach of advertisers with whom Google is less familiar until they establish a track record of good behaviour.

The report also highlighted Google’s rapid response to a targeted campaign that used the likenesses of public figures, often via deepfakes, to scam users. A dedicated team was formed to pinpoint patterns in bad actors’ behaviour, train enforcement models, and update misrepresentation policies to better enable account suspensions.

Overall, in 2023, Google blocked or removed 206.5 million advertisements for violating its misrepresentation policy, 273.4 million for violating its financial services policy, and over 1 billion for abusing the ad network, including promoting malware.

As the fight against scam ads intensifies, Google is collaborating with organisations like the Global Anti-Scam Alliance and Stop Scams UK to facilitate information sharing and protect consumers worldwide.

The report also emphasised Google’s efforts to ensure the integrity of election ads, verifying more than 5,000 new election advertisers and removing over 7.3 million election ads from unverified sources in 2023.

Looking ahead, Google recognises the need for continuous adaptation, stating, “Though we don’t yet know what the rest of 2024 has in store for us, we are confident that our investments in policy, detection, and enforcement will prepare us for any challenges ahead.”
