AI use rising in influence campaigns online, but impact limited - US cyber firm
2023-08-19 01:23
By Zeba Siddiqui

SAN FRANCISCO (Reuters) - Google-owned U.S. cybersecurity firm Mandiant said on Thursday it had seen increasing use of artificial intelligence (AI) to conduct manipulative information campaigns online in recent years, though the technology's use in other digital intrusions had been limited so far.

Researchers at the Virginia-based company found "numerous instances" since 2019 in which AI-generated content, such as fabricated profile pictures, had been used in politically motivated online influence campaigns.

These included campaigns from groups aligned with the governments of Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador, the report said.

It comes amid a recent boom in generative AI models such as ChatGPT, which make it far easier to create convincing fake videos, images, text, and computer code. Security officials have warned of such models being used by cybercriminals.

Generative AI would enable groups with limited resources to produce higher quality content for influence campaigns at scale, Mandiant researchers said.

A pro-China information campaign named Dragonbridge, for instance, had expanded "exponentially" across 30 social platforms and 10 different languages since it first began by targeting pro-democracy protesters in Hong Kong in 2019, said Sandra Joyce, vice president at Mandiant Intelligence.

Yet the impact of such campaigns has been limited. "From an effectiveness standpoint, not a lot of wins there," she said. "They really haven't changed the course of the threat landscape just yet."

China has denied U.S. accusations of involvement in such influence campaigns in the past.

Mandiant, which helps public and private organisations respond to digital breaches, said it hadn't yet seen AI play a key role in threats from Russia, Iran, China, or North Korea. AI use for digital intrusions is expected to remain low in the near term, the researchers said.

"Thus far, we haven't seen a single incident response where AI played a role," said Joyce. "They haven't really been brought into any kind of practical usage that outweighs what could be done with normal tooling that we've seen."

But she added: "We can be very confident that this is going to be a problem that gets bigger over time."

(Reporting by Zeba Siddiqui in San Francisco; Editing by Alexandra Hudson)
