How Vietnamese Hackers Trick Users with Fake AI Sites

A Vietnamese hacking group is weaponizing the growing buzz around AI tools to spread dangerous malware. According to cybersecurity firm Mandiant, a threat actor tracked as UNC6032 has been quietly running a massive campaign using fake AI video generator websites to trick users into downloading malware.

Over the past year, this group has created dozens of counterfeit sites mimicking real AI tools like Luma AI, Canva Dream Lab, and Kling AI. These lookalike platforms claim to offer AI-powered text-to-video or image-to-video features. But once visitors try to use the tools, they’re presented with a download prompt that delivers a booby-trapped ZIP archive.

What users don’t realize is that this download is far from harmless.

Fake AI Sites Reach Millions Through Facebook and LinkedIn

UNC6032’s strategy hinges on social engineering at scale. Mandiant discovered more than 30 fake websites promoted through over 120 deceptive ads. Most of these ran on Facebook, using a mix of attacker-controlled pages and hijacked user accounts. LinkedIn and possibly other platforms also hosted similar campaigns.

In total, the campaign has reportedly reached millions, with more than 2.3 million users in the EU alone exposed to the malicious ads.

Meta began removing some of the malicious content in 2024, but not before many users had already been tricked into visiting the fake sites and downloading malware.

How the Malware Works

The infection chain observed by Mandiant is technically advanced. Once the victim extracts the downloaded archive and runs the seemingly innocent double-extension executable inside, a chain reaction begins. First, the executable drops a Rust-based malware loader known as Starkveil. That’s followed by the Coilhatch launcher, which activates two .NET-based backdoors: XWorm and Frostrift.
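To make the double-extension trick concrete, here is a minimal Python sketch that scans a downloaded ZIP for archive members that look like media files but actually end in an executable extension. The file name and the extension list are illustrative assumptions for this example, not details taken from the campaign itself.

```python
import zipfile
from pathlib import Path

# Executable types that should never hide behind a media-looking name.
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".cmd", ".msi"}

def flag_double_extensions(zip_path: str) -> list[str]:
    """Return archive members that look like media files but end in an
    executable extension, e.g. 'generated_video.mp4.exe' (an illustrative
    pattern, not one confirmed from this campaign)."""
    suspicious = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            suffixes = Path(name).suffixes  # e.g. ['.mp4', '.exe']
            if len(suffixes) >= 2 and suffixes[-1].lower() in EXECUTABLE_EXTS:
                suspicious.append(name)
    return suspicious

if __name__ == "__main__":
    # "downloaded_ai_video.zip" is a hypothetical file name used only for this demo.
    for member in flag_double_extensions("downloaded_ai_video.zip"):
        print(f"Suspicious double-extension member: {member}")
```

By default Windows hides known file extensions, which is exactly why a name like "video.mp4.exe" can pass as a harmless media file at a glance.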

These backdoors are designed to steal system data — including usernames, OS versions, hardware IDs, and installed antivirus tools. XWorm can also log keystrokes, while Frostrift searches for installed messaging apps and browser extensions.
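To illustrate what "system data" means in practice, the benign sketch below gathers the same categories of host details using ordinary Python standard-library calls. The "hardware_id" value is a rough stand-in, not how the actual backdoors compute their identifiers, and enumerating installed antivirus products (which the report also mentions) would require Windows-specific queries omitted here.

```python
import getpass
import platform
import uuid

# The categories of host data the report describes, collected here with
# ordinary standard-library calls purely for illustration.
fingerprint = {
    "username": getpass.getuser(),
    "os": f"{platform.system()} {platform.release()} ({platform.version()})",
    "architecture": platform.machine(),
    "hostname": platform.node(),
    # Hex of the MAC-derived node ID; a rough stand-in for a hardware identifier.
    "hardware_id": hex(uuid.getnode()),
}

for key, value in fingerprint.items():
    print(f"{key}: {value}")
```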

In some cases, the attackers also deployed an infostealer known as Noodlophile Stealer, sometimes bundled with XWorm, as noted in a related report by Morphisec.

The campaign makes heavy use of techniques like DLL side-loading, process injection, in-memory execution, and AutoRun registry key abuse to evade detection and remain persistent on infected systems.
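Of those techniques, AutoRun registry abuse is the easiest for an ordinary user to audit. The defensive sketch below assumes a Windows machine and uses Python's standard winreg module to list every program registered to start at logon so unfamiliar entries can be reviewed; the keys shown are the commonly abused Run keys, not locations confirmed for this specific campaign.

```python
import winreg  # Windows-only standard-library module

# Registry locations commonly abused for AutoRun persistence.
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_autorun_entries():
    """Print every name/command pair registered to run at logon for manual review."""
    for hive, path in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, path) as key:
                index = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, index)
                        print(f"{path}\\{name} -> {value}")
                        index += 1
                    except OSError:  # no more values under this key
                        break
        except OSError:
            continue  # key missing or not readable

if __name__ == "__main__":
    list_autorun_entries()
```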

AI Hype Becomes a Weapon

The rapid rise of AI tools has created new attack surfaces for cybercriminals. Mandiant emphasized that these threats aren’t just aimed at designers or developers anymore. Thanks to social media ads, almost anyone could become a victim.

The firm warned users to be especially cautious when engaging with AI-related tools online. A convincing ad or realistic-looking site doesn’t guarantee safety. Before interacting with these tools or downloading anything, it’s crucial to double-check the website’s domain and legitimacy.
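One simple habit is to compare the host in a promoted link against the tool's official domain before clicking through. The sketch below illustrates the idea; the allowlist entries are assumptions for demonstration and should be verified against each vendor's real domains.

```python
from urllib.parse import urlparse

# Illustrative allowlist; verify the real official domains before relying on it.
OFFICIAL_DOMAINS = {
    "Luma AI": {"lumalabs.ai"},
    "Canva Dream Lab": {"canva.com"},
    "Kling AI": {"klingai.com"},
}

def looks_official(url: str, tool: str) -> bool:
    """Return True only if the URL's host is the claimed tool's official domain
    or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d)
               for d in OFFICIAL_DOMAINS.get(tool, set()))

# Example: a lookalike host fails, the genuine domain passes.
print(looks_official("https://luma-dreammachine-ai.example.com/get", "Luma AI"))  # False
print(looks_official("https://lumalabs.ai/dream-machine", "Luma AI"))             # True
```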

As interest in AI continues to explode, attackers are sure to keep exploiting the trend for malicious gain. Being skeptical — especially when something looks too good or too slick to be true — is now more important than ever.
