It’s Getting Darker: Nation-State APTs Employ Dark AI, Says Kaspersky Expert

Organisations in Asia Pacific (APAC) should brace for more sophisticated and stealthy attacks driven by the rise of Dark artificial intelligence (AI). This is among the key findings shared by global cybersecurity and digital privacy company Kaspersky.

[Photo: Sergey Lozhkin, Head of Global Research & Analysis Team (GReAT) for META and APAC at Kaspersky]

“Since ChatGPT gained global popularity in 2023, we have observed many useful applications of AI, from mundane tasks like video creation to technical threat detection and analysis. In the same breath, bad actors are using it to enhance their attacking capabilities. We are entering an era in cybersecurity and in our society where AI is the shield and Dark AI is the sword,” says Sergey Lozhkin, Head of Global Research & Analysis Team (GReAT) for META and APAC at Kaspersky.

Dark AI refers to the local or remote deployment of non-restricted large language models (LLMs) within a full framework or chatbot system that is used for malicious, unethical, or unauthorised purposes. These systems operate outside standard safety, compliance, or governance controls, often enabling capabilities such as deception, manipulation, cyberattacks, or data abuse without oversight.

Dark AI in action

Lozhkin shared that the most common and best-known malicious use of AI today comes in the form of Black Hat GPTs, which emerged as early as mid-2023. These are AI models intentionally built, modified, or used to perform unethical, illegal, or malicious activities, such as generating malicious code, crafting fluent and persuasive phishing emails for both mass and targeted attacks, creating voice and video deepfakes, and even supporting Red Team operations.

Black Hat GPTs can be private or semi-private AI models. Known examples include WormGPT, DarkBard, FraudGPT, and Xanthorox, all designed or adapted to support cybercrime, fraud, and malicious automation.

Aside from these typical dark uses of AI, Lozhkin revealed that Kaspersky experts are now observing a darker trend: nation-state actors leveraging LLMs in their campaigns.

“OpenAI recently revealed it has disrupted over 20 covert influence and cyber operations attempting to misuse its AI tools. We can expect threat actors to devise ever more clever ways of weaponising generative AI, operating in both public and private threat ecosystems. We should brace for it,” he explains.

OpenAI’s report revealed that the malicious actors used LLMs to craft convincing fake personas, respond to targets in real time, and produce multilingual content designed to deceive victims and bypass traditional security filters.

“AI doesn’t inherently know right from wrong; it’s a tool that follows prompts. Even when safeguards are in place, we know APTs are persistent attackers. As Dark AI tools become more accessible and capable, it’s crucial for organisations and individuals in Asia Pacific to strengthen cybersecurity hygiene, invest in threat detection powered by AI itself, and stay educated on how these technologies can be exploited,” Lozhkin adds.
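To make that last point concrete, the sketch below shows one minimal, hypothetical form that “threat detection powered by AI itself” can take: a toy Python text classifier (built here with scikit-learn, an assumption of this example) that scores incoming email text for phishing-style language. The training phrases, labels, and scoring are invented for illustration and do not represent Kaspersky tooling.

```python
# Minimal sketch of AI-assisted phishing detection. Illustrative only:
# the tiny training set below is invented, not real threat data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled messages; a real deployment would train on a
# large, curated corpus of phishing and benign email.
emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your banking details to avoid account closure",
    "Agenda attached for Thursday's project sync",
    "Quarterly report draft attached, comments welcome by Friday",
]
labels = [1, 1, 0, 0]  # 1 = phishing-like, 0 = benign

# TF-IDF features over word unigrams/bigrams feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Please verify your password now or your account will be suspended"
score = model.predict_proba([incoming])[0][1]  # probability of the phishing class
print(f"phishing score: {score:.2f}")  # flag for human review above a chosen threshold
```

In practice, a classifier like this would be one layer among many, combined with sender reputation, URL analysis, and human review, rather than a standalone defence.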

To help organisations defend themselves against Dark AI and AI-enabled cyber threats, Kaspersky experts suggest:

- Strengthening basic cybersecurity hygiene across the organisation;
- Investing in threat detection that is itself powered by AI;
- Keeping staff educated on how generative AI can be exploited by attackers.

To be updated on the latest threats using Dark AI, visit https://www.kaspersky.com/.
