
AI-powered cyber crime on the rise: Check Point report


Cyber security firm Check Point Software Technologies says cyber criminals are weaponising artificial intelligence.

Check Point Software Technologies has launched its inaugural AI Security Report, examining the rise of AI-powered cyber crime and the defences emerging against it.

According to the cyber security firm, the report offers an in-depth exploration of how cyber criminals are weaponising artificial intelligence (AI), alongside strategic insights for defenders to stay ahead.

In a statement, Check Point said that as AI reshapes industries, it has also erased the line between truth and deception in the digital world. Cyber criminals now wield generative AI and large language models (LLMs) to obliterate trust in digital identity.

“AI-powered impersonation bypasses even the most sophisticated identity verification systems, making anyone a potential victim of deception,” according to the statement.

Lotem Finkelstein, director of Check Point Research, said the swift adoption of AI by cyber criminals is already reshaping the threat landscape.

“While some underground services have become more advanced, all signs point toward an imminent shift – the rise of digital twins. These aren’t just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behaviour. It’s not a distant future – it’s just around the corner.”

Key threat insights from the AI Security Report

According to the cyber security firm, at the heart of these developments is AI’s ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake. The report uncovers four core areas where this erosion of trust is most visible: AI-enhanced impersonation and social engineering, LLM data poisoning and disinformation, AI-created malware and data mining, and weaponisation and hijacking of AI models.

“In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defences. This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly,” added Finkelstein.
