Data attacks set to enter new era under ‘FraudGPT’, warn cybersecurity execs
A new breed of malicious AI models is “heralding an era of AI-enabled data attacks” on businesses, Darktrace VP of strategic cyber AI Nicole Carignan has warned.
Speaking to City A.M., Carignan said data attacks will become “faster and harder to defend against in the next few years” as new AI systems make attacks more sophisticated.
Sold on the dark web, AI systems such as ‘FraudGPT’ and ‘WormGPT’ are designed to help cyber criminals write phishing emails, plan cyber attacks and craft malicious code with ease.
“They enable adversaries who are much less sophisticated to perform much more sophisticated attacks,” explained Carignan, who has 25 years of experience in cybersecurity.
A study by British cybersecurity company Darktrace in April revealed a 135 per cent surge in phishing emails between January and February, coinciding with the widespread adoption of ChatGPT.
Darktrace said professionals across industries worldwide are witnessing an “uncontrollable rise” in fraudulent emails, with nearly a third falling victim to phishing attempts.
At the current rate of growth, global damage from cyberattacks is expected to amount to $10.5 trillion (£8.2 trillion) annually by 2025, according to a McKinsey survey.
Generative AI has helped these scams become indistinguishable from genuine communication, and Carignan said they are now “more sophisticated, at speed and at scale”.
AI regulation must strike a balance
While AI regulations are still being developed, Kunal Anand, chief technology officer at cybersecurity firm Imperva, argued that data regulations “need more teeth”.
Currently, UK companies can be fined up to £17.5m or four per cent of their global annual turnover, whichever is higher, for mishandling data.
However, speaking to City A.M., Anand said “if there are data breaches, companies should be penalised”, arguing for severe fines, especially when crucial organisational and customer data is at stake.
At the moment, he said “it takes a hack to get people to care”.
It comes after a recent major breach, involving a vulnerability in the MOVEit file transfer software, exposed household names such as Boots, British Airways and the BBC to large-scale data theft.
The attack, thought to have been carried out by the Russian ransomware group Clop, was deemed “surprisingly simple” by experts.
AI-driven data attacks pose a significant financial and reputational risk to businesses. The global average cost of data breaches climbed to $4.45m (£3.48m) in 2023, up 15 per cent in three years, according to technology powerhouse IBM.
In the face of the escalating cyber onslaught, businesses must prioritise data security and take proactive measures to address the threats, said Malcolm Ross, deputy CTO of software company Appian.
He told City A.M. that many organisations are “jumping on the AI bandwagon” without fully understanding how it works, putting important data at risk.
Ross explained: “If you don’t want a human to see it, don’t show it to AI either because it will remember it, and you can’t make it forget.
“Organisations need to have a comprehensive understanding of their own data if they want to remain in control of it and avoid breaches.”