UK cybersecurity centre issues warning over use of AI chatbots
British national security officials have warned organisations to take care when integrating artificial intelligence (AI) chatbots into their systems due to their susceptibility to manipulation and cyber risks.
In a double blog post on Wednesday, the National Cyber Security Centre (NCSC) said all sectors should “exercise caution” when using large language models (LLMs), algorithms which use generative AI to produce human-like text responses such as OpenAI’s ChatGPT.
One post said there is “understandable” hype around AI, however “the global tech community still doesn‘t yet fully understand LLMs’ capabilities, weaknesses, and, crucially, vulnerabilities”.
AI-powered chatbots are often used in internet searches, customer service and sales, but they are not immune to misuse.
For example, NCSC’s research shows hackers can trick the AI models into performing unauthorised actions, such as making fraudulent payments, generating offensive content and revealing or corrupting confidential data.
“The warning from the NCSC that hackers can manipulate chatbots needs to be taken seriously by everyone,” said Dan Schiappa, chief product officer at cybersecurity firm Arctic Wolf.
Although LLMs have helped small and medium-sized businesses take some of the workload off their shoulders, they often lack the digital skills to ensure the technology is safe and secure.
Research from software company Xero found that almost half of UK small businesses trust AI with identifiable customer information, while 40 per cent would share sensitive commercial information.
“If hackers use this new method to expose data it could have serious consequences for not only the business, but also its customers,” Schiappa explained.
Cyber expert Oseloka Obiora, chief technology officer at Riversafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks.
“Instead of jumping into bed with the latest AI trends, senior executives should think again, assess the benefits and risks as well as implementing the necessary cyber protection to ensure the organisation is safe from harm,” he added.
Recent research from tech consultancy Slalom shows that, although 84 per cent of businesses trust AI tools, 45 per cent view data privacy risks as the biggest negative impact of AI.
“To overcome this, we need a common global strategy with a consistent approach to regulation,” said Dave Williams, president of Slalom UK & Ireland.
“The public and businesses need assurance methods so that the content used to build the AI models can be considered trustworthy and secure.”
He suggested this could be achieved at the UK’s AI Safety Summit, set to be held this November, which aims to discuss how international coordination can mitigate the risks of AI.
“The Summit should be used as a catalyst for businesses to acknowledge the importance of utilising AI across the whole of their organisation, not just in tech silos.
“This approach to AI is what will set UK-based businesses apart from competitors across the globe and help realise the UK’s vision of being home to the transformative technology of the future.”