In a pair of blog entries published Wednesday, the United Kingdom’s National Cyber Security Centre (NCSC) stated that experts had not yet addressed the possible security issues associated with large language models, or LLMs.
The AI-powered tools are being used as chatbots, which some believe may replace not only internet searches but also customer care and sales calls.
According to the NCSC, this may pose concerns, especially if such models were integrated into other aspects of the organization’s business processes. Academics and researchers have regularly discovered ways to sabotage chatbots by giving them rogue orders or tricking them into ignoring their own built-in safeguards.
For example, if a hacker structures their query correctly, an AI-powered chatbot deployed by a bank could be misled into performing an unauthorized transaction.
“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC said in one of its blog entries, alluding to experimental software releases.
“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”
Authorities worldwide are contending with the rise of LLMs, such as OpenAI’s ChatGPT, which businesses are adopting into a variety of activities, including sales and customer service. The security ramifications of AI are also being discussed, with officials in the United States and Canada reporting that hackers have embraced the technology.
According to a recent Reuters/Ipsos study, many business employees use applications like ChatGPT to assist with simple tasks like composing emails, summarising documents, and conducting preliminary research.
Some 10% of those asked stated their employers specifically prohibited the use of external AI technologies, while a quarter were unsure if their organization allowed the technology to be used.
The race to integrate AI into business practices, according to Oseloka Obiora, chief technology officer at cybersecurity firm RiverSafe, will have “disastrous consequences” if business leaders fail to implement the appropriate checks.
“Instead of jumping into bed with the latest AI trends, senior executives should think again,” he warned. “Assess the benefits and risks as well as implement the necessary cyber protection to ensure the organization is safe from harm.”