- MalTerminal uses GPT-4 to generate ransomware or reverse shell code at runtime
- LLM-enabled malware evades detection by generating its malicious logic only during execution
- Researchers found no evidence of deployment; it was likely a proof-of-concept or red team tool
SentinelOne cybersecurity researchers have revealed a new piece of malware that uses OpenAI's GPT-4 to generate malicious code in real time.
The researchers say MalTerminal represents a significant change in how threat actors create and deploy malicious code, noting, "The incorporation of LLMs into malware marks a qualitative shift in adversary tradecraft."
"With the ability to generate malicious logic and commands at runtime, LLM-enabled malware introduces new challenges for defenders."
A new category of malware
The discovery means the cybersecurity community has a whole new malware category to fight: LLM-enabled malware, or malware that embeds large language models directly into its functionality.
Essentially, MalTerminal is a malware generator. When attackers run it, it asks whether they want to create a ransomware encryptor or a reverse shell. The corresponding prompt is then sent to GPT-4, which responds with Python code tailored to the selected option.
SentinelOne said the code is not present in the malware file before runtime; it is generated dynamically instead. This makes detection by traditional security tools much harder, as there is no static malicious code to scan.
The researchers identified the GPT-4 integration after discovering Python scripts and a Windows executable containing hard-coded API keys and prompt structures.
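Those two artifacts, embedded API keys and prompt text, suggest a simple defensive hunting heuristic. The sketch below is an illustrative assumption, not SentinelOne's actual detection logic: it flags files that contain both an OpenAI-style key pattern and prompt-like instruction strings. The regex and marker strings are hypothetical examples chosen for demonstration.

```python
import re
from pathlib import Path

# Hypothetical indicators, not SentinelOne's rules: an OpenAI-style
# "sk-..." key pattern and strings that look like LLM prompt fragments.
API_KEY_RE = re.compile(rb"sk-[A-Za-z0-9]{20,}")
PROMPT_MARKERS = [b"You are", b"reverse shell", b"Respond only with code"]

def scan_file(path: Path) -> dict:
    """Report which indicator types appear in a file's raw bytes."""
    data = path.read_bytes()
    return {
        "path": str(path),
        "api_key": bool(API_KEY_RE.search(data)),
        "prompt_markers": [m.decode() for m in PROMPT_MARKERS if m in data],
    }

def suspicious(report: dict) -> bool:
    # Both indicator types together are a stronger signal than either alone:
    # many benign tools embed prompts, and keys alone may just be leaked config.
    return report["api_key"] and bool(report["prompt_markers"])
```

In practice, hunting rules like this are usually expressed in YARA rather than ad-hoc scripts, but the idea is the same: static artifacts of the LLM integration remain scannable even when the malicious payload itself is never written to disk.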
Since the API endpoint it used was retired in late 2023, SentinelOne concluded that MalTerminal must predate that cutoff, making it the earliest known example of LLM-enabled malware.
Fortunately, there is no evidence the malware was ever deployed in the wild, so it may simply have been a proof-of-concept or a red team tool. Still, SentinelOne believes MalTerminal is a sign of things to come and has called on the cybersecurity community to prepare accordingly:
"Although the use of LLM-enabled malware is still limited and largely experimental, this early stage of development gives defenders an opportunity to learn from attackers' mistakes and adjust their approaches accordingly," the report adds.
"We expect adversaries to adapt their strategies, and we hope that further research can build on the work we have presented here."
Via The Hacker News
