- Google publishes a new report on how criminals abuse Gemini
- Attackers from Iran, North Korea, Russia and other places were mentioned
- Hackers are experimenting but have not yet found “new capabilities”
Dozens of cybercriminal organizations from around the world are abusing Google’s artificial intelligence (AI) solution Gemini in their attacks, the company has admitted.
In an in-depth analysis that discusses who the threat actors are and what they use the tools for, Google’s Threat Intelligence Group highlighted how the platform has not yet been used to develop new attack methods, but rather to fine-tune existing ones.
“Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities,” the team said in its analysis. “At present, they primarily use AI for research, troubleshooting code, and creating and localizing content.”
APT42 and many other threat actors
The biggest Gemini users among cybercriminals are Iranian, Russian, Chinese, and North Korean groups, who use the platform for reconnaissance, vulnerability research, scripting and development, translation and explanation, and deeper system access and post-compromise activity.
In total, Google observed 57 groups, more than 20 of which were from China, and among the 10+ Iranian threat actors who used Gemini, one group stands out – APT42.
Over 30% of the country’s threat actor Gemini use was linked to APT42, Google said. “APT42’s Gemini activity reflected the group’s focus on crafting successful phishing campaigns. We observed the group using Gemini to conduct reconnaissance into individual policy and defense experts, as well as organizations of interest to the group.”
APT42 also used text generation and editing capabilities to craft phishing messages, especially those targeting US defense organizations. “APT42 also utilized Gemini for translation including localization, or tailoring content for a local audience. This includes content tailored to local culture and local language, such as asking for translations to be in fluent English.”
Ever since ChatGPT was first released, security researchers have warned about its abuse by cybercriminals. Before GenAI, the best way to spot a phishing attack was to look for spelling and grammar errors and inconsistent wording. Now, with AI doing the writing and editing, that method practically no longer works, and security professionals are turning to new approaches.