sábado, fevereiro 1, 2025
HomeCyber SecurityGoogle says hackers abuse Gemini AI to empower their attacks

Google says hackers abuse Gemini AI to empower their attacks


Google says hackers abuse Gemini AI to empower their attacks

Multiple state-sponsored groups are experimenting with the AI-powered Gemini assistant from Google to increase productivity and to conduct research on potential infrastructure for attacks or for reconnaissance on targets.

Google’s Threat Intelligence Group (GTIG) detected government-linked advanced persistent threat (APT) groups using Gemini primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses.

Threat actors have been trying to leverage AI tools for their attack purposes to various degrees of success as these utilities can at least shorten the preparation period.

Google has identified Gemini activity associated with APT groups from more than 20 countries but the most prominent ones were from Iran and China.

Among the most common cases were assistance with coding tasks for developing tools and scripts, research on publicly disclosed vulnerabilities, checking on technologies (explanations, translation), finding details on target organizations, and searching for methods to evade detection, escalate privileges, or run internal reconnaissance in a compromised network.

APTs using Gemini

Google says APTs from Iran, China, North Korea, and Russia, have all experimented with Gemini, exploring the tool’s potential in helping them discover security gaps, evade detection, and plan their post-compromise activities. These are summarized as follows:

  • Iranian threat actors were the heaviest users of Gemini, leveraging it for a wide range of activities, including reconnaissance on defense organizations and international experts, research into publicly known vulnerabilities, development of phishing campaigns, and content creation for influence operations. They also used Gemini for translation and technical explanations related to cybersecurity and military technologies, including unmanned aerial vehicles (UAVs) and missile defense systems.
  • China-backed threat actors primarily utilized Gemini for reconnaissance on U.S. military and government organizations, vulnerability research, scripting for lateral movement and privilege escalation, and post-compromise activities such as evading detection and maintaining persistence in networks. They also explored ways to access Microsoft Exchange using password hashes and reverse-engineer security tools like Carbon Black EDR.
  • North Korean APTs used Gemini to support multiple phases of the attack lifecycle, including researching free hosting providers, conducting reconnaissance on target organizations, and assisting with malware development and evasion techniques. A significant portion of their activity focused on North Korea’s clandestine IT worker scheme, using Gemini to draft job applications, cover letters, and proposals to secure employment at Western companies under false identities.
  • Russian threat actors had minimal engagement with Gemini, most usage being focused on scripting assistance, translation, and payload crafting. Their activity included rewriting publicly available malware into different programming languages, adding encryption functionality to malicious code, and understanding how specific pieces of public malware function. The limited use may indicate that Russian actors prefer AI models developed within Russia or are avoiding Western AI platforms for operational security reasons.

Google also mentions having observed cases where the threat actors attempted to use public jailbreaks against Gemini or rephrasing their prompts to bypass the platform’s security measures. These attempts were reportedly unsuccessful.

OpenAI, the creator of the popular AI chatbot ChatGPT, made a similar disclosure in October 2024, so Google’s latest report comes as a confirmation of the large-scale misuse of generative AI tools by threat actors of all levels.

While jailbreaks and security bypasses are a concern in mainstream AI products, the AI market is gradually filling with AI models that lack proper the protections to prevent abuse. Unfortunately, some of them with restrictions that are trivial to bypass are also enjoying increased popularity.

Cybersecurity intelligence firm KELA has recently published the details about the lax security measures for DeepSeek R1 and Alibaba’s Qwen 2.5, which are vulnerable to prompt injection attacks that could streamline malicious use.

Unit 42 researchers also demonstrated effective jailbreaking techniques against DeepSeek R1 and V3, showing that the models are easy to abuse for nefarious purposes.

RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -
Google search engine

Most Popular

Recent Comments