A new threat intelligence report released by Google Threat Intelligence Group (GTIG) has raised serious concerns over the accelerating use of generative artificial intelligence in cyberattacks. The report highlights that cybercriminal groups have successfully developed a working zero-day exploit with AI assistance, marking a significant shift in how vulnerabilities are discovered and weaponized.
According to the findings, attackers used artificial intelligence tools to create a Python-based exploit capable of bypassing two-factor authentication in a widely used open-source web administration system. Security researchers note that this represents one of the first documented cases where AI played a direct role in building a functional zero-day exploit.
AI-Assisted Exploit Targets 2FA Logic
GTIG’s Q2 2026 analysis reveals that the exploit was not based on traditional coding flaws such as memory corruption or input validation errors. Instead, it targeted a higher-level semantic vulnerability: a flawed trust assumption in the system’s 2FA enforcement logic.
This type of vulnerability is particularly difficult to detect using conventional security tools like static application security testing or fuzzing frameworks, as it exists in the design logic rather than the code structure.
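To illustrate what a "flawed trust assumption" can look like, the following Python sketch is purely hypothetical, not the actual vulnerability GTIG analyzed. The function names and the `trusted_device` flag are invented for illustration; the point is that the bug lives in the decision logic, which static scanners and fuzzers rarely flag.

```python
# Hypothetical illustration of a semantic 2FA flaw. This is NOT the
# exploited system's code; it only shows the class of bug described.

def login_flawed(session: dict, password_ok: bool, otp_ok: bool) -> bool:
    # Trust assumption: "trusted_device" can only ever be set server-side.
    # If an attacker can influence the session state, the OTP check is
    # silently skipped, even though every individual line is "correct".
    if session.get("trusted_device"):
        return password_ok  # OTP check never runs on this path
    return password_ok and otp_ok


def login_fixed(session: dict, password_ok: bool, otp_ok: bool) -> bool:
    # Enforce the OTP check unconditionally, regardless of session state.
    return password_ok and otp_ok
```

Note that neither version contains a memory-safety or input-validation bug, which is why design-level review, rather than conventional tooling, is needed to catch it.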
Researchers identified multiple indicators suggesting AI involvement in the exploit’s development, including overly structured Python code, instructional-style docstrings, and an incorrectly generated CVSS severity score: all traits commonly associated with large language model outputs.
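A toy heuristic for one of those indicators, instructional-style docstrings, might look like the sketch below. This is not GTIG's detection method; the phrase list and threshold are invented for illustration only.

```python
import re

# Illustrative phrases typical of tutorial-style, LLM-generated docstrings.
# The list and the >= 2 threshold are arbitrary choices for this sketch.
INSTRUCTIONAL_PHRASES = [
    r"\bthis function\b",
    r"\bnote that\b",
    r"\bstep \d",
    r"\bexample usage\b",
]

def looks_instructional(docstring: str) -> bool:
    # Flag a docstring if it matches at least two tutorial-style phrases.
    text = docstring.lower()
    hits = sum(bool(re.search(p, text)) for p in INSTRUCTIONAL_PHRASES)
    return hits >= 2
```

Real attribution work combines many such weak signals; no single heuristic is conclusive on its own.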
The report also highlights increasing adoption of AI tools by advanced persistent threat groups linked to multiple countries. Several state-affiliated and cybercriminal actors have been observed using generative AI to accelerate vulnerability research, malware development, and exploitation planning.
One group, identified as UNC2814, reportedly used “persona-based jailbreaking” techniques to trick AI models into acting as senior security researchers for firmware analysis. Another group, APT45, was found automating large-scale prompt-based analysis of vulnerability databases, enabling rapid proof-of-concept development for potential exploits.
APT27 was also observed leveraging AI tools to build infrastructure management systems designed to obscure attack origins through configurable routing parameters and obfuscation logic.
PROMPTSPY Malware Uses AI During Execution
One of the most alarming discoveries involves an Android malware strain named PROMPTSPY, which integrates directly with a generative AI API during execution. The malware’s “GeminiAutomationAgent” module reportedly converts the victim device’s user interface into structured data and sends it to an AI model, which then returns automated commands such as taps, swipes, and navigation instructions.
This allows the malware to operate semi-autonomously on infected devices, performing actions without user awareness. PROMPTSPY is also capable of harvesting biometric data, applying hidden overlays to prevent removal, and rotating its command-and-control infrastructure dynamically to evade detection.
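The UI-to-model loop described above can be sketched schematically as follows. Every name here (including the stand-in `fake_model` function) is illustrative; no real AI API is contacted, and this is an architectural outline of the reported behavior, not PROMPTSPY's code.

```python
import json

# Schematic of the reported loop: serialize the victim's UI into
# structured data, send it to a model, receive an automated command.

def serialize_ui(screen_elements: list) -> str:
    # Flatten on-screen widgets into the structured text sent to the model.
    return json.dumps(
        [{"id": e["id"], "text": e.get("text", "")} for e in screen_elements]
    )

def fake_model(ui_json: str) -> dict:
    # Local stand-in for the remote model: returns one automated UI action.
    elements = json.loads(ui_json)
    target = elements[0]["id"] if elements else None
    return {"action": "tap", "target": target}

def automation_step(screen_elements: list) -> dict:
    # One iteration of the loop: serialize, query, act.
    return fake_model(serialize_ui(screen_elements))
```

The significance of this design is that the malware's behavior is decided at runtime by a remote model rather than hard-coded logic, which makes its actions harder to predict with static analysis.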
Google has since disabled associated malicious infrastructure and confirmed that no infected applications were found on official app distribution platforms.
The report further details how cybercriminals are using AI not only for attack creation but also for evasion and obfuscation. Certain malware families reportedly include AI-generated decoy logic, redundant code structures, and misleading functions designed to confuse automated detection systems.
In addition, threat actors are increasingly targeting software supply chains. Some campaigns have compromised open-source repositories and CI/CD pipelines to steal authentication tokens, cloud credentials, and API keys. These stolen assets are then used for further infiltration into enterprise systems and cloud environments.
Researchers warn that AI integration within development pipelines, particularly tools that connect multiple large language model providers, can become a high-value target for attackers seeking broad access to organizational infrastructure.
Defensive AI and Pipeline Security Become Critical
In response to the rising threat, Google has expanded its defensive AI initiatives. Tools such as automated vulnerability detection agents and AI-driven patching systems are being deployed to identify and remediate security flaws before they can be exploited.
Additionally, security systems like Google Play Protect are actively blocking known malware variants, including PROMPTSPY-related samples.
Cybersecurity experts emphasize that the integration of AI into both offensive and defensive cyber operations represents a major turning point in digital security. Attackers are now combining automation, generative models, and supply chain exploitation to scale operations beyond traditional limits.
Organizations are being urged to strengthen code review practices, audit CI/CD pipelines, monitor API key usage, and implement strict access controls for AI-related development tools.
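As a minimal sketch of the kind of API-key monitoring being recommended, the snippet below scans text (for example, CI logs or committed config files) for common credential patterns. The pattern set is illustrative and far from exhaustive; production secret scanners use much larger, maintained rule sets.

```python
import re

# Illustrative credential patterns only; real scanners maintain far more.
TOKEN_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_secret": re.compile(r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*\S{16,}"),
}

def scan_for_secrets(text: str) -> list:
    # Return the names of any credential patterns found in the text.
    return [name for name, pat in TOKEN_PATTERNS.items() if pat.search(text)]
```

Running a check like this in a pre-commit hook or CI step is one low-cost way to catch the token and credential leaks the report describes before they reach a repository.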
