PromptFlux (often stylized as PROMPTFLUX) is an experimental AI-generated malware family first identified by Google Threat Intelligence Group (GTIG) in early June 2025. It represents one of the earliest documented examples of malware that actively uses a large language model (LLM) during runtime to dynamically modify its own code — a technique Google describes as “just-in-time” AI or self-modification for evasion.
As of March 2026, PromptFlux remains a landmark proof-of-concept rather than a widespread active threat, but it foreshadows more mature agentic and self-evolving malware in the future. Organizations should treat it as an early warning of where AI-cyber threats are heading.
This marks a pivotal shift in cyber threats: rather than relying solely on pre-written, static (or even traditionally polymorphic) code, the malware outsources parts of its evolution to a generative AI model, making each instance potentially unique and harder to detect with signature-based tools.
Core Characteristics and Functionality
- Programming Language — Written primarily in Visual Basic Script (VBScript), a legacy scripting language that Windows still executes natively via the Windows Script Host, which keeps it common in enterprise environments.
- Type — Classified as a dropper malware. It primarily serves to deliver or regenerate payloads while masking its activities (e.g., by decoding and executing an embedded decoy installer).
- Primary Goal — Evasion of static signature-based antivirus and endpoint detection tools through continuous code regeneration and obfuscation.
- No Full Compromise Observed — As of the initial discovery and analyses through late 2025, samples appeared to be in development or testing phases. They lacked complete mechanisms to fully compromise victim networks or devices, and no widespread successful infections were publicly reported. Google took action to disable associated API assets.
How PromptFlux Works: The “Thinking Robot” Module
The most innovative (and concerning) component is a module referred to as the “Thinking Robot”. This is an embedded routine that periodically queries an external LLM to rewrite parts of the malware’s source code, or even the entire script.
- API Integration — It connects to Google’s Gemini API, requesting a model by a “-latest” alias (reporting cites Gemini 1.5 Flash) so that each call reaches the most current revision of that model.
- Prompt Engineering — The malware sends carefully crafted, machine-parsable prompts instructing the LLM to act as an “expert VBScript obfuscator.” Examples include requests for:
- Specific obfuscation and evasion techniques.
- Small, self-contained VBScript functions or code blocks.
- Full rewrites of the malware’s source code.
- Regeneration Cycle — In more advanced variants, the malware regenerates its entire source code on a timer (observed hourly). The new, obfuscated version is written to disk, for example to the Windows Startup folder, so the mutated copy rather than the original runs after the next reboot.
- Persistence & Propagation — It establishes persistence via the Startup folder and attempts lateral movement by copying itself to removable drives (e.g., USB) and mapped network shares.
- Evasion Focus — By constantly altering structure, variable names, logic flow, and embedding new evasion tricks suggested by the LLM, it aims to defeat hash-based or pattern-matching detection. Each iteration becomes a “new” sample from a static analysis perspective.
This “just-in-time” approach treats the LLM as a live, updating obfuscation-as-a-service engine, reducing the need for attackers to manually update payloads.
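From a defender’s perspective, this loop has a weakness: the hard-coded elements it depends on (an API endpoint, a model name, obfuscation-oriented prompt text) are themselves static artifacts that survive every rewrite. The following is a minimal detection sketch in Python; the indicator strings are illustrative examples based on the behavior described above, not a vetted IOC list.

```python
# Minimal sketch: flag VBScript files that embed LLM-API artifacts.
# The INDICATORS list is illustrative, not an authoritative IOC feed.
from pathlib import Path

INDICATORS = [
    "generativelanguage.googleapis.com",  # Gemini API host
    "gemini-1.5-flash",                   # model family cited in reporting
    "vbscript obfuscator",                # prompt-like phrasing
]

def scan_script(path: Path) -> list[str]:
    """Return the indicator strings found in one script file."""
    try:
        text = path.read_text(errors="ignore").lower()
    except OSError:
        return []
    return [ioc for ioc in INDICATORS if ioc in text]

def scan_directory(root: Path) -> dict[str, list[str]]:
    """Map each .vbs file under root to the indicators it contains."""
    hits: dict[str, list[str]] = {}
    for p in root.rglob("*.vbs"):
        found = scan_script(p)
        if found:
            hits[str(p)] = found
    return hits
```

A real deployment would express the same logic as YARA rules or an EDR content search rather than a standalone script, but the principle is identical: hunt for the invariant strings the mutation engine cannot rewrite away.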
Attribution and Context
- Threat Actor — Not definitively attributed to any specific group (e.g., no clear links to state-sponsored APTs like those seen with PromptSteal/APT28). File naming and behaviors align more with financially motivated actors (cybercriminals seeking profit via commodity malware or initial access sales).
- Development Stage — Experimental/proof-of-concept. Samples included inactive components, rate-limiting on API calls, and incomplete features — suggesting testing rather than mature operational use.
- Broader Family — Part of a small cluster of early AI-integrated malware families identified by GTIG in 2025 (e.g., PROMPTSTEAL for data exfiltration via LLM-generated commands, FRUITSHELL, QUIETVAULT, PROMPTLOCK).
Implications for Cybersecurity
PromptFlux demonstrates how accessible LLMs lower barriers for dynamic evasion techniques. Traditional defenses (signature-based AV, static analysis) become less effective against hourly-mutating code. Defenders must shift toward:
- Behavioral/anomaly detection.
- Network monitoring for unusual API calls (especially to Gemini or similar services).
- Endpoint controls that restrict script execution and outbound connections.
- AI-specific mitigations, like monitoring for prompt-like strings or LLM traffic patterns.
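One concrete behavioral signal follows directly from the regeneration cycle described earlier: a script in a persistence location whose content hash keeps changing on a timed interval. The sketch below, a simplified illustration rather than a production monitor, tracks hash churn across polls; the watched folder, file pattern, and alert threshold are all assumptions a defender would tune.

```python
# Minimal sketch: alert on scripts in a persistence folder whose
# content hash repeatedly changes between polls -- the churn pattern
# an hourly LLM-driven rewrite would produce.
# Folder, "*.vbs" pattern, and threshold are illustrative assumptions.
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

class ChurnMonitor:
    """Track per-file hash changes across successive polls."""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold          # rewrites required before alerting
        self.seen: dict[str, str] = {}      # path -> last observed hash
        self.changes: dict[str, int] = {}   # path -> rewrite count

    def poll(self, folder: Path) -> list[str]:
        """Scan folder once; return paths that exceeded the churn threshold."""
        alerts = []
        for p in folder.glob("*.vbs"):
            h = hash_file(p)
            key = str(p)
            if key in self.seen and self.seen[key] != h:
                self.changes[key] = self.changes.get(key, 0) + 1
                if self.changes[key] >= self.threshold:
                    alerts.append(key)
            self.seen[key] = h
        return alerts
```

In practice this logic would live in an EDR agent or a scheduled integrity-monitoring job polling the Startup folder; the point is that self-rewriting code defeats hash blocklists but creates exactly the kind of anomaly that file-integrity and behavioral telemetry are built to catch.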
