What Is GrafanaGhost?
Researchers at AI security firm Noma Security have disclosed a vulnerability they call GrafanaGhost — an indirect prompt injection attack targeting the AI capabilities built into Grafana, a widely used observability platform. The findings were published on April 7, 2026. The core technical flaw has since been patched by Grafana Labs.
Grafana is deployed across enterprises to aggregate and monitor data tied to finances, telemetry, operations, infrastructure, and customer activity. Because Grafana sits at the center of an organization's most valuable information flows, a successful attack against a Grafana instance carries serious consequences.
How the Attack Works
The GrafanaGhost attack exploits how Grafana's AI components ingest and process external content. At a high level, an attacker places hidden malicious instructions on a web page they control. When Grafana's AI assistant encounters that content — for example, while a user browses log entries — it treats the embedded commands as legitimate context and acts on them without alerting the user.
Noma's researchers began investigating by mapping out every location where users could interact with Grafana's AI components, reasoning that any user-facing surface is a potential vector for prompt injection. They identified image tags as a viable channel for delivering malicious instructions.
Although Grafana applied protections to external images to guard against this type of attack, researchers found two techniques to bypass those safeguards:
- Protocol-relative URLs — used to circumvent domain validation checks.
- The "INTENT" keyword — used to disable AI model guardrails, causing Grafana to interpret an external prompt as benign.
With those bypasses in place, data exfiltration began automatically as soon as the malicious image started rendering, with the victim entirely unaware of what was happening in the background.
No Phishing Click Required
A notable characteristic of GrafanaGhost is that it does not rely on tricking a victim into clicking a suspicious link. Sasi Levi, security research lead at Noma Security, explained the mechanics to Dark Reading:
"[The attacker needs] to get their indirect prompt stored in a location that Grafana's AI components will later retrieve and process. Once that payload is sitting in the data store, it waits and fires automatically when any user performs a normal interaction with their Grafana instance (like browsing entry logs). The user is the unwitting trigger, not the target of a phishing attempt. That's what makes it so stealthy."
In other words, the attacker's only requirement is to get their malicious payload into a data source that Grafana's AI will eventually query — after which the attack executes autonomously during routine platform use.
Grafana's Response and a Disputed Characterization
Noma followed responsible disclosure protocols, and Grafana Labs responded swiftly. According to Noma, Grafana "jumped on the issue immediately, worked closely with us to validate the findings, and rolled out a fix as fast as possible to secure their users."
Grafana Labs CISO Joe McManus confirmed to Dark Reading that the vulnerability affected the company's image renderer within its Markdown component, and that it was "quickly patched." He also stated: "We emphasize that there is no evidence of this bug having been exploited in the wild, and no data was leaked from Grafana Cloud."
However, Grafana disputed Noma's characterization of the exploit as "zero-click." McManus argued that successful exploitation would have required significant user interaction:
"Any successful execution of this exploit would have required significant user interaction — specifically, the end user would have to repeatedly instruct our AI assistant to follow malicious instructions contained in logs, even after the AI assistant made the user aware of the malicious instructions."
Noma Pushes Back
Noma's Levi challenged Grafana's account of the exploit mechanics, telling Dark Reading that the attack requires "fewer than two steps" and that the AI assistant never surfaced any warning to the user about malicious instructions present in the log entries.
"There was no alert, no flag, no prompt asking the user to confirm. The model processed the indirect prompt injection autonomously, interpreting the log content as legitimate context and acting on it silently, without restriction, and without notifying the user that anything unusual was occurring. The user had no visibility into what was happening in the background and no opportunity to intervene."
Levi added: "We respect Grafana's quick response to the patch and their commitment to user security. But we can't let an inaccurate characterization of the exploit mechanics stand unchallenged. The findings are documented, and we're confident in what the research shows."
Why This Matters for AI-Integrated Platforms
GrafanaGhost highlights a growing class of security risk that emerges when AI assistants are integrated into platforms that routinely process external, attacker-influenced data. Indirect prompt injection — where malicious instructions are embedded in content the AI retrieves, rather than typed directly by a user — is difficult to defend against because the attack surface is determined by what the AI reads, not what the user explicitly inputs.
As observability tools like Grafana incorporate more AI-driven features, the risk that AI components will be weaponized to access or exfiltrate the sensitive data those tools are designed to consolidate becomes an increasingly pressing concern for security teams. Organizations running Grafana deployments that include AI assistant functionality should verify they are running a patched version following Grafana Labs' remediation of the image renderer issue.
Source: Dark Reading