Safety company Noma recently released a research report, revealing a security vulnerability named "GrafanaGhost" in the AI assistant feature of the open-source monitoring and data visualization platform Grafana. This vulnerability allows hackers to use "indirect prompt injection" to trick the AI assistant into leaking sensitive corporate data to an external server.

image.png

"Indirect Prompt Injection": Covert Data Theft

According to researchers, Grafana's built-in AI assistant allows users to query and analyze monitoring data through natural language. However, hackers can embed malicious instructions in external web pages that Grafana can access.

When the AI assistant parses this contaminated content, it may be misled into bypassing existing security mechanisms and triggering external requests. Sensitive information is sent to a hacker-controlled server in the form of URL parameters. Since the entire process does not generate obvious error messages, ordinary users often fail to detect the anomaly.

Official Response: Non-zero-click Vulnerability, Now Fixed

Regarding this vulnerability, Joe McManus, Chief Security Officer at Grafana Labs, stated that the company quickly fixed the issue after being notified. He also emphasized the limitations of the vulnerability:

  • Non-automated Attack: This vulnerability is not classified as a "zero-click" or "self-propagating" attack.

  • Access Requirement: Hackers need to first gain access to the user's device before they can interact with the AI assistant.

  • Multiple Triggers: Achieving malicious operations typically requires multiple interactions, not a single action.

Grafana Labs further stated that there is currently no evidence that the vulnerability has been exploited, and no data breaches have been found in its cloud service (Grafana Cloud). The official urged users not to be overly concerned and recommended keeping an eye on and updating to the fixed secure version to ensure the security of the monitoring environment.