No Comments

Flaw Exposes Docker Desktop Users to Code Execution and Data Theft

 

Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon, an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI). Notably, attackers could exploit the issue to execute arbitrary code and exfiltrate sensitive data.

Security firm Noma Labs has codenamed the critical vulnerability DockerDash. In response, Docker addressed the issue with the release of version 4.50.0 in November 2025.

“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools,” Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News.

“Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.”

As a result, successful exploitation can trigger critical-impact remote code execution on cloud and CLI systems or enable high-impact data exfiltration on desktop applications.

According to Noma Security, the issue originates from Ask Gordon’s treatment of unverified metadata as executable commands. Consequently, malicious instructions can propagate across multiple layers without validation, enabling attackers to bypass security boundaries. In effect, a single AI query can open the door to unauthorized tool execution.

Root Cause: Meta-Context Injection

With MCP acting as the connective tissue between a large language model (LLM) and the local environment, the flaw represents a failure of contextual trust. Researchers have characterized the issue as a case of Meta-Context Injection.

“MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction,” Levi said. “By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.”

In a hypothetical attack scenario, a threat actor exploits a critical trust boundary violation in how Ask Gordon parses container metadata. To do so, the attacker crafts a malicious Docker image that embeds instructions within Dockerfile LABEL fields.

Although these metadata fields appear harmless, they become injection vectors once Ask Gordon AI processes them. The code execution attack chain unfolds as follows:

  • The attacker publishes a Docker image containing weaponized LABEL instructions in the Dockerfile
  • When a victim queries Ask Gordon AI about the image, Gordon reads the image metadata, including all LABEL fields, exploiting its inability to distinguish between legitimate descriptions and embedded malicious instructions
  • Ask Gordon forwards the parsed instructions to the MCP Gateway, a Middleware layer that sits between AI agents and MCP servers
  • MCP Gateway interprets the request as Originating from a trusted source and invokes the Specified MCP tools without additional Validation
  • The MCP tool executes the command with the Victim’s Docker privileges, thereby Achieving code execution

Data Exfiltration via Docker Desktop

Meanwhile, the data Exfiltration variant Weaponizes the same prompt Injection flaw but targets Ask Gordon’s Docker Desktop implementation. In this case, Attackers use MCP tools to harvest sensitive internal data from the Victim’s environment by abusing the Assistant’s Read-only permissions.

As a result, Attackers can collect information such as Installed tools, container details, Docker Configuration data, mounted Directories, and network Topology.

Importantly, Ask Gordon version 4.50.0 also resolves a separate prompt Injection Vulnerability discovered by Pillar Security. That flaw could have allowed Attackers to hijack the assistant and Exfiltrate sensitive data by Manipulating Docker Hub Repository Metadata with Malicious instructions.

“The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat,” Levi said. “It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model.”

 


Source: TheHackerNews

Read more at Impreza News

You might also like

More Similar Posts

Leave a Reply

Your email address will not be published. Required fields are marked *

Fill out this field
Fill out this field
Please enter a valid email address.