The First AI/ML Supply Chain Vulnerability Database
Detect, assess, and remediate vulnerabilities in your AI/ML supply chain with detailed descriptions & infographics, automated vulnerability scanners, and OSS maintainer-provided fixes & remediation advice.
Vulnerabilities in MLflow, Kubeflow, Hugging Face Transformers, and more...
AI vulnerabilities that are actually impactful
Assess whether a concern is a legitimate threat to your organization with our detailed report analysis, attack scenarios and attack path diagrams. We curate our vulnerability feed to highlight security issues that have real-world impacts on AI applications and ML systems.
Available publicly on Nov 16, 2023
Remote Code Execution via Controlled File Write
Threat overview
The vulnerability stems from the way MLflow processes model source URLs. Specifically, when a model is created with a source URL pointing to another model that, in turn, points to a malicious server, MLflow fetches and writes files as specified by the attacker. This behavior can be exploited to write arbitrary files on the system, leading to remote code execution. The lack of authentication by default and the ability to control the file path and content through a malicious server are key factors that facilitate this exploit.
Attack Scenario
An attacker sets up a malicious server to serve a JSON response that specifies a file path and content. The attacker then creates two models in MLflow: the first model's source points to the malicious server, and the second model's source points to the first model. When MLflow processes the second model, it fetches the file specification from the malicious server and writes the specified file to the system, allowing the attacker to execute arbitrary code.
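The attacker-controlled server in this scenario can be sketched as a few lines of Python. The JSON shape used here (a `path` field plus a `content` field) is an illustrative assumption, not MLflow's actual wire format:

```python
# Hypothetical sketch of the malicious server from the scenario above.
# The JSON fields ("path", "content") are illustrative assumptions,
# not MLflow's actual response format.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MaliciousHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Instruct the vulnerable client to write an attacker-chosen file.
        payload = json.dumps({
            "path": "../../.bashrc",        # attacker-controlled file path
            "content": "echo compromised",  # attacker-controlled contents
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# Usage sketch (blocks forever, so commented out):
# HTTPServer(("127.0.0.1", 8000), MaliciousHandler).serve_forever()
```

Because the client trusts both the path and the content in the response, a relative path escaping the intended directory is enough to turn a "fetch model files" step into an arbitrary file write.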
Who is affected?
Any user or organization running MLflow versions 2.6.0 to 2.9.1 with default configurations is vulnerable to this attack. The vulnerability specifically affects environments where MLflow is used for tracking experiments, packaging code, and sharing and deploying models without requiring authentication.
Remediation that you can rely on
Remediate any vulnerability you face with maintainer-curated fixes and AI application-specific remediation advice. We work directly with the open source community to identify and remediate OSS vulnerabilities that affect AI applications and ML systems.
1. Update MLflow to version 2.9.2 or later.
2. Ensure that authentication is enabled for MLflow to prevent unauthorized access.
3. Regularly review and monitor model creation requests to detect any suspicious activity.
4. Consider implementing network-level controls to restrict access to the MLflow server from untrusted sources.
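The first step can be automated in a deployment health check. The sketch below flags installs in the affected range (2.6.0 through 2.9.1) using a deliberately minimal version parser that only handles plain `X.Y.Z` strings; a real check should use a full version parser:

```python
# Minimal sketch: flag MLflow installs in the affected range (2.6.0 - 2.9.1).
# Handles plain "X.Y.Z" version strings only; use a full version parser
# (e.g. the packaging library) in real deployments.
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str) -> bool:
    return parse("2.6.0") <= parse(installed) <= parse("2.9.1")

print(is_vulnerable("2.9.1"))  # True: last affected release
print(is_vulnerable("2.9.2"))  # False: patched
```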
Automated Vulnerability Scanners
Detect vulnerable services in your network by leveraging Nuclei templates to quickly evaluate your attack surface.
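As a rough illustration of the approach, a minimal Nuclei template for spotting an unauthenticated MLflow tracking server might look like the following. The endpoint path and matcher strings are assumptions to adapt for your environment, not a vetted template from our feed:

```yaml
# Illustrative sketch only; endpoint and matchers are assumptions.
id: mlflow-unauth-access

info:
  name: MLflow Tracking Server - Unauthenticated Access (illustrative)
  author: example
  severity: medium

http:
  - method: GET
    path:
      - "{{BaseURL}}/api/2.0/mlflow/experiments/search"
    matchers-condition: and
    matchers:
      - type: status
        status:
          - 200
      - type: word
        words:
          - "experiments"
```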
Other Featured Vulnerabilities
We see hundreds of vulnerabilities and highlight the most impactful ones, so that you never miss the next AI zero-day.
View all security advisories
Apr 16, 2024 at 12:00 AM UTC
Arbitrary Local File Read in Gradio via Component Method Invocation
A vulnerability in Gradio version 4.12.0 allows attackers to read arbitrary files on the server by exploiting the `/component_server` endpoint to call any method on a `Component` class, specifically `move_resource_to_block_cache()`. This issue was patched in version 4.13.0.
Apr 16, 2024 at 12:00 AM UTC
Remote Code Execution via Insecure Deserialization in BentoML
BentoML version 1.2.2 is vulnerable to remote code execution (RCE) through insecure deserialization, allowing attackers to execute arbitrary code by sending a malicious POST request. This vulnerability was patched in version 1.2.5.
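Insecure deserialization of this kind is not specific to BentoML. Python's `pickle` module illustrates the underlying class of bug: a crafted payload can invoke an arbitrary callable on load. The sketch below is a generic demonstration using a harmless callable, not the actual BentoML exploit:

```python
# Generic demonstration of why deserializing untrusted bytes is dangerous.
# This is NOT the BentoML payload; it shows the underlying pickle behavior.
import pickle

class Malicious:
    def __reduce__(self):
        # On unpickling, pickle calls str.upper("pwned"). In a real attack
        # the callable would be something like os.system with a shell command.
        return (str.upper, ("pwned",))

payload = pickle.dumps(Malicious())  # bytes an attacker might POST
result = pickle.loads(payload)       # the attacker-chosen call runs here
print(result)  # PWNED
```

This is why deserializing untrusted input with pickle-style formats is unsafe by design, regardless of which framework wraps the call.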
Sightline Premium
Early Access
Get early access to fixed vulnerabilities before they're publicly disclosed. On average, customers get a 31-day early warning before a vulnerability is set to become public.
Protect AI Platform Integration
Immediately see which vulnerabilities matter most by combining the context of your AI application's MLBOM (provided by Radar) with Sightline.
Unlimited Access to Scanners
Leverage our Scanners as soon as they're released on Sightline Premium, instead of waiting for them to eventually get into the public Vulnerability Feed.
API Access
Integrate our Vulnerability Feed into your existing workflows and tools by leveraging our API, built upon the OpenSSF's Open Source Vulnerability (OSV) standard.
Many eyes make all bugs shallow
Sightline is powered by our Huntr community - the world's first bug bounty platform for AI/ML.
Together, we have helped protect AI by working with over 15k security researchers and maintainers who have earned over $500k finding & fixing vulnerabilities.
171
Vulnerabilities identified in the last 90 days
56
Vulnerabilities that are not yet public
35
Avg. days customers have had early access