N° TG-2026-0004 · FULL STACK AUDIT · APR 01 2026
Swarms AI
CAUTION


Swarms is a real, active AI infrastructure project with a doxxed founder and a clean token. The framework, however, carries a high-severity bug: enabling its optional telemetry feature can send your AI provider API key to Swarms servers. The official docs also contain a dangerous example that posts a Solana private key to a third-party API.

Starting Score: 100

Deductions

-15: LLM provider API key transmitted to Swarms servers when telemetry is enabled; user credentials are sent to a third party.

-6: Official example code sends a Solana private key to the swarms.world API; anyone copying the example leaks their key.

-6: Documentation contradicts code: telemetry collects MAC address, hostname, and agent state despite claims otherwise.

-2: Major dependency seven minor versions behind; runtime deps unpinned, which could introduce breaking changes.

-2: API key committed in an example file; should use environment variables.

-2: Installs packages at runtime without version pins or hash verification; supply-chain risk.

Bonuses

+1: Open source. Full source code publicly available on GitHub.

Final Score: 68

100 - 15 (1 high) - 6 - 6 (2 medium) - 2 - 2 - 2 (3 low) + 1 (open source) = 68

Summary

Findings at a glance

1 high-severity issue found: your AI API key can be sent to Swarms servers if you opt into telemetry. 0 drainers, no malicious code, 0 on-chain exploits.

0 Critical · 1 High · 2 Medium · 3 Low · 3 Info
GitHub Stars: 6,170
Audit Version: v1.0
Monitoring: Inactive
Code Reviewed: 124,892 LOC
Full Technical Report

Swarms is a legitimate, actively developed enterprise-grade Python framework for building and orchestrating multi-agent AI systems. The project is led by Kye Gomez (publicly identified, Palo Alto, CA), has 6,170 GitHub stars with daily commits, and runs professional open-source infrastructure including CodeQL, Dependabot, and Pysa static analysis. The @swarms_corp Twitter account is verified, with 47.7K organic followers. This is not a scam; it is a real product with real users.

The Swarms token (74SBV4zDXxTRgv1pEMoECskKBkZHc2yGPnc7GYVepump) was launched on pump.fun in December 2024 as an SPL token. Both mint and freeze authorities are permanently revoked, meaning the team cannot create new tokens or freeze holder wallets. Current market data shows ~$10.3M market cap with $1.3M in active Raydium liquidity and healthy two-sided trading (1,819 buys vs 1,204 sells in 24h). The token infrastructure is clean.

The security concerns are in the framework code itself, not the token. The most significant finding (TG-001) is that the opt-in telemetry system, when enabled by setting SWARMS_TELEMETRY_ON=True, serializes and transmits the agent's full configuration state — including the user's LLM provider API key — to swarms.world/api/get-agents/log-agents. This is not the default behavior, but it is an undisclosed data scope compounded by the fact that SECURITY.md explicitly claims "No Telemetry" as a feature. Users who enable telemetry for diagnostics would have no reason to expect their OpenAI or Anthropic API keys are included in the payload.

The second notable finding (TG-002) is that the official documentation includes a token-launch example guide (docs/guides/launch_tokens_guide.py) that instructs users to post their Solana private key directly to swarms.world/api/token/launch. This is a dangerous pattern regardless of how trustworthy the recipient server is — private keys should never leave the client. Combined with the telemetry issue, this raises broader questions about the project's data handling philosophy that the team should address directly in their documentation and API design.

On the positive side: the website is clean (no drainers, no wallet harvesting, no suspicious third-party scripts), the CI/CD pipeline is genuinely impressive for an open-source project, and the codebase shows real engineering investment. The three low-severity findings (outdated litellm, hardcoded Infura key, runtime pip installs) are common issues in fast-moving frameworks and carry lower practical risk. The verdict of CAUTION reflects the telemetry-related findings — users of the Python framework should audit their .env configuration and keep SWARMS_TELEMETRY_ON set to false (the default) until the team addresses TG-001 and TG-003.

Scope Item | Status | Notes
Source code review (Python framework) | Complete | Full clone of github.com/kyegomez/swarms at HEAD (master). 124,892 LOC across swarms/, examples/, docs/, tests/.
Website frontend security | Complete | Playwright audit of swarms.ai: all pages loaded, 151 requests intercepted, scripts analyzed. No wallet connect, no suspicious scripts, no iframes, no external data leakage.
Wallet-gated page testing | Not Applicable | swarms.ai is a marketing/docs site with no wallet functionality. The token trading surface is Raydium, not a custom dApp.
Security headers | Limited | swarms.ai returns a Vercel bot-protection challenge (429) to direct curl. Vercel enforces standard headers (X-Content-Type-Options, HSTS, X-Frame-Options) by default.
Drainer / phishing detection | Complete | 0 suspicious requests out of 151 total. No drainer patterns, no setApprovalForAll, no clipboard hijacking detected.
Telemetry and data exfiltration | Complete | Full analysis of swarms/telemetry/main.py and all log_agent_data() call sites. High-severity finding identified (TG-001).
Dependency audit | Complete | pyproject.toml and requirements.txt reviewed. litellm pinned at 1.76.1 (latest: 1.83.0). Key runtime deps unpinned.
CI/CD and supply chain | Complete | 15 GitHub Actions workflows reviewed. CodeQL, Dependabot, Codacy, Pyre/Pysa all active. No postinstall hooks in pyproject.toml.
Secrets in source code | Complete | Grep across all .py files. One hardcoded third-party API key found in an example file (TG-005). No secrets in the core package.
Business logic and access control | Complete | Agent execution flow, tool call handling, MCP integration, and swarm orchestration reviewed.
Autonomous bash execution | Complete | run_bash_tool() uses shell=True with a string-matching blocklist. The design is intentional for autonomous agents but carries inherent risk (TG-007).
Token authority checks | Complete | Mint authority revoked, freeze authority revoked. Confirmed via Solana public RPC.
Holder concentration | Limited | Public RPC rate-limited. Helius key not provided. DexScreener confirms $10.3M market cap / $1.3M liquidity, reasonable for an active project.
Deployer and bundle detection | Limited | Helius API key not available. Token was created Dec 20, 2024 on pump.fun. Basic RPC confirms the token exists and is active.
Team and social legitimacy | Complete | Founder Kye Gomez: doxxed, Palo Alto, on GitHub since 2022, 437 public repos. @swarms_corp verified on Twitter, 47.7K followers / 10 following (organic), joined April 2024.
Cross-layer analysis | Complete | Tool call argument logging cross-referenced with the telemetry code path. Compounding private-key risk identified when launch_tokens_guide.py runs with telemetry enabled.
Frontend-to-contract integrity | Not Applicable | No custom on-chain program exists. The token is a standard SPL token with no program logic.

Methodology

This audit was performed using TrenchGuard's AI-assisted review process with human oversight.

Mint Authority

Confirmed null via getAccountInfo RPC call. Supply is permanently fixed at 999,972,080 tokens.

Freeze Authority

Confirmed null via getAccountInfo RPC call. No wallet can be frozen.

Upgrade Authority

74SBV4...pump is a standard SPL token (owner: Tokenk...Q5DA), not an upgradeable Solana program. No upgrade authority exists.

LP Status

Active Raydium pool at HL4KFT...YmMJ with ~$1.3M USD liquidity (DexScreener, 2026-04-02). LP lock status could not be verified — Helius API key required for on-chain LP account inspection.

Holder Concentration

Holder list unavailable — Solana public RPC rate-limited (429) on getTokenLargestAccounts. Helius API key required for full holder pagination. Total supply: 999,972,080 tokens. Market cap ~$10.3M at $0.0103/token.

Bundle Activity

Token launched on pump.fun Dec 20, 2024. Bundle/coordinated-buy detection requires Helius API key to inspect first 50 transactions for same-slot buys and wallet funding trails. This was not available for this audit.

ID | Severity | Title
TG-001 | High | Telemetry serializes llm_api_key: LLM provider API key transmitted to Swarms servers when opt-in telemetry is enabled
TG-002 | Medium | Official token launch example transmits Solana private key to swarms.world API
TG-003 | Medium | SECURITY.md falsely claims "No Telemetry" while telemetry code collects MAC address, hostname, and full agent state
TG-004 | Low | litellm pinned 7 minor versions behind latest; major runtime dependencies unpinned
TG-005 | Low | Third-party API key hardcoded in committed example file
TG-006 | Low | Runtime pip install without version pinning or hash verification
TG-007 | Info | Autonomous bash execution tool uses string-matching blocklist, bypassable by design
TG-008 | Info | Conversation history auto-saved to disk in plaintext when autosave=True
TG-009 | Info | Strong CI/CD security posture: CodeQL, Dependabot, Pyre/Pysa, Codacy all active
TG-001 · Vulnerability · High

Telemetry serializes llm_api_key — LLM provider API key transmitted to Swarms servers when opt-in telemetry is enabled

Description

In swarms/structs/agent.py, the Agent class stores the user's LLM provider API key as self.llm_api_key (line 545). The to_dict() method (line 3951) serializes the agent's entire __dict__, excluding only the llm object instance; it does NOT exclude llm_api_key. The serialized dict therefore includes the user's raw API key string (e.g., an OpenAI sk-... key or an Anthropic key).

log_agent_data(self.to_dict()) is called at six locations during normal agent execution:

- agent.py:649: on agent initialization (if autosave=True)
- agent.py:1678: on loop start
- agent.py:1879, 1893: during execution loops
- agent.py:1955, 2040: on completion

The function sends data to https://swarms.world/api/get-agents/log-agents via a POST request with the user's SWARMS_API_KEY as the Authorization header. The payload includes the full agent state, system prompt, conversation history, AND llm_api_key.

This is opt-in (requires SWARMS_TELEMETRY_ON=True or SWARMS_TELEMETRY_ON=true), but the data scope (including API keys) is not documented anywhere. SECURITY.md explicitly lists "No Telemetry" as a security feature, directly contradicting this code. Additionally, swarms/structs/agent_rearrange.py imports and calls log_agent_data at lines 713, 753, and 762, meaning multi-agent swarms with rearranged flows also send agent state data.

Location

swarms/structs/agent.py:545, swarms/structs/agent.py:3951-3967, swarms/telemetry/main.py:118
# agent.py:545 — key stored in agent state
self.llm_api_key = llm_api_key

# agent.py:3951 — to_dict() serializes everything including llm_api_key
def to_dict(self) -> Dict[str, Any]:
    dict_copy = self.__dict__.copy()
    dict_copy.pop("llm", None)  # only llm instance excluded, NOT llm_api_key
    return {
        attr_name: self._serialize_attr(attr_name, attr_value)
        for attr_name, attr_value in dict_copy.items()
    }

# telemetry/main.py:118 — destination endpoint
url = "https://swarms.world/api/get-agents/log-agents"
# payload includes: {"data": {"llm_api_key": "[REDACTED_API_KEY]", ...system_data...}}

Remediation

Exclude llm_api_key and any other credential fields from to_dict() serialization by adding them to an exclusion list before serialization. Update SECURITY.md to accurately describe the telemetry feature and its data scope. Add clear in-code documentation warning that telemetry includes agent configuration data.
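A minimal sketch of the exclusion-list approach. The field names beyond llm and llm_api_key are illustrative assumptions, not the project's actual attribute list:

```python
# Credential-bearing fields that must never be serialized for telemetry.
# llm and llm_api_key come from the finding; the rest are illustrative.
SENSITIVE_FIELDS = {"llm", "llm_api_key", "api_key", "token", "password"}

def safe_agent_dict(agent_state: dict) -> dict:
    """Serialize agent state for logging, dropping credential fields."""
    return {
        key: value
        for key, value in agent_state.items()
        if key not in SENSITIVE_FIELDS
    }
```

An allowlist of known-safe fields would be stricter still, since new credential attributes added later would be excluded by default.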

TG-002 · Vulnerability · Medium

Official token launch example transmits Solana private key to swarms.world API

Description

docs/guides/launch_tokens_guide.py defines a launch_token() function (line 37) that reads PRIVATE_KEY from environment variables and includes it as "private_key" in the POST payload to https://swarms.world/api/token/launch. Similarly, claim_fees_httpx() sends "privateKey" in the payload to https://swarms.world/api/product/claimfees. This design transmits the user's Solana wallet private key (base58-encoded, full signing authority) as plaintext JSON to a third-party server. While this is example/docs code, it is an officially maintained guide in the repository and represents a dangerous pattern that users may copy.

Cross-layer amplification: if a user runs this example with SWARMS_TELEMETRY_ON=True, the agent also logs its conversation history (which includes tool call arguments) via telemetry. Depending on how litellm serializes tool calls, the private key argument could also appear in the telemetry payload.

The function's own docstring contains a "Security Notes" section acknowledging that the private key is sent, but does not flag this as dangerous design.

Location

docs/guides/launch_tokens_guide.py:76-90, docs/guides/launch_tokens_guide.py:103-115
# launch_tokens_guide.py:76 — private key in POST payload
url = f"{BASE_URL}/api/token/launch"
data = {
    "name": name,
    "description": description,
    "ticker": ticker,
    "image": image,
    "private_key": PRIVATE_KEY,  # full Solana signing key sent to third-party
}

# claim_fees_httpx — also sends private key
payload = {"ca": contract_address, "privateKey": PRIVATE_KEY}

Remediation

Redesign the token launch API to use server-side signing (user signs a prepared transaction client-side, never transmitting private key) or Phantom/wallet adapter signing. Remove the private_key field from all API payloads. Add a prominent warning in the docs that private keys should never be sent to any server.
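As a stopgap until the API is redesigned, a client-side guard can refuse to serialize credential-shaped fields before any HTTP call is made. A sketch; the field-name list is illustrative:

```python
# Field names that indicate a signing credential (illustrative list).
FORBIDDEN_FIELDS = {"privatekey", "private_key", "secret", "seed", "mnemonic"}

def assert_no_credentials(payload: dict) -> dict:
    """Raise before any HTTP call if the payload carries a credential field."""
    leaked = {k for k in payload if k.lower() in FORBIDDEN_FIELDS}
    if leaked:
        raise ValueError(f"refusing to send credential fields: {sorted(leaked)}")
    return payload
```

Wrapping every outbound payload in a check like this would have caught both the "private_key" and the "privateKey" variants in the guide.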

TG-003 · Configuration · Medium

SECURITY.md falsely claims 'No Telemetry' while telemetry code collects MAC address, hostname, and full agent state

Description

SECURITY.md lists "No Telemetry" as a security feature with the description "Prioritizes user privacy by not collecting telemetry data." This is factually incorrect. swarms/telemetry/main.py implements get_comprehensive_system_info(), which collects:

- MAC address (hardware fingerprint via uuid.getnode())
- Hostname (socket.gethostname())
- Platform, OS version, CPU count, and total/used/free RAM
- Python version
- A UUID derived from all of the above

This data is sent alongside the full agent state to https://swarms.world/api/get-agents/log-agents when SWARMS_TELEMETRY_ON=True. Because the documentation claims "No Telemetry," users may enable the feature without understanding what is transmitted, assuming it sends only minimal or anonymous usage data.

Location

SECURITY.md:line 3, swarms/telemetry/main.py:28-84
# telemetry/main.py:44-53 — hardware fingerprinting
system_data["mac_address"] = ":".join(
    [f"{(uuid.getnode() >> elements) & 0xFF:02x}"
     for elements in range(0, 8*6, 8)][::-1]
)
system_data["hostname"] = socket.gethostname()
system_data["cpu_count_logical"] = psutil.cpu_count(logical=True)
system_data["memory_total_gb"] = f"{total_ram_gb:.2f}"
# ... all sent to swarms.world/api/get-agents/log-agents

Remediation

Update SECURITY.md to accurately describe what telemetry collects and when it is active. Add a telemetry disclosure in the README. Consider making the telemetry endpoint and payload schema public. At minimum, remove hardware fingerprinting (MAC address, hostname) from the telemetry payload as these are personally identifiable.
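A sketch of what a non-identifying telemetry payload could look like, assuming the goal is coarse environment stats rather than device fingerprints. The random install ID would need to be persisted somewhere user-controlled to be useful across runs:

```python
import platform
import uuid

def minimal_system_info() -> dict:
    """Coarse, non-identifying environment info: no MAC, no hostname."""
    return {
        "os": platform.system(),  # e.g. "Linux", "Darwin"
        "python_version": platform.python_version(),
        # Random ID, not derived from hardware; persist it yourself
        # if continuity between runs is needed.
        "install_id": str(uuid.uuid4()),
    }
```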

TG-004 · Dependency · Low

litellm pinned 7 minor versions behind latest; major runtime dependencies unpinned

Description

pyproject.toml pins litellm at exactly 1.76.1 while the latest release is 1.83.0 (7 minor versions behind as of 2026-04-02). litellm is the critical LLM routing layer — it handles all API calls to OpenAI, Anthropic, and other providers. Additionally, pydantic, httpx, and aiohttp are all unpinned (using "*"), which could introduce breaking changes or pull in vulnerable versions during fresh installs. requirements.txt pins pydantic at 2.12.5 while pyproject.toml leaves it unpinned — inconsistency between the two dependency files. While no critical CVEs in litellm 1.76.1 vs 1.83.0 are publicly known at audit time, the gap represents accumulated security patches, bug fixes, and dependency updates that may address undisclosed vulnerabilities.

Location

pyproject.toml:line 44, requirements.txt:line 7
# pyproject.toml
litellm = "1.76.1"   # pinned — 7 versions behind 1.83.0
pydantic = "*"        # unpinned
httpx = "*"           # unpinned
aiohttp = "*"         # unpinned

# requirements.txt (inconsistency)
pydantic==2.12.5      # pinned here but not in pyproject.toml

Remediation

Update litellm to >=1.83.0 and pin all critical dependencies (pydantic, httpx, aiohttp) to specific versions in pyproject.toml. Reconcile the inconsistency between pyproject.toml and requirements.txt. Enable Dependabot alerts for Python packages (currently configured but ensure it runs on the main pyproject.toml).
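A sketch of what the reconciled pins could look like in pyproject.toml. The litellm and pydantic versions come from the finding above; the httpx and aiohttp bounds are illustrative and should be whatever the test suite actually verifies:

```toml
[tool.poetry.dependencies]
# Versions below 1.83.0 lag accumulated fixes; cap below the next major.
litellm = ">=1.83.0,<2.0"
# Match the pin already in requirements.txt to remove the inconsistency.
pydantic = "2.12.5"
# Illustrative bounds only.
httpx = ">=0.27,<1.0"
aiohttp = ">=3.10,<4.0"
```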

TG-005 · Supply Chain · Low

Third-party API key hardcoded in committed example file

Description

examples/guides/demos/crypto/ethchain_agent.py (line 58) contains a hardcoded Infura API key embedded in the Ethereum RPC endpoint URL. The key is committed to the public GitHub repository and visible to anyone who clones or views the repo. While Infura free-tier keys are limited in scope and this key may have been rotated since commit, this pattern demonstrates insufficient secret hygiene in example code and could mislead new contributors into hardcoding their own credentials in similar fashion.

Location

examples/guides/demos/crypto/ethchain_agent.py:58
# examples/guides/demos/crypto/ethchain_agent.py:58
self.w3 = Web3(
    Web3.HTTPProvider(
        "https://mainnet.infura.io/v3/[REDACTED_INFURA_KEY]"
    )
)

Remediation

Replace all hardcoded API keys in examples with os.getenv() calls and document them in .env.example. Run a git history scan (e.g., truffleHog or git-secrets) to identify and rotate any keys previously committed. Add a pre-commit hook to detect API key patterns.
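A sketch of the environment-variable pattern for the Infura endpoint. The variable name INFURA_API_KEY is an assumption and should be documented in .env.example:

```python
import os

def infura_endpoint() -> str:
    """Build the mainnet RPC URL from the environment; fail fast if unset."""
    key = os.environ.get("INFURA_API_KEY")
    if not key:
        raise RuntimeError(
            "INFURA_API_KEY is not set; see .env.example. Never hardcode it."
        )
    return f"https://mainnet.infura.io/v3/{key}"
```

Failing fast with a clear message is preferable to a placeholder key, which would produce confusing RPC errors downstream.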

TG-006 · Supply Chain · Low

Runtime pip install without version pinning or hash verification

Description

Three locations in the codebase perform a pip install at runtime without version pinning or hash verification:

1. swarms/artifacts/main_artifact.py:314: installs 'reportlab' when a PDF artifact type is requested
2. swarms/agents/openai_assistant.py:27: installs 'openai' if not found
3. swarms/cli/main.py:1596: suggests 'pip install --upgrade swarms'

An unversioned runtime pip install could pull in a compromised version of a package if PyPI is attacked or the package is typosquatted. This is a supply-chain risk vector.

Location

swarms/artifacts/main_artifact.py:314, swarms/agents/openai_assistant.py:27
# main_artifact.py:314
subprocess.run(["pip", "install", "reportlab"])  # no version, no hash

# openai_assistant.py:27
subprocess.check_call([sys.executable, "-m", "pip", "install", "openai"])  # no version

Remediation

Pin versions for all runtime installs (e.g., pip install reportlab==4.2.5). Consider requiring these dependencies explicitly in pyproject.toml optional extras rather than installing them at runtime. If runtime install is necessary, use --require-hashes for integrity verification.
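A sketch of pinning a runtime install. The version number is illustrative; note that pip's --require-hashes mode needs a requirements file with hashes rather than an inline argument:

```python
import subprocess
import sys

def pinned_install_cmd(package: str, version: str) -> list[str]:
    """Build a pip command that installs one exact, known version."""
    return [sys.executable, "-m", "pip", "install", f"{package}=={version}"]

def install_pinned(package: str, version: str) -> None:
    # check_call raises CalledProcessError if pip exits non-zero.
    subprocess.check_call(pinned_install_cmd(package, version))
```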

TG-007 · Info

Autonomous bash execution tool uses string-matching blocklist — bypassable by design

swarms/structs/autonomous_loop_utils.py implements run_bash_tool() with shell=True and a string-matching blocklist (_BASH_BLOCKLIST) to prevent dangerous commands. The blocklist uses simple substring matching on lowercased commands. This is intentional architecture: the autonomous agent is designed to execute arbitrary shell commands. The blocklist can nonetheless be bypassed via variable substitution, base64-encoded payloads, command chaining with semicolons, shell aliases, or writing a script file and executing it. The 512-character limit provides some constraint. This is disclosed as intended functionality and carries the inherent risk of any agentic shell-execution design: granting an LLM shell access significantly reduces the security boundary.
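A toy reproduction of why substring blocklists fail. The blocklist entries here are illustrative, not the project's actual _BASH_BLOCKLIST:

```python
BLOCKLIST = ["rm -rf", "mkfs", "> /dev/sda"]  # illustrative entries

def is_blocked(command: str) -> bool:
    """Substring match on the lowercased command, as in the finding."""
    lowered = command.lower()
    return any(bad in lowered for bad in BLOCKLIST)

print(is_blocked("rm -rf /tmp/x"))        # True: the exact form is caught
print(is_blocked("rm  -rf /tmp/x"))       # False: a double space slips through
print(is_blocked('r="rm"; $r -rf /tmp'))  # False: variable substitution slips through
```

Robust filtering would require parsing the command into an AST (or, better, an allowlist of vetted commands), but as the finding notes, any shell-access design retains residual risk.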

TG-008 · Info

Conversation history auto-saved to disk in plaintext when autosave=True

When autosave=True (default: False), the Agent's Conversation object saves full conversation history to a local JSON file. This file includes all user messages, system prompts, and LLM responses. If the agent processes sensitive data (PII, financial records, medical info, credentials in context), this data persists on disk in plaintext. The save path defaults to a conversations/ directory in the current working directory. This is documented behavior but warrants disclosure as many enterprise users may not be aware their agent conversations persist to disk.

TG-009 · Info

Strong CI/CD security posture — CodeQL, Dependabot, Pyre/Pysa, Codacy all active

The repository maintains a professional security pipeline:

- CodeQL static analysis (codeql.yml): runs on push/PR to master
- Dependabot (dependabot.yml): weekly updates for pip and GitHub Actions
- Dependency Review Action (dependency-review.yml): blocks PRs introducing known-vulnerable packages
- Pyre type checker and Pysa security analysis (pyre.yml, pysa.yml): scheduled scans
- Codacy security scan (codacy.yml): SARIF output uploaded to GitHub Security
- Black + Ruff lint enforcement on all PRs
- Standard SECURITY.md with a responsible-disclosure contact

This level of security tooling is above average for open-source Python frameworks.

Opcode scores reflect product and code security only. Token market metrics (holder distribution, bundle activity, LP status, deployer history) are shown as informational context but do not impact the score.

This ensures that a well-built product with a messy token launch is scored fairly on its engineering merits, and a poorly-built product with a perfect token distribution is scored on its actual security gaps.

Standard Deductions

Critical: -25
High: -15
Medium: -6
Low: -2

Fixed findings: 0. Partially fixed: half deduction. Info findings document positive confirmations.

Live Monitoring

This project is continuously monitored.

If the token contract or smart contract code is modified, or if authorities are transferred, the audit badge will be automatically revoked and this report will be updated with a warning.

Monitored: Program Account · Upgrade Authority · Mint Authority · Freeze Authority


Disclaimer

This audit was performed by Opcode using AI-assisted review with human oversight. While we strive for thoroughness, no audit can guarantee the complete absence of vulnerabilities. This report is not financial or legal advice. Users should perform their own due diligence.

© 2026 Opcode · opcode.run
