Best practices for MCP secrets management
The new blast radius
Large-language models (LLMs) used to be fairly self-contained—feed in a prompt, get back a completion, call it a day. The Model Context Protocol (MCP) changes the game by letting models hit live APIs, query fresh data, and even trigger CI/CD pipelines.
Cool? Absolutely. Risky? Also absolutely.
Every outbound call from your MCP server carries credentials—API keys, database passwords, OAuth tokens...and more.
If those secrets leak, the blast radius extends far beyond your LLM demo.
This guide reviews the challenges of MCP server security and lays out concrete, engineer-friendly tactics for keeping sensitive data under wraps.
What an MCP server actually does
Think of an MCP server as a multilingual router:
- It takes a natural-language request from the model—“Give me the last 50 GitHub issues”—and maps it to a structured tool invocation.
- It executes that call against an external service, authenticating with some secret.
- It packages the response back into something the LLM understands.
Because the server sits in the middle, it often needs privileged access to production systems. That makes it a prime target.
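In code, that routing role can be sketched roughly like this (the tool name, handler registry, and `getSecret` callback are illustrative, not part of the MCP spec):

```typescript
// Hypothetical sketch of the MCP server's routing role: map a tool call to an
// external request, attaching the secret only at the network boundary.
type ToolCall = { tool: string; args: Record<string, string> };

// Registry of tool handlers; each receives the secret at call time only.
const handlers: Record<string, (args: Record<string, string>, token: string) => string> = {
  list_issues: (args, token) =>
    // A real server would make an authenticated HTTPS request here.
    `GET /repos/${args.repo}/issues?per_page=${args.count} (auth: ${token.slice(0, 4)}...)`,
};

function route(call: ToolCall, getSecret: (tool: string) => string): string {
  const handler = handlers[call.tool];
  if (!handler) throw new Error(`Unknown tool: ${call.tool}`);
  // The secret is fetched per call and never stored on the call object, so the
  // model-facing response contains no raw credential.
  return handler(call.args, getSecret(call.tool));
}
```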
Why secrets go missing — common anti-patterns
As we wrote about in Best practices for secrets management, there are a number of common anti-patterns to avoid:
- Hard-coding secrets in source files or committing them to version control.
- Sharing one long-lived credential across tools, environments, or teammates.
- Granting broad scopes "to be safe" instead of the minimum a tool needs.
- Never rotating credentials, so a single leak stays exploitable for months.
- Logging requests or prompts that contain raw tokens.
Six best practices for MCP secrets management
Treat every secret like a live grenade—handle it sparingly, store it safely, and rotate it often.
1 Eliminate hard-coding
Pull secrets from environment variables or a dedicated secrets manager—never from source control.
```bash
# .env (never committed to version control and added to .gitignore)
GITHUB_TOKEN=ghp_************************
```
Reference them via environment variables in your implementation:
```typescript
// mcp-server/src/clients/github.ts
const token = process.env.GITHUB_TOKEN;
```
Heads-up: env vars are fine for 12-factor apps, but on Kubernetes or Nomad anyone with exec access can run env to read them. For high-sensitivity creds consider tmpfs mounts, sealed-secrets, or an injector sidecar that feeds secrets over a UNIX socket.
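One small habit that helps here: validate required variables at startup, so a missing secret fails fast instead of surfacing as a confusing auth error mid-request. A minimal sketch (`requireEnv` is a hypothetical helper, and the variable name is illustrative):

```typescript
// Fail fast at boot if a required secret is missing, rather than letting the
// first outbound call die with an opaque 401 minutes later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// Called once during server startup:
// const githubToken = requireEnv("GITHUB_TOKEN");
```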
2 Prefer dynamic, short-lived credentials
Tools such as HashiCorp Vault (via its dynamic secrets engines), the open-source aws-vault CLI, or AWS STS can mint time-boxed credentials on demand. If compromised, they expire quickly.
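The pattern is easiest to see as a small credential holder that re-mints shortly before expiry; the `mint` function below is a stand-in for a real Vault or STS call, and the names are illustrative:

```typescript
// Sketch of a short-lived credential holder: credentials come from a backend
// (e.g. Vault or STS; `mint` is a stand-in) and are renewed before expiry,
// so a leaked value ages out fast.
type Lease = { secret: string; expiresAt: number }; // epoch millis

class ShortLivedCredential {
  private lease: Lease | null = null;
  constructor(
    private mint: () => Lease,      // calls the secrets backend
    private renewMarginMs = 30_000, // renew 30s before expiry
  ) {}

  get(now: number = Date.now()): string {
    if (!this.lease || now >= this.lease.expiresAt - this.renewMarginMs) {
      this.lease = this.mint(); // the old lease is simply dropped
    }
    return this.lease.secret;
  }
}
```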
3 Apply least privilege with RBAC
Each tool integration should have its own role with the minimum scope required. Products like WorkOS AuthKit provide fine-grained authorization overlays that plug directly into MCP servers and Cloudflare Workers.
```hcl
# Vault policy example
path "kv/data/github" {
  capabilities = ["read"]
}
```
4 Enforce end-to-end encryption
TLS is table stakes. You can go further and enable mutual TLS between the MCP server and your secrets backend to block man-in-the-middle attacks.
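As a sketch, assuming Node.js on the client side, the mutual-TLS options for the hop to the secrets backend might look like this (the certificate material would come from your internal CA; the `Buffer` values here are stand-ins):

```typescript
import type { AgentOptions } from "node:https";

// Mutual-TLS client options for connections to the secrets backend.
function mtlsOptions(cert: Buffer, key: Buffer, ca: Buffer): AgentOptions {
  return {
    cert,                     // client certificate we present to the backend
    key,                      // its private key
    ca,                       // the CA that signed the backend's server cert
    rejectUnauthorized: true, // always verify the server; never disable this
    minVersion: "TLSv1.2",    // refuse legacy protocol versions
  };
}
```

The returned options would be passed to `new https.Agent(...)` for the client used to talk to the backend.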
5 Rotate and revoke automatically
Schedule rotation jobs or use vendor-supplied auto-rotation. AWS Secrets Manager, for instance, can rotate an RDS password via Lambda on a cron schedule.
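Whatever drives the schedule, the core step is the same: mint, publish, then revoke, in that order, so clients never hit a window with no valid secret. A backend-agnostic sketch (`SecretStore` and `Backend` are hypothetical interfaces):

```typescript
// Rotate-then-revoke: the heart of any rotation job, whether driven by cron,
// a Lambda, or a CI schedule.
interface SecretStore { read(name: string): string; write(name: string, v: string): void; }
interface Backend { mint(): string; revoke(secret: string): void; }

function rotate(name: string, store: SecretStore, backend: Backend): void {
  const old = store.read(name);
  const fresh = backend.mint();
  store.write(name, fresh); // publish the new secret first...
  backend.revoke(old);      // ...then kill the old one, avoiding an outage window
}
```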
6 Log, alert, repeat
Pipe audit logs to a SIEM so that “unauthorized read at 02:13 UTC” triggers a page—not a post-mortem.
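The rule itself is simple enough to sketch: every access becomes a structured event, everything ships to the SIEM, and unauthorized reads page immediately (the field names here are illustrative):

```typescript
// Every secret access is recorded; unauthorized ones also page on-call.
type AuditEvent = { actor: string; path: string; authorized: boolean; at: string };

function processEvent(
  event: AuditEvent,
  ship: (e: AuditEvent) => void, // forwards to the SIEM
  page: (msg: string) => void,   // fires the on-call alert
): void {
  ship(event); // everything lands in the SIEM, authorized or not
  if (!event.authorized) {
    page(`Unauthorized read of ${event.path} by ${event.actor} at ${event.at}`);
  }
}
```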
Choosing the right tool for the job
Dedicated secrets managers such as HashiCorp Vault and AWS Secrets Manager integrate cleanly with MCP servers via REST, gRPC, or language SDKs, and WorkOS layers identity controls on top of whichever backend you choose.
Implementing secrets management in an MCP workflow
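Putting the pieces together: the handler mints a short-lived token per call, uses it in memory only, and scrubs it from anything returned to the model. A minimal sketch, with a stubbed `SecretsBackend` standing in for a real Vault client:

```typescript
// Per-call flow: mint, use in memory, scrub before the model sees anything.
type SecretsBackend = { issueToken: (role: string) => string }; // e.g. a Vault client

async function handleToolCall(
  userRole: string,
  backend: SecretsBackend,
  callGitHub: (token: string) => Promise<string>,
): Promise<string> {
  // 1. Mint a short-lived token; the backend records an audit entry.
  const token = backend.issueToken(userRole);
  // 2. Use it in memory only: never written to disk, env, or logs.
  const body = await callGitHub(token);
  // 3. Scrub any echo of the credential before the response reaches the LLM.
  return body.split(token).join("[redacted]");
}
```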

Why this matters
- Tokens never persist to disk (and logs/core dumps are scrubbed).
- The LLM never sees the raw credential.
- Vault issues the token and records an audit entry.
- WorkOS can gate which tools an agent may invoke based on the user’s role, adding a second line of defence.
Security principles baked into the MCP spec
- Explicit user consent — every tool invocation must be approved by the end user.
- Minimum necessary data — hosts should shield user data unless explicitly permitted.
- Tool safety — each tool is treated as arbitrary code; the spec mandates human approval before execution.
- Sampling transparency — users decide which parts of the prompt/response the server can inspect.
Your secrets strategy should dovetail with these principles: grant only the scope required, log every access, and surface any deviation for human review.
Beyond the happy path — threat-model like an attacker
- Assume breach — design as if an attacker already has network access. Can they walk from the MCP host to your secrets store?
- Beware prompt injection — a malicious prompt could trick the server into calling dangerous tools. Validate operations, not just credentials.
- Lock down filesystem mounts — scope directory paths as narrowly as possible (prefer /data/reports/*.csv over /data/*, and /data/* over /) and run integration tests that attempt to exfiltrate /etc/passwd; fail the build if they succeed.
- Plan incident response — know how to revoke all active secrets in minutes, not hours.
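The filesystem-scoping rule above can be reduced to an allowlist check that resolves paths before comparing them, so `../` tricks collapse before the test (the /data/reports scope is illustrative):

```typescript
import path from "node:path";

// Allow a request only if, after resolution, it stays inside the allowed
// directory and matches the allowed extension.
function isAllowedPath(requested: string, allowedDir: string, allowedExt: string): boolean {
  const resolved = path.resolve(requested);        // collapses ../ segments
  const base = path.resolve(allowedDir) + path.sep;
  return resolved.startsWith(base) && resolved.endsWith(allowedExt);
}
```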
Final thoughts: small habits, huge payoffs
Securing an MCP server isn’t dark magic.
It’s engineering hygiene: no hard-coding, least privilege, dynamic creds, aggressive rotation, airtight filesystem scopes, and ruthless logging—bolstered by identity layers like WorkOS AuthKit that enforce consent and RBAC.
The payoff is massive: you preserve user trust, keep auditors happy, and sleep better knowing your LLM won’t accidentally DROP your production database—or leak /var/log/auth.log—on livestream.
Lock down your secrets today, and your future self (and your security team) will thank you tomorrow.