Smithery AI: A central hub for MCP servers
Smithery AI is a registry and management platform for Model Context Protocol (MCP) servers.
With MCP, large language model (LLM) clients can “mount” external servers to gain new tools, functions, or APIs. Smithery acts as an index that helps you discover, install, and manage MCP servers.

These servers can run in two modes:
- Hosted/Remote – Deployed on Smithery’s infrastructure and accessed via the web.
- Local – Installed and run on your machine via the Smithery CLI.
Local MCP workflow
When you install an MCP server locally via the Smithery CLI, you’re essentially pulling the code or container and running it on your own system.
Step 1. Select a server
In the Smithery dashboard, pick the MCP server you want (e.g., GitHub, Playwright, etc.).
Step 2. Generate the CLI command
Smithery provides a command (e.g., smithery install ...) that references the server’s repository and version.
You also supply any required tokens, though for security it is strongly recommended that you inject those tokens only locally.
Step 3. Run locally
The server spins up on your machine, reading from a configuration file or environment variables (which may include your auth tokens).
Smithery tracks only the fact that you installed a server (for usage stats), not the token itself (per their Data Policy).
Step 4. Mount in your LLM client
The new server is listed in your local environment, and your LLM sees it as an available MCP endpoint.
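To make the mounting step concrete, here is a minimal sketch of what a client-side server registration could look like. The field names and the `@smithery/cli run` invocation are illustrative assumptions, not Smithery's exact schema; the important point is that the secret is injected from your local environment, never uploaded.

```typescript
// Hypothetical shape of a client-side MCP server registration.
// Field names and the CLI invocation below are illustrative assumptions.
interface McpServerEntry {
  command: string              // executable that launches the server
  args: string[]               // arguments passed to it
  env?: Record<string, string> // secrets injected locally, never uploaded
}

const mcpServers: Record<string, McpServerEntry> = {
  github: {
    command: "npx",
    args: ["-y", "@smithery/cli", "run", "@smithery-ai/github"],
    // The token is read from the local environment at launch time.
    env: { GITHUB_PERSONAL_ACCESS_TOKEN: process.env.MY_GITHUB_PAT ?? "" },
  },
}

console.log(Object.keys(mcpServers))
```

Your LLM client reads an entry like this, launches the command, and speaks MCP to it over the resulting process.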
Example local CLI command
smithery install \
  --server=github.com/smithery-ai/mcp-github \
  --token=$MY_GITHUB_PAT
Where MY_GITHUB_PAT is your GitHub Personal Access Token.
Security Note: According to Smithery’s data policy, config arguments are “ephemeral” and not stored on their servers.
Still, the best practice is never to paste your real token into an untrusted form—store it in local environment variables instead.
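The local-first pattern described above can be sketched in a few lines: read the secret from an environment variable, fail loudly if it is missing, and redact it before it can reach any log. The helper names here are illustrative.

```typescript
// Read a secret from the environment; fail fast if it is not set.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) throw new Error(`Missing required environment variable: ${name}`)
  return value
}

// Show only enough of a token to recognize it in debug output.
function redact(token: string): string {
  if (token.length <= 8) return "***"
  return `${token.slice(0, 4)}…${token.slice(-4)}`
}

// Demo fallback only, so the example runs without a real token set.
process.env.MY_GITHUB_PAT = process.env.MY_GITHUB_PAT ?? "ghp_example1234567890"
const pat = requireEnv("MY_GITHUB_PAT")
console.log("Using token:", redact(pat))
```

Redacting at the logging boundary means the raw token never appears in console output, crash reports, or shared debug sessions.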
Remote (Hosted) MCP workflow
Some MCP servers are hosted on Smithery (you’ll see a “Remote” or “Hosted” label). In that scenario, Smithery handles running the MCP server in their infrastructure.
Hosted MCP
The code is running on Smithery’s machines, so your LLM can connect to that endpoint in real time.
Configuration
You typically pass your config (including tokens) to the hosted server through Smithery’s interface or their “ephemeral” config mechanism.
Smithery states these tokens are not stored long-term on their servers.
Tool calls & analytics
Smithery logs usage counts for hosted MCP servers (e.g., how many times a given tool is called). The content of your requests should remain ephemeral per the policy, but always verify against each server’s own documentation.
Access tokens & configuration: Where do tokens go?
Local Installs
Tokens stay on your machine; you inject them at install time or via environment variables.
Hosted MCPs
Tokens may be passed to Smithery’s backend so it can set up the server instance. The docs say this config data is not retained. Still, the exact token handling or refresh process is unclear (they don’t mention any automated refresh mechanism).
Example TypeScript SDK usage
For a hosted MCP server, you can use the official Model Context Protocol TypeScript SDK locally to interact with the exposed functionality:
import { createTransport } from "@smithery/sdk/transport.js"
import { Client } from "@modelcontextprotocol/sdk/client/index.js"

const transport = createTransport(
  "https://server.smithery.ai/@smithery-ai/github",
  { githubPersonalAccessToken: "YOUR_GITHUB_PAT" },
  "YOUR_SMITHERY_API_KEY"
)

const client = new Client({ name: "Test client", version: "1.0.0" })
await client.connect(transport)

const { tools } = await client.listTools()
console.log("Available tools:", tools.map(t => t.name))

// Example call:
// const result = await client.callTool({ name: "createIssue", arguments: { title: "Bug Report", body: "Something broke" } })
Here, you provide two tokens:
1. GitHub Personal Access Token (for GitHub calls).
2. Smithery API Key (to authenticate to Smithery’s service itself).
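Under the hood, a config object like the one above typically travels to a hosted endpoint as serialized data on the connection URL. The sketch below shows one common pattern: JSON-serialize the config and base64-encode it into a query parameter. Smithery's SDK handles this for you via createTransport; the exact parameter names here ("config", "api_key") are assumptions for illustration.

```typescript
// Sketch: encode a config object onto a hosted MCP endpoint URL.
// Parameter names ("config", "api_key") are illustrative assumptions.
function buildEndpoint(
  base: string,
  config: Record<string, unknown>,
  apiKey: string
): string {
  const encoded = Buffer.from(JSON.stringify(config)).toString("base64")
  const url = new URL(base)
  url.searchParams.set("config", encoded)
  url.searchParams.set("api_key", apiKey)
  return url.toString()
}

const endpoint = buildEndpoint(
  "https://server.smithery.ai/@smithery-ai/github",
  { githubPersonalAccessToken: "YOUR_GITHUB_PAT" },
  "YOUR_SMITHERY_API_KEY"
)
console.log(endpoint)
```

Because the token rides on the URL in this scheme, it is visible to whoever terminates the connection, which is exactly why the ephemeral-config promise in the next section matters.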
Data policy considerations
Ephemeral Config
Smithery emphasizes that when you provide configuration data (like tokens), they don’t store it long-term. For hosted MCP usage, the server’s code needs the token to make calls on your behalf.
Local vs. Hosted
- Local: Minimal data footprint on Smithery’s side (mostly just usage stats).
- Hosted: The server runs in Smithery’s environment, so they see call metadata. However, they claim to discard the actual tokens and request details.
Missing clarity
Official docs don’t detail how tokens are “passed down” or if there’s a refresh workflow. The recommended approach is to handle secrets locally and only pass them to Smithery’s system if absolutely necessary.
Final thoughts
Smithery AI provides a straightforward way to discover and manage a wide range of MCP servers:
- Local MCP: You have full control; tokens remain on your machine.
- Hosted MCP: Smithery manages the server; tokens are passed ephemerally.
- Security: Use environment variables for tokens, avoid untrusted fields, and consult each server’s policy to confirm how it handles data.
Adopting a local-first approach with environment variables and verifying each hosted MCP’s security posture is the best way to keep your tokens safe while leveraging the extended capabilities Smithery AI offers.