MCP Apps are here: Rendering interactive UIs in AI clients
What MCP Apps are and why they're going to change how you build apps on Claude and ChatGPT.
On January 26, 2026, the first official extension to the Model Context Protocol was announced: MCP Apps, which fundamentally changes how users interact with external tools through AI assistants. Instead of relying on text-based responses, MCP Apps enables third-party applications to render interactive UI components directly inside AI chat windows.
This extends MCP from pure data and action capabilities into the UI layer, enabling developers to build richer interactions across any MCP-compatible client: Claude, ChatGPT, Goose, VS Code, and more.
What are MCP Apps?
The Model Context Protocol, which Anthropic open-sourced in fall 2024 and donated to the Agentic AI Foundation in December, established a standard way for AI assistants to connect with external data sources and tools. Until now, these integrations were limited to programmatic actions and text-based outputs. You could ask Claude to fetch data from Asana or query your analytics dashboard, but the interaction remained conversational.
MCP Apps breaks this constraint. Tools can now return rich, interactive interfaces that render in sandboxed iframes within the chat experience. Users can manipulate dashboards, edit designs, compose formatted messages, and interact with live data, all without leaving the conversation or switching application contexts.
The technical architecture centers on UI resources that MCP servers declare and hosts render. When a tool needs to present an interface, it returns HTML and JavaScript that Claude's client renders in a controlled environment. All communication between the UI and the host happens through auditable JSON-RPC messages, maintaining a clear security boundary.
How it works
The architecture relies on two key MCP primitives:
1. Tools with UI metadata: Tools include a _meta.ui.resourceUri field pointing to a UI resource.
2. UI Resources: Server-side resources served via the ui:// scheme containing bundled HTML/JavaScript.
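A minimal sketch of how these two primitives fit together, using the field names described above (`_meta.ui.resourceUri`, the `ui://` scheme); the tool name, resource contents, and exact SDK types here are illustrative assumptions, not the official SDK shapes:

```typescript
// 1. A tool whose metadata points at a UI resource.
// "show_dashboard" is a hypothetical tool name for illustration.
const tool = {
  name: "show_dashboard",
  description: "Render an interactive metrics dashboard",
  _meta: {
    ui: { resourceUri: "ui://dashboard/main" },
  },
};

// 2. The UI resource itself: bundled HTML/JavaScript served under ui://.
const uiResource = {
  uri: "ui://dashboard/main",
  mimeType: "text/html",
  text: '<!doctype html><html><body><div id="root"></div></body></html>',
};

// A host matches the tool's declared URI against the resources it fetches.
function resolveUiResource(
  t: typeof tool,
  resources: Array<typeof uiResource>,
) {
  return resources.find((r) => r.uri === t._meta.ui.resourceUri);
}

console.log(resolveUiResource(tool, [uiResource])?.uri); // ui://dashboard/main
```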
The host fetches the resource, renders it in a sandboxed iframe, and enables bidirectional communication via JSON-RPC over postMessage. All UI-to-host communication flows through this auditable channel.
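The auditable channel can be sketched as JSON-RPC 2.0 envelopes passed through postMessage; the method name and params below are illustrative, not taken from the specification:

```typescript
// Shape of a JSON-RPC 2.0 request crossing the iframe boundary.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

let nextId = 1;

// Build a request the iframe would post to its parent window.
function makeRequest(
  method: string,
  params?: Record<string, unknown>,
): JsonRpcRequest {
  return { jsonrpc: "2.0", id: nextId++, method, params };
}

// Inside the iframe, the call would look roughly like:
//   window.parent.postMessage(makeRequest("tools/call", { name: "refresh" }), "*");
// Because every message is a plain JSON-RPC envelope, the host can log
// each one before dispatching it, which is what makes the channel auditable.

const req = makeRequest("tools/call", { name: "refresh" });
console.log(JSON.stringify(req));
```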
Real-world scenarios
The immediate implications span enterprise workflows that traditionally require constant context-switching:
- Collaborative design and prototyping: A product team can discuss feature requirements in Claude while simultaneously creating and iterating on Figma diagrams. The canvas updates in real-time as team members provide feedback, eliminating the copy-paste cycle of moving design URLs between tools.
- Data exploration and decision-making: Sales teams can query ChatGPT about pipeline metrics and receive interactive dashboards from Amplitude or Hex. Instead of static charts, they can filter by region, drill into specific accounts, and export custom reports, all within the conversation that prompted the analysis.
- Project coordination: When discussing sprint planning, Goose can surface an interactive Asana timeline. Team members adjust due dates, reassign tasks, and update dependencies without opening a separate project management tool. The conversation context remains intact while the tactical work happens inline.
- Incident response: Security operations teams drafting incident notifications can compose and preview formatted Slack messages directly in ChatGPT. They see exactly how the message will appear, adjust formatting, and send, all while maintaining the thread of incident documentation they're building simultaneously.
The pattern is consistent: MCP Apps eliminates the cognitive overhead of moving between tools while trying to maintain mental context about what you're trying to accomplish.
Security model
Running third-party UI code inside an AI assistant raises obvious security concerns. Anthropic and the MCP Apps specification address this through multiple defensive layers:
- Iframe sandboxing: All UI content runs in sandboxed iframes with restricted permissions. The iframe cannot access the parent window or make arbitrary network requests.
- Pre-declared templates: Hosts can review HTML content before rendering. This prevents dynamic code injection where a malicious server might try to serve different content than initially approved.
- Auditable messages: All UI-to-host communication goes through loggable JSON-RPC. There's no backchannel. Security teams can inspect exactly what data flows between components.
- User consent: Hosts can require explicit approval for UI-initiated tool calls. If an interactive component tries to trigger an action, the host can demand user confirmation before proceeding.
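The consent layer above can be sketched as a host-side policy check; the policy shape and tool names are illustrative assumptions, not part of the specification:

```typescript
// A host keeps a set of tool names the user has pre-approved for
// UI-initiated calls; everything else triggers a confirmation prompt.
type ConsentPolicy = { autoApprove: Set<string> };

function requiresConfirmation(policy: ConsentPolicy, toolName: string): boolean {
  // Any tool outside the pre-approved set needs explicit user consent.
  return !policy.autoApprove.has(toolName);
}

// Hypothetical policy: a read-only refresh is pre-approved, but a
// message-sending action must be confirmed by the user.
const policy: ConsentPolicy = { autoApprove: new Set(["refresh_dashboard"]) };

console.log(requiresConfirmation(policy, "refresh_dashboard"));  // false
console.log(requiresConfirmation(policy, "send_slack_message")); // true
```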
The specification also recommends that users thoroughly vet MCP servers before connecting them, following the same due diligence they'd apply to any third-party integration.
Who supports MCP Apps now?
MCP Apps emerged from a multi-party collaboration. The community project MCP-UI (led by Ido Salomon and Liad Yosef) and OpenAI's Apps SDK both pioneered patterns for bringing UI into conversational AI. Rather than fragmenting the ecosystem with competing standards, Anthropic worked with OpenAI to create a shared specification.
The result is genuine cross-platform compatibility. MCP Apps works in:
- Claude (web and desktop, announced January 26)
- ChatGPT (support rolling out this week)
- Goose (the open-source reference implementation for MCP)
- Visual Studio Code
This interoperability is critical for developers. Build an MCP App once, and it works across multiple AI platforms that support the protocol. The alternative (building separate integrations for Claude, ChatGPT, and every other AI assistant) would fragment developer effort and limit adoption.
Launch partners include Amplitude, Asana, Box, Canva, Clay, Figma, Hex, monday.com, Slack, and Salesforce, with more integrations expected as the ecosystem matures.
The architectural shift
David Soria Parra, co-creator of MCP, frames the shift this way: "The industry has embedded assistants into individual apps, creating fragmented, siloed experiences. MCP inverts this by making apps pluggable components within agents. MCP Apps extends this further by bringing user interfaces into the agent experience itself."
This represents a fundamentally different model from Microsoft 365 Copilot or Google Gemini Workspace, which embed AI into productivity suites. MCP Apps positions the AI assistant as the primary interface, with tools as pluggable components that surface when needed. Both approaches aim to reduce context-switching, but they differ in what becomes the "home base."
The architecture has implications beyond enterprise productivity tools. The specification is general-purpose. Developers can build games, calendars, maps, checkout flows, or any interactive experience that makes sense to embed in conversation. As Andrew Harvard from Block notes: "Developers can now build interactive experiences that render directly in conversation. At Block, we believe the future centers on users navigating through one trusted agent rather than context-switching between fragmented experiences."
Implementation considerations
For hosts
Adding MCP Apps support means implementing the rendering pipeline (fetch UI resources, sandbox in iframes, establish postMessage channels), handling the security model (review templates, require approvals for tool calls), and exposing the App API to UI code.
The implementation guide provides the specification. Hosts control their security posture: what gets reviewed, what requires approval, which servers are trusted.
For server developers
The ext-apps repository includes working examples: threejs-server for 3D visualization, map-server for interactive maps, pdf-server for document viewing, system-monitor-server for real-time dashboards, and sheet-music-server for music notation. Pick one close to what you're building and start from there.
The development loop is straightforward: declare tools with _meta.ui.resourceUri, serve bundled HTML/JavaScript via the ui:// scheme, use the App class for bidirectional communication. The same code runs across all compliant hosts.
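From the UI side, the loop above might look like the sketch below. The `App` class name comes from the article; its constructor and `callTool` signature here are assumptions for illustration, with the postMessage bridge stubbed out so the flow is visible:

```typescript
// Stand-in for the real postMessage channel to the host.
interface HostBridge {
  send(msg: object): void;
}

// Sketch of an App class: UI code asks the host to invoke a server
// tool on its behalf via an auditable JSON-RPC message.
class App {
  constructor(private bridge: HostBridge) {}

  callTool(name: string, args: Record<string, unknown>): void {
    this.bridge.send({
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: { name, arguments: args },
    });
  }
}

// Capture what would cross the iframe boundary in a real host.
const sent: object[] = [];
const app = new App({ send: (m) => sent.push(m) });
app.callTool("update_timeline", { taskId: "T-123", due: "2026-02-01" });
console.log(sent.length); // 1
```

Because the bridge is an interface, the same UI code can be unit-tested offline and then wired to the real postMessage channel inside a compliant host.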
Enterprise adoption
Organizations evaluating MCP Apps should consider:
- Maintaining allowlists of approved servers rather than permitting arbitrary connections
- Auditing message logs between UI components and hosts
- Reviewing pre-declared templates before allowing new integrations
- Implementing least-privilege access controls for what servers can access
- Monitoring for unusual patterns that might indicate compromise
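The allowlist control above can be sketched as a simple origin check; the server URLs here are hypothetical placeholders:

```typescript
// Origins an organization has vetted and approved for MCP connections.
const approvedServers = new Set([
  "https://mcp.example-asana.internal",
  "https://mcp.example-figma.internal",
]);

function mayConnect(serverUrl: string): boolean {
  // Normalize to origin so path or query differences can't bypass the list.
  return approvedServers.has(new URL(serverUrl).origin);
}

console.log(mayConnect("https://mcp.example-asana.internal/v1")); // true
console.log(mayConnect("https://unknown-server.example.com"));    // false
```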
The specification provides the primitives to make this possible, but implementation is the organization's responsibility.
What's next?
MCP Apps is the first official MCP extension, establishing a pattern for how the protocol can grow beyond its core primitives. The specification is production-ready, with multiple clients shipping support and a mature SDK.
The ecosystem questions remain open: Will developers find building MCP Apps more valuable than traditional integrations? Will enterprise security teams approve running third-party UI code in their AI assistants? Will users prefer conversational interfaces with embedded tools over dedicated applications?
Early indicators are promising. Major platform vendors are supporting the specification. The developer tooling is solid.
But the real test comes from what developers build. The examples in the ext-apps repository show what's possible: 3D visualizations, interactive maps, PDF viewers, real-time monitors. These are proof-of-concept demonstrations. Production applications will reveal whether MCP Apps delivers on its promise of closing the context gap between models and users.
The foundation is in place. Now it's time to build.