September 3, 2025

MCP Night 2.0 Panel Discussion: The Future of AI Integration

Representatives from OpenAI and Anthropic discuss the origins of MCP, early adoption challenges, security concerns, and the future of AI tool integration at MCP Night 2.0.

At MCP Night 2.0, we had the unique opportunity to bring together representatives from two of the most influential AI labs: Theo from Anthropic and Dimitri from OpenAI. Despite being competitors in many areas, both companies have found common ground in their commitment to the Model Context Protocol (MCP) as an open standard for AI tool integration.

The Origins Story: From Copy-Paste to Protocol

Theo from Anthropic shared the genesis of MCP, painting a picture familiar to anyone working with AI models about a year ago: "People were constantly just copy and pasting context back and forth between things outside the model to the context window itself."

The solution seemed obvious in hindsight. Instead of building one-off integrations every time someone wanted to connect their AI model to Sentry logs, Slack, or other workplace tools, why not standardize how models communicate with these integrations?

"If you standardized how the model would talk to the integration, it would make it really easy for anyone to go and build that integration," Theo explained. This insight became the foundation of MCP—enabling models to have the agency to reach out into the outside world.
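To make the standardization idea concrete, here is a minimal sketch, in plain Python with invented names, of what "one interface for every integration" looks like: each tool registers under the same schema, so the model-facing layer lists and calls tools uniformly with no per-integration glue code. This is an illustration of the concept, not the MCP wire protocol itself.

```python
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str          # shown to the model so it can pick a tool
    handler: Callable[[dict], dict]

class ToolRegistry:
    def __init__(self):
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list[dict]:
        # Every integration is described through the same schema.
        return [{"name": t.name, "description": t.description}
                for t in self._tools.values()]

    def call(self, name: str, arguments: dict) -> str:
        # One uniform call path, regardless of what sits behind the tool.
        return json.dumps(self._tools[name].handler(arguments))

registry = ToolRegistry()
registry.register(Tool(
    name="search_logs",
    description="Search application logs for a query string.",
    handler=lambda args: {"matches": [f"log line mentioning {args['query']}"]},
))

print(registry.list_tools()[0]["name"])  # search_logs
print(registry.call("search_logs", {"query": "timeout"}))
```

Anyone who builds against the `Tool` shape once can plug in a Sentry, Slack, or any other integration without touching the model-facing code, which is the core of the insight Theo describes.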

OpenAI Jumps In: A Vote of Confidence

Dimitri from OpenAI described their decision to adopt MCP in March 2025, adding support across their agents SDK, ChatGPT, and full API suite. What caught their attention wasn't just the technical merit, but the community response.

"We saw the adoption kind of before the holidays, and it really picked up after the holidays, and we're like, wow, there is something here," Dimitri noted. While function calling had existed for connecting to APIs, "having a more standardized way that's more reusable made a ton of sense."

The fact that a genuine open standard was emerging, rather than another proprietary integration method, gave OpenAI confidence to invest deeply in MCP support.

Early Use Cases: Internal Tools and Microservices

Both companies are seeing strong adoption in similar areas:

Internal Agent Building: Teams are using MCP to create loosely coupled, microservices-style architectures where they can independently iterate and piece together tools from other teams.

Focus on Intelligence, Not Integration: As Dimitri put it, "You can just focus on the intelligence layer, the logic layer, the user experience—you can focus on everything that just isn't the integration anymore."

Internal Connectors: OpenAI has been "customer zero" for their own MCP investments, with ChatGPT connectors for internal search significantly speeding up employee workflows.

MCP and the Agent Revolution

When asked about how MCP relates to the booming field of AI agents, both panelists emphasized that every agent needs two fundamental things: actions it can take and context to inform those actions.

Theo framed it in human terms: "Think about your co-worker. What makes them so useful? It's that they have the ability to take actions, but they're taking the right actions because they have access to data and context and knowledge."

Dimitri pointed to an intriguing evolution: "The lines between what a tool is and what an agent is are really kind of starting to blur together." He envisions MCP clients calling into tools that aren't just hitting REST APIs, but are actually performing agentic workflows themselves—agents calling agents through MCP.
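A tiny sketch of that blurring, with invented names: from the caller's point of view this is just another tool, but the handler runs its own multi-step loop (decompose, gather, decide) rather than hitting a single REST endpoint.

```python
def lookup_temperature(city: str) -> int:
    # Stand-in for a real data source the inner agent would query.
    return {"Berlin": 13, "Oslo": 7}[city]

def trip_planner_tool(arguments: dict) -> dict:
    # A miniature "agent" behind an ordinary tool interface: it breaks
    # the request into steps and aggregates the results itself.
    cities = arguments["cities"]
    temps = {city: lookup_temperature(city) for city in cities}
    warmest = max(temps, key=temps.get)
    return {"recommendation": warmest, "temperatures": temps}

# A client would invoke this exactly like a plain API-backed tool.
result = trip_planner_tool({"cities": ["Berlin", "Oslo"]})
print(result["recommendation"])  # Berlin
```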

The Biggest Blockers: Security and Trust

Both panelists identified security as the primary barrier to wider MCP adoption. As Dimitri explained: "Data and tools are kind of a double-edged sword. If you want to build something useful with AI, you need to bring in your tools and your data... At the same time, you really need to have a lot of trust to pass that up over the wire."

The challenge is particularly acute for organizations looking to automate key workflows—the very data and tools they most want to connect are also their most sensitive assets.

Theo highlighted the multi-layered nature of the problem: "We're going to see a lot more people doing research around data exfiltration, prompt injections, all the things that you hear about from a security perspective."

White Space: More Servers and Better Tools

When asked about opportunities for builders, the panelists had clear guidance:

More and Better Servers: "The simplest thing is just more servers," Theo emphasized. "There's not a ton of really, really great servers yet." The ecosystem needs more iteration on tool design and servers that actually solve real problems effectively.

Simplified Developer Experience: Dimitri pointed to the complexity of managing multiple MCP servers: "It's just so hard... managing all those credentials, doing setups for individual users—there's just like so many steps involved."

Server Chaining: Looking ahead, Theo anticipates servers becoming more complex through chaining, moving beyond simple API wrappers to more sophisticated tool combinations.
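As a rough illustration of chaining (all names invented), a higher-level tool can compose two simpler tools so the caller sees one capability rather than two API wrappers:

```python
def fetch_issue(issue_id: str) -> dict:
    # Stand-in for a tool wrapping an issue tracker.
    return {"id": issue_id, "title": "Timeout in payment service"}

def summarize(text: str) -> str:
    # Stand-in for a tool that condenses text; here it just keeps
    # the leading phrase.
    return text.split(" in ")[0]

def triage_issue(arguments: dict) -> dict:
    # Chained tool: fetch, then summarize. The caller invokes one
    # tool and never sees the two underlying steps.
    issue = fetch_issue(arguments["issue_id"])
    return {"id": issue["id"], "summary": summarize(issue["title"])}

print(triage_issue({"issue_id": "PAY-42"}))  # {'id': 'PAY-42', 'summary': 'Timeout'}
```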

Authentication: The Missing Piece

The discussion touched on a critical challenge: every demo at MCP Night 2.0 handled authentication differently. Some used OAuth, others bearer tokens, and some required no authentication at all.

This inconsistency represents both a challenge and an opportunity. As Theo noted, "There's opportunities in both like the spec itself... and then also like implementing the thing."

The complexity increases with agentic workflows, where you might have "an agent on behalf of a user on behalf of potentially their organization—levels of nesting" that traditional OAuth wasn't designed to handle.
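One way to picture the nesting problem is a delegation chain where each hop wraps the previous one in a nested "act" (actor) claim, loosely modeled on the OAuth 2.0 Token Exchange (RFC 8693) convention. The identifiers below are invented for illustration; this is a data-shape sketch, not a working auth flow.

```python
def delegate(token: dict, actor: str) -> dict:
    # The newest actor goes on the outside; prior actors nest inside.
    new_act = {"sub": actor}
    if "act" in token:
        new_act["act"] = token["act"]
    return {"sub": token["sub"], "act": new_act}

def chain(token: dict) -> list[str]:
    # Flatten the delegation chain for inspection or auditing.
    parties = [token["sub"]]
    act = token.get("act")
    while act:
        parties.append(act["sub"])
        act = act.get("act")
    return parties

org = {"sub": "org:acme"}
as_user = delegate(org, "user:alice")         # user acting for the org
as_agent = delegate(as_user, "agent:helper")  # agent acting for the user

print(chain(as_agent))  # ['org:acme', 'agent:helper', 'user:alice']
```

Even this toy version shows why plain OAuth gets strained: every resource server in the chain has to understand and trust arbitrarily deep nesting.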

Open Development: A Key Differentiator

One of the most striking aspects of MCP's development is its open governance model. Unlike many protocols developed behind closed doors, MCP is being built in public through GitHub discussions and Discord conversations.

"We always wanted to make it open from the beginning because otherwise you end up back in the proprietary ecosystem where each integration is just built one-off," Theo explained. The openness is so central to their approach that the MCP team uses Discord more than Slack and GitHub more than internal tools.

This commitment to openness was a significant factor in OpenAI's decision to adopt MCP. As Dimitri put it: "Seeing like a genuine commitment to openness... gave us a lot of confidence."

Internal Adoption: Leading by Example

Both companies are heavy internal users of their own MCP investments. At Anthropic, there's a company-wide mandate to use AI to speed up workflows, which has led to a proliferation of internal MCP tools as employees build solutions for their own needs and share them with teammates.

OpenAI has created a "positive feedback loop where the more we use them the more excited we get and the more we build," with internal connectors becoming essential productivity tools.

Development Tips: Focus on Tool Design

For developers struggling with MCP implementation, both panelists emphasized tool design as crucial. Common issues arise from overloading context windows or tools not being defined clearly enough.

Theo recommended treating MCP development like prompt engineering: "Write your evals for what you want your server to do, run the tools through those evals, and then update the tools if your evals are failing."

OpenAI is investing heavily in evaluation support for tools, recognizing that you need to be able to mock and reproduce what agents are doing through MCP servers.
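The eval loop Theo describes can be sketched in a few lines: run the tool against a small suite of expected behaviors and flag failures before shipping. The tool, corpus, and cases below are invented stand-ins.

```python
def search_docs(query: str) -> list[str]:
    # The tool under test (stand-in implementation).
    corpus = ["MCP auth guide", "MCP tool design", "release notes"]
    return [d for d in corpus if query.lower() in d.lower()]

EVALS = [
    {"args": {"query": "mcp"}, "check": lambda out: len(out) == 2},
    {"args": {"query": "auth"}, "check": lambda out: "MCP auth guide" in out},
    {"args": {"query": "billing"}, "check": lambda out: out == []},
]

def run_evals(tool, evals) -> list[int]:
    failures = []
    for i, case in enumerate(evals):
        if not case["check"](tool(**case["args"])):
            failures.append(i)  # update the tool when any eval fails
    return failures

print(run_evals(search_docs, EVALS))  # [] means every eval passed
```

The loop is the point, not the harness: change a tool's name, description, or output shape, rerun the suite, and keep whichever variant the evals favor.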

Favorite Features: Prompts and Resources

When asked about underexplored parts of the protocol, the panelists highlighted:

Prompts (Dimitri): The ability to expose which prompts to use with which tools creates a metadata layer that can supply context and templates tailored to each tool.

Resources (Theo): "There's so much you can do with resources... we've seen people using resources as a way to basically pass back like entire portions of file systems that the model can then RAG over."
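A minimal sketch of the resources pattern Theo mentions, using invented names and stdlib only: expose a slice of the file system as URI-addressed resources the client can list and read, and which the model can then retrieve and reason over.

```python
import pathlib
import tempfile

class FileResources:
    # Hypothetical resource provider: each matching file becomes a
    # URI-addressed resource.
    def __init__(self, root: pathlib.Path):
        self.root = root

    def list_resources(self) -> list[str]:
        return sorted(f"file:///{p.name}" for p in self.root.glob("*.md"))

    def read_resource(self, uri: str) -> str:
        name = uri.rsplit("/", 1)[-1]
        return (self.root / name).read_text()

root = pathlib.Path(tempfile.mkdtemp())
(root / "design.md").write_text("# Design notes")
(root / "api.md").write_text("# API reference")

server = FileResources(root)
print(server.list_resources())  # ['file:///api.md', 'file:///design.md']
print(server.read_resource("file:///design.md"))  # # Design notes
```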

Looking Ahead: Long-Running Tools and Sub-Agents

For the future, both panelists see the community driving much of MCP's direction, but highlighted specific areas of development:

Long-Running Tools: Anthropic is working on tools that can persist across multiple interactions, crucial for agentic workflows.

Sub-Agents: As agents engage in longer-running loops, the ability to create sub-agents becomes important. Current sampling features hint at this direction.

Better Ergonomics: OpenAI is focused on "paving the cow paths"—making it easier to adopt MCP on both client and server sides without requiring deep spec knowledge.
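One common shape for the long-running-tools idea above is a start/poll pattern: the tool returns a task id immediately, and a second tool checks status in later interactions. This is a generic sketch with invented names, not Anthropic's design.

```python
import itertools

class TaskStore:
    # Hypothetical backing store for tools that outlive a single call.
    def __init__(self):
        self._ids = itertools.count(1)
        self._tasks = {}

    def start(self, description: str) -> str:
        task_id = f"task-{next(self._ids)}"
        self._tasks[task_id] = {"description": description, "status": "running"}
        return task_id

    def complete(self, task_id: str, result: str) -> None:
        self._tasks[task_id].update(status="done", result=result)

    def poll(self, task_id: str) -> dict:
        return dict(self._tasks[task_id])

store = TaskStore()
tid = store.start("index the docs repo")    # first interaction
print(store.poll(tid)["status"])            # running
store.complete(tid, "1,204 files indexed")  # work finishes out of band
print(store.poll(tid)["status"])            # done
```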

The Path Forward

The panel discussion revealed MCP as more than just a protocol—it's becoming the foundation for a new way of thinking about AI integration. Both companies see it as essential infrastructure for the agentic future, with security and developer experience as the key challenges to solve.

As Dimitri concluded: "If we can make the ergonomics of adopting MCP much easier on both the client side and the server side... I really see this becoming like the way to connect context to LLMs and really get magic out of the system."

The collaboration between these traditionally competitive companies on MCP suggests that the benefits of an open standard outweigh the advantages of proprietary approaches—a promising sign for the future of AI tool integration.

Watch the full panel discussion on YouTube and explore more MCP Night 2.0 content in our ongoing series.
