October 28, 2025

Security in the Age of AI: Old Problems Meet New Risks

Opal Security moderates a panel exploring how AI is simultaneously transforming security challenges and solutions—and why everything old is new again.

This post is part of our Enterprise Ready Conference 2025 series, highlighting the key demonstrations and announcements from the event. Check out our full event recap to see how enterprise AI infrastructure is evolving.

When Umaimah Khan from Opal Security opened the morning's first panel discussion on AI and security, she framed a question that revealed the complexity of the moment we're in: Are companies converging or diverging when it comes to securing AI systems versus using AI to build security?

The answer from three practitioners building at the intersection of AI and security was more nuanced than a simple "both"—and more revealing about the state of enterprise security in 2025.

The panel brought together Ahmad Nassri, CTO of Socket and former CTO of NPM; Mokhtar Bacha, founder and CEO of Formal; and Alan Braithwaite, co-founder and CTO of RunReveal. Together, they represented different layers of the security stack, from supply chain protection to least privilege enforcement to security data platforms.

What emerged from their conversation was a picture of an industry grappling with entirely new attack surfaces while simultaneously rediscovering that fundamental security principles still apply—just at dramatically different speeds and scales.

You can watch the full panel here: 

The Speed Problem

Alan from RunReveal articulated a critical distinction that set the tone for the discussion: there's "AI for security"—agents that help with secure code reviews or detection and response—and "security for AI"—preventing prompt injections and ensuring agents don't make unexpected tool calls.

Both domains are evolving rapidly, but Alan noted there's still work to be done on the security-for-AI side. "I don't think there's really any good solutions out there that can really help prevent things from the start today," he said, adding with a touch of humor that he hopes "everybody's being really considerate when they build these tools so that they don't just introduce Skynet prematurely."

Beyond the technology itself, it's the speed at which AI operations now happen that introduces an entirely new dimension of risk.

Ahmad made the point viscerally: "A human has to write some code, that takes time. Have to open up a request, another human has to review it, that takes time. And then somebody has to push it to production. That's all human time, but that could literally happen in microseconds in an AI world."

The security controls we've built over decades—code review processes, change management procedures, approval workflows—all assume human timescales. When an AI agent can generate, review, and deploy code in microseconds, those traditional guardrails become either bottlenecks that defeat the purpose of automation or irrelevant friction that gets bypassed.

This creates a fundamental tension: how do you maintain security controls that were designed for human-speed operations when the operations themselves now happen at machine speed?

Everything Old Is New Again

One of the most striking moments in the discussion came when Umaimah asked about security practices that stand the test of time in this new environment.

Ahmad's response was immediate: "Everything that's old is new again. Authentication. Your endpoint security."

The panelists converged on a counterintuitive insight: the fundamentals haven't changed, but the implementation context has shifted dramatically. Authentication still matters. Authorization still matters. Audit trails still matter. The difference is that these controls now need to operate in environments where agents are making decisions and taking actions autonomously.

Mokhtar reinforced this from the identity perspective, noting that "every organization have a mandate to use AI" today. This organizational imperative means security teams must adapt: "If a security team wants to be able to secure AI in their organizations, they necessarily need to be using AI to understand it in the first place."

Modern security teams aren't choosing between traditional controls and AI-powered tools. They're implementing both simultaneously, using AI to enforce and monitor the same fundamental principles that have always mattered.

But that doesn't mean abandoning human oversight. Ahmad emphasized that even when using AI for security at scale, "we still want to do the human in the loop process. We still want to make sure the quality and what's being produced" meets standards. The goal isn't to remove humans from security—it's to augment their capabilities so they can operate at machine speed and scale.

Using AI to Solve AI Risks

When the panel discussed how to actually implement security in AI-enabled environments, a pattern emerged: the answer often involves using more AI to solve the problems AI creates.

Ahmad described how Socket uses AI to scan every version of every package ever published to every open source registry. "There's no way we can do that as human beings at scale, and the fact that we're able to do that with AI is an advantage."

But he was quick to add nuance: "You don't just use one AI model and one shot your question and then rely on the answer. You kind of do like a best of three kind of classification."

This approach—using multiple AI models to verify each other's work, creating a form of AI-based consensus mechanism—represents a new security pattern. It's relatively cheap to run four or five verification passes with different models to get an extra layer of confidence, probably cheaper than running alternative verification systems or relying on human review at scale.
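As a rough illustration of the pattern (not Socket's actual pipeline), a consensus check can be as simple as polling several independent model clients with the same classification question and only trusting a label when enough of them agree. In the sketch below, the model callables and the "needs_human_review" fallback are hypothetical placeholders.

```python
from collections import Counter

def classify_with_consensus(artifact: str, models, min_agreement: int = 2) -> str:
    """Ask several independent models the same classification question and
    take the majority answer -- a 'best of three' style check.

    `models` is any list of callables mapping text -> label; the actual
    model clients are placeholders in this sketch.
    """
    votes = [model(artifact) for model in models]
    label, count = Counter(votes).most_common(1)[0]
    if count < min_agreement:
        return "needs_human_review"  # no consensus: escalate rather than guess
    return label

# Example usage with three hypothetical model clients:
# verdict = classify_with_consensus(package_source, [model_a, model_b, model_c])
```

The design choice worth noting is the escalation path: when the models disagree, the artifact goes to a human rather than to whichever model answered last.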

The trade-off is accepting that AI-based verification isn't perfect, but it can be good enough at a scale that no other approach can match.

The New Attack Surfaces

Alan raised what might be the panel's most provocative prediction: "I think we are going to see an agentic worm in the next couple years."

His reasoning was straightforward: as more agents communicate with other agents, there will inevitably be vulnerabilities where one agent can tell another to run a prompt, which then spreads the prompt further. It's the computer worm reimagined for the age of autonomous agents.

This gets at something deeper than just a new type of malware. When agents have the ability to call tools, read emails, post to social media, and interact with other systems on a user's behalf, the blast radius of a successful attack grows exponentially.

The traditional security model assumes a human in the loop who will notice something strange before too much damage occurs. What happens when the loop operates at machine speed and the "something strange" propagates to other agents before any human sees it?

The MCP Question

An audience member asked how organizations should set guidelines for Model Context Protocol (MCP) usage, given that it could be a major attack vector if someone gets targeted.

Alan described a framework called CAMEL that can selectively enable and disable different MCP tools based on previous tool calls. "The idea being as you're plugging in these tools into an agent, it can only take certain paths through the tool chain."

He gave a concrete example: "If you have an EA reading your emails, then that EA shouldn't have access to post to Twitter right after."

This concept of contextual tool access—where available capabilities change based on what the agent has already done—represents a new security pattern. It's more sophisticated than simple role-based access control because it's dynamic and context-aware.
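A minimal sketch of the idea, assuming a hand-written deny-after rule table rather than CAMEL's actual policy mechanism, might look like this:

```python
class ContextualToolGate:
    """Toy sketch of context-aware tool access: which tools an agent may
    call next depends on which tools it has already called this session.
    The rules below are illustrative, not any framework's real policy format."""

    # After calling a tool on the left, the tools on the right become forbidden.
    DENY_AFTER = {
        "read_email": {"post_to_twitter", "send_external_email"},
    }

    def __init__(self):
        self.history: list[str] = []
        self.blocked: set[str] = set()

    def authorize(self, tool_name: str) -> bool:
        if tool_name in self.blocked:
            return False
        self.history.append(tool_name)
        self.blocked |= self.DENY_AFTER.get(tool_name, set())
        return True

# gate = ContextualToolGate()
# gate.authorize("read_email")       # True
# gate.authorize("post_to_twitter")  # False: blocked by the prior email read
```

Unlike static role-based access control, the set of allowed actions here shrinks as the session accumulates sensitive context.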

Umaimah noted it's reminiscent of "old school financial segregation of duty systems that are finally making a comeback." The principle that no single entity should have end-to-end control over a sensitive process isn't new—it's just being reinvented for the age of autonomous agents.

Ahmad connected this back to established security practices: "If you replace the word MCP with software and applications, you know, everybody who works in a controlled environment, you can't just install any app on your laptop. You have to go through the IT tool, the MDM device management, all these type of things."

The underlying principle is the same—controlled software deployment with centralized logging and monitoring. MCP is just the latest wave of software deployment requiring these controls.
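To make that parallel concrete, here is a hedged sketch of what "MDM for MCP" could reduce to in practice: an allowlist of approved servers plus an audit log of every connection attempt. The server names and logger are made-up placeholders, not a real MCP client API.

```python
import logging

# Hypothetical allowlist of MCP servers approved by IT/security.
APPROVED_MCP_SERVERS = {"github", "jira", "internal-docs"}

audit_log = logging.getLogger("mcp.audit")

def connect_mcp(server_name: str):
    """Gate MCP server connections the way MDM gates app installs:
    deny anything off the allowlist and record every attempt centrally."""
    if server_name not in APPROVED_MCP_SERVERS:
        audit_log.warning("DENIED MCP connection: %s", server_name)
        raise PermissionError(f"MCP server '{server_name}' is not approved")
    audit_log.info("ALLOWED MCP connection: %s", server_name)
    # ...establish the actual connection to the approved server here
```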

The Verification Technologies Question

An audience member named Zoe raised a more technical question about verification approaches: beyond just tracking logs, what about using decentralized networks like blockchains, trusted execution environments (TEEs), or zero-knowledge proofs to actually verify AI security?

Ahmad's response highlighted the pragmatic trade-offs involved. While blockchain and TEEs are interesting technologies, he pointed out that "it's relatively cheap to do four or five or six runs of verifications with different models just to get that extra layer of confidence"—probably cheaper than running a blockchain network or implementing complex cryptographic systems.

But Umaimah from Opal offered a nuanced perspective on why some of these advanced verification technologies haven't gained traction yet. The challenge with formal verification methods for AI systems is that "the policies themselves are very ambiguous because it's not totally clear what is the parameterized policy."

In other words, you can't formally verify a policy if you don't have a clear, mathematical way to express what that policy should be in the first place.

This is particularly challenging with agents and identities, where the access control rules are often contextual and dynamic. "We're still sort of standardizing the language," Umaimah explained—referring to the need for domain-specific languages (DSLs) that can express agent guardrails in ways that are both human-readable and formally verifiable.

Mokhtar agreed, suggesting that emerging DSLs for agent guardrails might finally bring some of these verification technologies to the mainstream. Technologies that were "considered niche in terms of encryption five years ago may actually finally come to the fore."

The panel noted that modern AI models are already using SMT (Satisfiability Modulo Theories) solvers and other formal methods internally. As these mathematical verification approaches become more embedded in how AI systems work, they may naturally become part of how we secure them as well.

Building Security into Low-Code AI

An audience member from Asana raised a critical question about the WorkOS Studio demo shown earlier: with more people creating software who aren't security experts, how do we ensure the software that gets created is still secure?

Mokhtar's response cut through the complexity: "I actually think that the vendor who basically enables those people to create software is the one that's responsible in making sure that the software is secure."

He argued that this shift might actually increase average software security: "You'll have those abstractions that create software for you, and it's much easier to have controls earlier in the lifecycle of your software creation than when you're just relying on humans following best practices."

This represents an inversion of traditional security thinking. Instead of training every developer on security best practices and hoping they implement them correctly, you embed security controls into the platforms that generate code. The security model shifts upstream, from implementation-time to platform-selection-time.

It also places enormous responsibility on the vendors building these low-code and AI-powered development platforms. They're not just providing productivity tools—they're architecting the security posture of everything built on their platforms.

The First Security Hire

Near the end of the discussion, an audience member asked how a first security hire should spend their initial 90 days, particularly at a company where a small team has already landed enterprise deals.

Alan shared advice from his co-founder, who had been the first security hire at both Cloudflare and Segment: "Take a lay of the land, understand what the situation is before you take any action, but very much start with the basics. Make sure two-factor auth is enabled everywhere. Make sure that you've got visibility into what's going on."

But he emphasized something equally important: "Develop the relationships within the org that you actually need to get things done, because a lot of a security team's job is selling security internally."

Mokhtar reinforced this point even more strongly: "Evangelize, evangelize, evangelize. Especially if it's one person, there's no way to scale. Spend at least 70 percent of your time on just building champions in every team, in every function."

This advice reveals something fundamental about how security actually works in organizations. The technical controls matter, but security is ultimately an organizational capability that requires buy-in across functions. The first security hire's primary job isn't implementing tools—it's building the culture and relationships that make security sustainable.

Security Is Non-Negotiable

The panel closed with a question about pricing and positioning security products to enterprises already making significant AI investments.

After acknowledging that "pricing is hard," the panelists converged on a simple truth. Ahmad noted that "security ought to be more accessible to organizations," while Alan stated plainly: "Security is non-negotiable."

This dual message captured the underlying reality of the entire discussion. Everything else about AI in the enterprise—the models used, the features built, the productivity gains achieved—depends on having security fundamentals in place.

The companies building AI security tools aren't competing on whether security matters. They're competing on how to make security more accessible, more scalable, and more effective in environments that are evolving faster than any software category in history.

The Convergence Thesis

By the end of the panel, Umaimah's opening question had an answer: AI for security and security for AI aren't converging or diverging—they're becoming inseparable.

Organizations deploying AI need to understand how it works to secure it properly, which means security teams need to use AI tools themselves. At the same time, the security tools being built to protect AI systems are themselves AI-powered, using models to detect anomalies, verify code, and monitor agent behavior at scales humans can't match.

The traditional separation between the system being secured and the tools doing the securing has collapsed. In its place is a recursive loop where AI systems secure AI systems, monitored by security teams using AI tools to understand AI risks.

What remains constant through this transformation are the fundamentals the panel kept returning to: authentication, authorization, audit trails, least privilege, defense in depth. The implementation has changed radically. The principles are the same ones that have always mattered.

For companies building AI products and trying to understand what security means in this new context, that's actually good news. It means the problem isn't entirely novel. The tools and practices exist—they just need to be adapted for machine-speed operations and agent-based architectures.

As Ahmad noted at the start of the discussion, he used to joke that as CTO of NPM he was "part of the problem" in software supply chain security. Now, at Socket, he's "part of the solution."

That shift from problem to solution is the opportunity in front of every company building AI infrastructure today. The attack surfaces are new, the speeds are unprecedented, and the complexity is daunting. But the path forward is clear: apply the fundamental security principles that have always mattered, use AI to operate them at the scale and speed the new environment requires, and build the organizational culture to sustain it all.

Watch more panels and sessions from Enterprise Ready Conference 2025 in our full event recap.
