Latacora is security for startups without the unicorn hire
A conversation with Laurens Van Houtven from Latacora
When startups try to hire their first security person, they're asking for a unicorn. The job description reads like a wish list for ten people: application security, cloud security, on-call incident response, and oh by the way, be a great manager because this team will grow to twelve. Latacora exists because that unicorn doesn't exist.
Latacora builds security practices for startups. They invest in companies—including WorkOS—and become part of their security teams for years at a time. The model is designed to give startups access to deep expertise they could never afford to hire full-time.
The fractional expert model
The core insight is that security requires many different specializations. Cryptography. Kubernetes. AWS CloudTrail. Incident response. No single person masters all of these, but a startup might need each of them for fifteen minutes over the course of a year.
Latacora solves this by having one person who understands the company's overall security posture, supported by a deep bench of specialists. Need to talk to a cryptography PhD about a SAML implementation? That conversation is available when you need it.
Embedded for the long haul
Latacora prefers long-term engagements. They've worked with WorkOS for four or five years. This matters because the most impactful security work isn't finding individual bugs—it's shaping architecture before bugs happen.
Laurens (LVH) gives an example: a client made a decision years ago that left them without a dedicated domain for user-generated content. They've been chasing cross-site scripting vulnerabilities ever since. The fix was architectural, not tactical. If Latacora had been there at the design stage, one conversation could have prevented years of bug-hunting.
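The architectural fix LVH is pointing at is usually origin isolation: serve user-generated content from its own domain so injected scripts run in an origin with no access to the main app's cookies or session. A minimal sketch of the idea, with hypothetical domain names (not from the interview):

```python
# Origin-isolation sketch: user-generated content (UGC) is linked and
# served from a dedicated sandbox domain, never the main app origin.
# Both domain names below are hypothetical examples.

APP_ORIGIN = "https://app.example.com"          # main product (hypothetical)
UGC_ORIGIN = "https://usercontent.example.net"  # isolated UGC domain (hypothetical)

def ugc_url(file_id: str) -> str:
    """Build a link to a user upload on the sandboxed origin,
    so any script injection there cannot touch APP_ORIGIN."""
    return f"{UGC_ORIGIN}/files/{file_id}"

def ugc_response_headers() -> dict:
    """Headers for serving UGC: the CSP 'sandbox' directive strips the
    response of script execution and same-origin privileges, and
    'nosniff' stops browsers from guessing a scriptable content type."""
    return {
        "Content-Security-Policy": "sandbox; default-src 'none'",
        "X-Content-Type-Options": "nosniff",
    }
```

Decided at the design stage, this is one conversation; retrofitted later, it means migrating every stored asset and link, which is why the client is still chasing XSS instead.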
AI changed everything
The watershed moment for Latacora was Claude 3.5 Sonnet. AI became good enough to democratize access to their internal tools—systems that previously required writing Clojure programs in Emacs.
They've built tools that capture a historical graph of all cloud resources and metadata across AWS, GitHub, Okta, and other systems. The query language was powerful but esoteric. Now, thanks to LLMs, anyone on the team can query that data in natural language. Even the experts use the LLM because it's faster.
The three fronts of AI security
LVH splits AI security into three categories: AI of the company (securing AI features in the product), AI for the company (employees using ChatGPT and Gemini), and AI against the company (attackers using AI for phishing and social engineering).
That last category has changed the most. The floor on phishing quality has risen sharply: scams that used to be obvious Nigerian-prince emails are now sophisticated campaigns. Latacora's email security recommendations today are completely different from five years ago.
This interview was conducted at AWS re:Invent 2025.