April 15, 2026

The AI factory for open models: Rob Ferguson on Fireworks AI at HumanX 2026

Rob Ferguson of Fireworks AI explains why open models are catching up to frontier closed-source AI, and why data (not architecture) is the real moat.

Fireworks AI helps companies build, train, and scale open models. At HumanX 2026, Michael Grinich sat down with Rob Ferguson—who joined just two months ago and has already launched two products—to talk about what happens when companies get serious about owning their AI.

Ferguson describes Fireworks as an AI factory. "You put in what you want, take an open model, put in your data, and get your own model where you can download the weights and own your own AI." The platform serves everyone from Cursor-style coding tools running production workloads to enterprises that want full ownership of their model weights.

Cost is the gateway drug

Most companies start with frontier closed-source models, often subsidized by credits. But when they need to prove their business model at scale, the economics shift fast. Open models become compelling not just on price, but on strategic terms.

Ferguson described a clear progression: companies get "open model curious" because of cost and performance, then realize they want to own something durable—something that can't be taken away.

"Are these models making me more average?" Ferguson asked. "The next model comes out and everybody has the same capabilities. Or does the next model come out and it's better for my data—giving me more of an edge?"

That's the tension. Closed-source frontier models give every customer the same capabilities. Open models let you build something differentiated.

The performance gap is narrowing

When asked how far behind open models lag, Ferguson was blunt: "If you're happy to ask how many months it is, then I'd ask how much you're really getting from the other anyway."

For most coding tasks, models like Qwen 2.5 perform well—and are faster and cheaper to run at scale. The frontier gap matters most for highly specialized tasks, and that gap keeps shrinking.

There aren't really any secrets anymore

Ferguson offered a candid take on why open models catch up so quickly. The ideas are well-shared. The training data comes from many of the same sources. Researchers move between organizations constantly.

"All the model providers are training with the same data from the same labelers," he said. "There aren't really any secrets anymore."

He pointed to DeepSeek's open-weight releases as some of the most impressive engineering he's seen, and noted that whatever advantages existed at the point of o1's release were narrowed quickly by open efforts. The pattern is consistent: a frontier breakthrough lands, and the open-source community closes much of the gap within months.

Data is the real moat

If model architecture isn't the primary differentiator, data is.

Ferguson believes companies like Anthropic have invested in finding richer, more diverse data sources—particularly enterprise data behind firewalls and specialized code repositories. That's where the edge lives.

"95% of data lives hidden behind enterprise firewalls and in applications," he said. "The more you can get access to that data richness, the better the model."

This is also the core argument for why enterprises should care about open models in the first place. If your proprietary data is the moat, you want a model you can fine-tune on that data and own outright—not one where your competitive advantage risks being diluted into someone else's training pipeline.

A reflection of government structures

Ferguson drew a connection between AI development patterns and government policy.

California's prohibition on non-compete agreements enables the rapid talent movement that drives Silicon Valley innovation. China's different approach to copyright enforcement enables training on broader datasets. These aren't incidental factors—they shape the trajectory of model development itself.

"What we're seeing in the world of model development are the emergent properties of government structures," Ferguson observed.

The technical space doesn't exist in a vacuum. Regulatory environments, labor law, and IP policy are upstream of the models themselves.

This interview was recorded at HumanX 2026 in San Francisco.
