May 7, 2026

How AI helps translate one OpenAPI spec into seven idiomatic SDKs

WorkOS uses AI-powered code generation to build and maintain SDKs across multiple languages from a single OpenAPI spec.

If you ship an API, you ship SDKs. At a minimum you have a Node SDK and a Python SDK; the moment you have enterprise customers, you add a Java SDK, a .NET SDK, a Go SDK, and more.

An SDK is more than just a typed wrapper around HTTP calls. It autopaginates resources so the caller doesn't have to. It retries requests with jittered backoff so a flaky network doesn't break workflows. It transparently exchanges auth tokens, surfaces errors as the language's native exceptions, and gives callers compile-time confidence that the request shape is correct. The whole point is that a customer can integrate an API in a few lines of idiomatic code in their app, and never worry about the moving parts underneath.
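To make one of those behaviors concrete, here's a minimal sketch of retry with jittered exponential backoff — illustrative only, not WorkOS SDK code; `send`, `max_attempts`, and `base_delay` are hypothetical names:

```python
import random
import time

def request_with_retries(send, max_attempts=4, base_delay=0.5):
    """Retry a request callable with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return send()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Full jitter: sleep a random amount up to the exponential cap,
            # so many clients retrying at once don't stampede the API.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

The jitter is the important part: without it, every client that failed at the same moment retries at the same moment too.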

Keeping all seven of our SDKs consistent used to be a manual task. Every new API endpoint meant updates to seven repositories. Each language has different naming conventions, type systems, package managers, linting rules, and expectations around what “idiomatic” code looks like. This meant a lot of repetitive engineering work and, worst of all, drift: one SDK exposed a parameter another forgot, types diverged, docs went stale.

We needed a single source of truth that could produce well-tested SDKs in every target language without requiring a team of polyglot engineers to babysit each one. So we built a pipeline that generates all seven from a single OpenAPI spec, with AI doing the heavy lifting on language-specific translation.

Why not existing tools?

To their credit, autogeneration platforms have done incredible work in this space. But we're an API company at our core. Our OpenAPI spec is a first-class product artifact, not an afterthought.

Generic autogeneration tools optimize for breadth: any spec, any language, any opinion. Their job is to be general, which means they cannot match your error model, your docstring style, your pagination shape. We need to own the pipeline end to end, so we can extend it as our API evolves, without waiting on a vendor's roadmap or relying on their solutions built around standards that don't match our own.

The toolchain: oagen and oagen-emitters

The core of our approach is two open source repositories that work together.

oagen is our OpenAPI generator framework. It parses our OpenAPI specification and produces a structured data model. It resolves $ref references, normalizes schemas, and produces a clean intermediate representation (IR) of our entire API surface.

oagen-emitters contains the language-specific emitters. It consumes oagen's IR and produces idiomatic code for target languages, complete with proper type annotations and docstrings.

Our SDK build pipeline runs from our openapi-spec repository. Whenever the spec changes, the generation pipeline kicks off, producing updated SDK code across all target languages. The spec is the source. The IR is the language-independent state. The emitters are the translators.
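The fan-out can be sketched in a few lines — hypothetical function names, not the real oagen API, which is far richer:

```python
def generate_sdks(spec: dict, emitters: dict) -> dict:
    """One spec in, one generated artifact per target language out.

    `normalize` stands in for oagen's parse/resolve step; each value in
    `emitters` is a callable that turns the IR into source code.
    """
    ir = normalize(spec)
    return {lang: emit(ir) for lang, emit in emitters.items()}

def normalize(spec: dict) -> dict:
    # Placeholder for $ref resolution and schema normalization.
    return {"operations": spec.get("operations", [])}
```

The point of the shape is that the expensive step (`normalize`) runs exactly once, no matter how many languages fan out from it.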

IMAGE: Left-to-right pipeline flow: a single document-shaped node on the far left representing a specification, feeding into a wide processing node in the center with a gear motif, which then fans out six thin arrows to six distinct smaller nodes on the right — each a different shape and color representing different language targets. Dark background with teal and amber highlights.

What the intermediate representation buys us

One of the less obvious wins in this architecture is the IR itself. Rather than passing raw OpenAPI YAML to each emitter, oagen produces a normalized, fully-resolved data model of the API surface. This matters for a few reasons.

First, it insulates emitters from OpenAPI's quirks. Real specs contain indirection and composition that make direct code generation awkward. oagen handles all of that once, centrally, so each emitter can work with clean, predictable data structures.
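For a sense of what "handles that once" means, here is a toy resolver that inlines same-document `$ref` pointers — a small illustration of the idea, not oagen's implementation (which also deals with composition, cycles, and external refs):

```python
def resolve_refs(node, root):
    """Recursively inline local $ref pointers so consumers never see them.

    Handles only same-document references like
    "#/components/schemas/Organization"; cyclic refs would recurse forever
    and need the bookkeeping a real resolver carries.
    """
    if isinstance(node, dict):
        if "$ref" in node:
            target = root
            for part in node["$ref"].lstrip("#/").split("/"):
                target = target[part]  # walk the JSON pointer segments
            return resolve_refs(target, root)
        return {k: resolve_refs(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(item, root) for item in node]
    return node
```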

Second, it makes the IR a stable contract. Emitters depend on oagen's data model, not on the raw spec format. When OpenAPI evolves or our spec gains new patterns, we update the IR layer once rather than patching every emitter.

Third — and this is where it gets interesting for AI-assisted generation — the IR is structured data that Claude can reason about precisely. When we ask Claude to extend an emitter or translate a new API pattern, it's working with clean, well-typed objects rather than trying to parse YAML and make sense of reference chains and schema composition. That scoping improves output quality considerably.

Here's a representative slice of what that IR looks like, as a resolved operation ready for an emitter to consume:

{
  "name": "createOrganization",
  "method": "post",
  "path": "/organizations",
  "description": "Create a new organization.",
  "requestBody": {
    "type": "object",
    "properties": {
      "name": { "type": "string", "required": true },
      "external_id": { "type": "string", "required": false }
    }
  },
  "response": { "type": "Organization" }
}

The same shape in OpenAPI is much more verbose!

By the time an emitter sees this, all the hard work is done. The emitter — and Claude — can focus entirely on how to render this in the target language, not on unwinding OpenAPI indirection.
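As a toy illustration (not the real oagen-emitters code), here is how an emitter might turn the operation above into a Python method stub:

```python
def emit_python_method(op: dict) -> str:
    """Render a resolved IR operation as a Python method stub.

    Assumes the IR shape shown above: name, method, path, description,
    and a requestBody with required/optional properties.
    """
    props = op.get("requestBody", {}).get("properties", {})
    required = [name for name, p in props.items() if p.get("required")]
    optional = [f"{name}=None" for name, p in props.items() if not p.get("required")]
    params = ", ".join(["self"] + required + optional)
    # camelCase operation name -> snake_case Python method name
    snake = "".join("_" + c.lower() if c.isupper() else c for c in op["name"])
    return (
        f"def {snake}({params}):\n"
        f'    """{op["description"]}"""\n'
        f'    return self._request("{op["method"].upper()}", "{op["path"]}", ...)\n'
    )
```

Notice that every decision — snake_case naming, required-then-optional ordering, the docstring — comes straight off the IR, not from re-parsing the spec.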

The structure of an emitter

Each emitter in oagen-emitters follows a consistent pattern. It receives the parsed OpenAPI data from oagen, walks the operations and schemas, and produces files that slot into the target SDK's directory structure. The emitters aren't just template substitutions: they make real decisions about how to represent API concepts in each language.

The Python emitter knows that an optional query parameter is a keyword argument with a None default. The Go emitter knows it's a pointer field on a struct. The Java emitter knows it's a fluent builder method. Each emitter encodes its language's idioms into code. When new information shows up in the IR, every emitter renders it in a pattern coherent with its language.
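That "one IR fact, many renderings" idea can be sketched as a table of per-language renderers — hypothetical helper names, shown here in Python regardless of target language:

```python
def to_pascal(name: str) -> str:
    """snake_case -> PascalCase, e.g. for Go exported identifiers."""
    return "".join(part.capitalize() for part in name.split("_"))

def to_camel(name: str) -> str:
    """snake_case -> camelCase, e.g. for Java builder methods."""
    pascal = to_pascal(name)
    return pascal[0].lower() + pascal[1:]

# How one IR fact ("optional string parameter") renders per language.
OPTIONAL_PARAM_RENDERERS = {
    "python": lambda name: f"{name}: str | None = None",               # kwarg default
    "go":     lambda name: f"{to_pascal(name)} *string",               # pointer field
    "java":   lambda name: f"builder.{to_camel(name)}(String value)",  # fluent builder
}
```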

IMAGE: A three-column layout showing three abstract code-block shapes side by side, each in a different color — teal, amber, and violet. Each block has slightly different internal line patterns suggesting different code structures, but all three originate from a single glowing node above them connected by thin arrows. Minimal dark background.

(You can build your own emitter!)

Because oagen's IR is a stable, documented data model, it's not just for us. If you have a target language we don't yet support — or your own API needs reliable SDKs — you can write your own emitter against oagen's output.

The emitter interface is intentionally minimal: consume the IR, walk the operations and schemas you care about, emit files. You bring the language knowledge; oagen brings the parsed, normalized API surface.

Where Claude fits in

Claude is not the SDK's source of truth, and it doesn't invent API behavior. Its job is far narrower.

Inside the .claude/ directories of these repositories, we've configured Claude as a development partner with specific skills and instructions tailored to the codebase.

The Claude configuration files define how AI assists with the generation process. Rather than asking a general-purpose LLM to "write me a Ruby SDK," we give Claude deep context about our specific patterns: our naming conventions, our error handling approach, our serialization strategy, and our testing requirements. The skills files act as persistent prompt engineering — they encode institutional knowledge about how WorkOS SDKs should look and behave.

Here's an excerpt from the skills file in oagen-emitters that shows how this works in practice:

  
## SDK Conventions
    
When generating or modifying SDK code, follow these rules:
    
### Naming
    
- Operation names map directly from `operationId` in the IR, converted to the target language's casing convention (camelCase for Node/Java, snake_case for Python/Ruby, PascalCase for Go exported identifiers).
- Resource names (e.g. `User`, `Organization`) are always singular and PascalCase in every language.
    
### Error handling
    
- All SDK methods must propagate errors as the idiomatic error type for the target language (thrown exceptions in Node/Python/Java/Ruby; returned `error` values in Go).
- HTTP 4xx responses from the API map to typed `WorkOSError` (or equivalent) instances that carry the status code and the `code` field from the JSON body.
    
### Pagination
    
- Any operation with `before`/`after` cursor parameters should generate a paginated list method that returns both the `data` array and a `ListMetadata` object with `before` and `after` cursors.
- Emitters for languages that support it should also generate an async iterator variant that handles pagination automatically.
    
### Docstrings
    
- Docstring content comes verbatim from the IR's `description` fields.
- Format for the target language: JSDoc for Node, reStructuredText for Python, Javadoc for Java, GoDoc for Go, YARD for Ruby.
- Always document every parameter individually; do not collapse them.
  
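The pagination convention above can be sketched as a cursor loop — a synchronous stand-in for the async-iterator variant, assuming a `list_page` callable that returns `data` plus `list_metadata` with `before`/`after` cursors:

```python
def auto_paginate(list_page, limit=100):
    """Yield every item from a cursor-paginated list endpoint.

    `list_page` is any callable taking `after`/`limit` keyword arguments
    and returning {"data": [...], "list_metadata": {"before": ..., "after": ...}}.
    """
    after = None
    while True:
        page = list_page(after=after, limit=limit)
        yield from page["data"]
        after = page["list_metadata"].get("after")
        if not after:  # no next cursor: we've reached the last page
            return
```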

This is a meaningful distinction from typical AI code generation. We're not generating code from scratch with zero constraints. We're using AI within a tightly scoped system where the OpenAPI spec defines what to generate and the emitter architecture plus Claude's skills define how to generate it. The AI operates within constraints that enforce consistency — the spec fixes the API surface, and the skills fix the language-specific conventions. Claude's output always stays within known boundaries.

SDKs that agents can actually use

A well-generated SDK with comprehensive, accurate docstrings is dramatically more useful to an AI coding agent than one that's hand-maintained and inconsistent. When a developer (or an agent) is trying to figure out how to call a WorkOS API from their codebase, they're often relying on their IDE or an AI assistant to surface the right method, explain its parameters, and show them what to expect back.

Our SDKs are designed with this in mind. Because every method's signature, parameter descriptions, and return types trace back to the same OpenAPI spec, the information is consistent and machine-readable. An agent implementing against our Node SDK gets the same conceptual model as one working with our Python SDK. That consistency makes it easier to write agents that can switch between SDK languages or help users who are working in an unfamiliar stack.

What this means for SDK quality

The result is consistency. When we add a new endpoint to our API, every SDK gets it at the same time with the same behavior. When we fix a bug in how we serialize a particular type, the fix propagates everywhere. Our SDKs are never more than one generation run away from parity with our API spec.

It also means our Developer Experience Engineers spend their time on the hard problems — optimizing for agents, handling edge cases that require human judgment, and improving the generation pipeline itself — instead of coding boilerplate across seven repositories.

Try it yourself

Both oagen and oagen-emitters are open source. If you're maintaining multiple client libraries and feeling the pain of keeping them in sync, the approach is worth studying even if your stack looks different from ours. The key insight isn't any specific tool; it's that a single source of truth combined with AI-assisted, language-aware code generation eliminates an entire class of SDK maintenance problems.

Our OpenAPI spec is public too. Take a look at the .claude/ directories in each repo to see how we've structured the AI-assisted workflow. The skills files are a practical example of how to give an LLM useful, scoped context instead of hoping it figures out your conventions on its own.
