January 29, 2026

Why authentication doesn't need to stay local: The new data residency pattern

How OpenAI, Slack, and GitHub are splitting architectures to keep sensitive content in-region while routing identity globally, and why most enterprises accept the trade-off.

Enterprise “data residency” used to be a single checkbox: where your data is stored at rest. That model is breaking down (in a good way), because modern SaaS products, especially AI products, process customer content in ways that matter just as much as storage.

OpenAI’s latest step is a clean example of the new pattern: keep customer content stored in-region, and now also keep model inference in-region for Europe (EEA + Switzerland), while some control-plane functions (notably authentication) may still route through the US. OpenAI originally introduced EU data residency in February 2025, and expanded it to include in-region GPU inference for Europe in an update published January 16, 2026.

For industry observers, this wasn’t surprising. Slack established the same pattern years earlier, explicitly stating that even with data residency enabled, login requests are sent to the U.S.

The takeaway: instead of treating all data equally, vendors are increasingly splitting their architectures into two buckets:

  • High-volume, sensitive content that must stay local, and
  • Low-volume control-plane operations (like authentication) that can be processed globally.

Authentication and Single Sign-On, despite touching user data, are increasingly carved out as acceptable to handle across regions. This isn’t just OpenAI making an exception; it’s how mature SaaS companies balance compliance reality with operational constraints.

Why the old approach no longer works

For years, the data residency conversation was binary: either you built fully localized deployments for each region, or you accepted that your data would be processed globally. Both approaches create problems.

Full localization is expensive and operationally heavy: you’re effectively maintaining multiple “full copies” of your platform (storage, application services, backups, incident response patterns) and you lose the ability to use cross-region failover as freely.

The global approach, meanwhile, can lock you out of key markets. In 2021, India’s central bank restricted Mastercard from onboarding new domestic card customers for noncompliance with local data storage rules. And in 2023, Meta was fined €1.2B for unlawful EU-US personal data transfers under GDPR.

The industry needed a middle path. That’s exactly what selective residency provides.

The three layers of modern data residency

The new approach starts with a simple question: which data actually needs to stay local?

Think of a product as having three “planes”:

  1. The customer content plane: This is the data customers care about most: prompts, responses, files, conversations, artifacts. OpenAI’s data residency framing is explicitly about keeping customer content in-region. Slack describes a similar “content-heavy” set of data types tied to residency (messages, files, search index).
  2. The processing plane: This is where computation happens: search indexing, content processing, and for AI products, model inference. OpenAI’s January 2026 update is notable because it expands residency from “where data lives” to “where prompts and responses are generated,” by offering in-region GPU inference for eligible customers in Europe.
  3. The control plane (identity, billing, telemetry, org metadata): This is the machinery that runs the service: authentication flows, account metadata, usage metrics, diagnostics. Many vendors keep pieces of this plane centralized because it supports reliability, security operations, and consistent global product behavior.

So when you send a prompt to ChatGPT Enterprise with EU data residency enabled, both the storage and the GPU-intensive work of running the model happen in EU data centers. Your prompts and completions never leave the region. But when you sign in, that authentication request routes through OpenAI's global infrastructure.

GitHub's recent EU data residency announcement takes a similar tack. Your code repositories, pull requests, and issues can be stored in the EU. But the platform's global namespace and certain integrations may still involve cross-border data flows.

Slack’s blueprint for selective residency

Slack deserves attention because they’ve been unusually explicit about the split. Slack’s data residency documentation distinguishes between the kinds of customer data stored in the selected region and categories that may be stored outside it.

And their Transfer Impact Assessment white paper spells out the login carve-out plainly:

"When Users log in to the Customer's Slack environment, login requests will be sent to the U.S. The authentication process redirects the User to the U.S. data center for the duration of the active session."

This transparency is valuable because it shows customers exactly what they're getting. Your messages stay in Frankfurt, but when you log in, that request goes through US infrastructure.

The fact that Slack's architecture team spent nearly two years rethinking their system to enable this (dealing with complex challenges like how German and Chicago teams could share channels while keeping data separated) demonstrates that selective residency isn't a shortcut. It's a deliberate architectural choice that balances compliance with operational reality.

Vendor   | Launch date                     | Stays in-region          | Crosses borders
Slack    | Dec 2019                        | Messages, files, search  | Member profiles, auth, analytics
Airtable | Mar 2024                        | Base data, cell content  | Analytics, support logs, auth
GitHub   | Oct 2024                        | Code, PRs, issues        | Some integrations, platform features
OpenAI   | Feb 2025 (+ Jan 2026 inference) | Storage + GPU inference  | Auth, CPU processing, routing

Why identity lives outside the data plane

The authentication exception is particularly interesting because SSO touches personal data: usernames, email addresses, corporate identities. By traditional thinking, this should be the most carefully protected information.

So why is it acceptable for this data to cross borders when message content isn't?

The answer comes down to volume, technical complexity, and practical risk assessment:

  • Volume: Authentication happens when users log in. Maybe once a day, or even less frequently if sessions persist. Compare that to the constant stream of messages, file uploads, or AI prompts that represent the actual work being done in the application. The data volumes differ by orders of magnitude.
  • Complexity: Authentication is also architecturally central in ways that make regionalization disproportionately difficult. Your SSO provider needs to maintain relationships with dozens or hundreds of identity providers, each with their own configurations, certificates, and protocols. Replicating this infrastructure in each region means maintaining separate SAML configurations, different OAuth integrations, and isolated user directories that somehow need to stay synchronized.
  • Risk assessment: From a compliance perspective, authentication data is also relatively low-risk. An email address and a login timestamp don't tell you what someone is working on, what sensitive business information they're handling, or what strategic decisions they're making. It's metadata, not content.

The pragmatic argument many enterprises accept: if you can keep the overwhelming majority of sensitive content in-region, carving out authentication can be a reasonable trade-off.

When selective residency isn’t enough

But "acceptable to most" isn't the same as "acceptable to all."

Some organizations treat any cross-border routing as a dealbreaker, including:

  • Highly regulated financial institutions with strict supervisory expectations
  • Government/defense contractors with sovereignty requirements
  • Organizations operating under strict post-Schrems II risk postures, where any US exposure triggers deeper review

For these customers, “content stays local” is not sufficient if identity and session handling still cross borders.

Why most vendors choose pragmatism

For SaaS vendors, the decision comes down to market coverage versus operational complexity.

Selective residency (localizing content and, increasingly, processing like inference, while keeping some control-plane systems global) unlocks a large portion of the enterprise market without forcing every component into per-region duplication.

But it does mean some deals will demand more: on-premises deployment, private cloud, or dedicated regional control-plane infrastructure.

For buyers, the important shift is that you can’t just ask, “Do you support EU data residency?” You need to ask:

  • What data stays in-region, and what crosses borders?
  • What features create exceptions (third parties, integrations, etc.)?
  • What legal mechanisms protect transfers when they occur?
  • What exactly happens during authentication, and where?

The vendors that win these conversations are the ones that document the split clearly and consistently.

What this means going forward

For software vendors designing new products, the selective residency pattern creates interesting architectural decisions. If you're accepting that authentication can cross borders while content stays local, you gain significant flexibility in choosing infrastructure providers.

Authentication infrastructure providers like WorkOS illustrate this point. A SaaS vendor could build an application where customer data and processing happen in EU infrastructure, while authentication flows through WorkOS's US-based service. This mirrors exactly how Slack and OpenAI architect their solutions: the high-volume, sensitive content stays in-region, while the low-volume authentication requests cross borders.

This works because WorkOS operates on the same principle: authentication metadata (login attempts, session tokens, SSO configurations) represents a tiny fraction of data volume compared to the actual customer content being protected. For most enterprise customers, this trade-off is acceptable.

However, WorkOS also illustrates the market stratification happening in enterprise software. While their cloud service fits the pragmatic pattern perfectly, they also offer on-premises deployment for customers who can't accept any cross-border authentication flows. Each on-premises customer gets their own isolated WorkOS environment with dedicated API keys, enabling vendors to serve both the pragmatic majority and the uncompromising minority with the same underlying infrastructure provider, just deployed differently.
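A vendor adopting this split might wire it up roughly as sketched below. The hostnames, paths, and helpers are hypothetical placeholders, not the real WorkOS (or any provider's) API; the point is only that content and auth target different regions:

```python
# Hypothetical sketch of a split architecture: customer content goes to an
# in-region API host, while the authentication redirect targets a global
# (US-hosted) identity service. All hostnames and paths are invented.

from urllib.parse import urlencode

CONTENT_API = "https://eu.api.example-saas.com"  # content plane: in-region
AUTH_SERVICE = "https://auth.example-idp.com"    # control plane: global

def content_url(path: str) -> str:
    # Prompts, files, and messages stay on the EU host.
    return f"{CONTENT_API}{path}"

def login_redirect(org_id: str, redirect_uri: str) -> str:
    # The low-volume login request is the one piece that crosses borders.
    query = urlencode({"organization": org_id, "redirect_uri": redirect_uri})
    return f"{AUTH_SERVICE}/sso/authorize?{query}"

print(content_url("/v1/messages"))
print(login_redirect("org_123", "https://eu.app.example-saas.com/callback"))
```

Serving the "uncompromising minority" then becomes a deployment question: swap the global `AUTH_SERVICE` for a dedicated in-region or on-premises instance without touching the content plane.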

Final thoughts

The data residency conversation has matured from "everything must stay local" to a more nuanced understanding of what actually matters. By categorizing data by risk and volume rather than treating it all equally, vendors like Slack, OpenAI, and GitHub have found a middle path that satisfies most enterprise customers without the operational burden of full localization.

Authentication crossing borders has become an accepted exception, not a dealbreaker. For software vendors building with this pattern, choosing infrastructure partners that understand these trade-offs becomes critical: partners who can serve the pragmatic majority while providing paths for customers who need more.

WorkOS is exactly that partner. Sign up today.
