A developer’s history of authentication

Explore the history of authentication from where it started over 60 years ago to where things might be going.


In just 60 years, digital authentication has evolved from basic passwords used only in elite government and academic settings to face-scanning infrared technology in the pockets of millions of everyday people (think Face ID). How did we get here so quickly?

In this article, we will walk you through the history of authentication from a developer’s point of view:

  • 1960s: Passwords, encryption, and the dawn of digital authentication
  • 1970s: Asymmetric cryptography
  • 1980s: Dynamic passwords
  • 1990s: Public Key Infrastructure
  • 2000s: Multi-Factor Authentication (MFA) and Single Sign-on (SSO)
  • 2010s: Biometrics

Let’s start by discussing where digital authentication began.

The history of authentication: where it all began

The origin of computers is a topic of endless debate. When you break it down, computers are machines that compute, so there are many points where you could reasonably say, “There, that’s the first computer.” 

The prospect of defining an exact starting point gets even more complex if you include theoretical machines like Charles Babbage’s analytical engine, for which Ada Lovelace wrote what many consider the first program. For the sake of making this post article-length instead of book-length, let’s start with Alan Turing.

Alan Turing’s paper On Computable Numbers laid the groundwork for what many consider the first modern computers. In that paper, Turing described a universal machine that could carry out any computation by executing instructions stored in its memory, the theoretical basis of the stored-program computer.

After Turing’s paper was published, computer scientists got to work creating the first computer. Twelve years later, in June of 1948, the first electronic stored-program computer (named the Manchester Baby) ran its first program, and modern computing was born.

For many years after the Manchester Baby — also known as the Small-Scale Experimental Machine (SSEM) — came about, computers were limited to researchers and scientists. After all, computers were massive and known to fill entire rooms with tubes and wires.

A visual progression of authentication forms from the 1960s to 2010 and beyond.

By the early 1960s, some universities had a computer that was shared among all students for calculations and research. Here is where the first form of digital authentication truly began.

1961: Passwords

MIT was one such school. Its big, slow, shared computer had multiple terminals. A program called the Compatible Time-Sharing System (CTSS) allowed students and researchers to share the computer. One researcher, Fernando J. Corbató, noticed that any user of the CTSS was able to access the files of any other user and, unsurprisingly, viewed this as a fundamental weakness in the system.

In 1961, Corbató implemented a rudimentary password program for the CTSS. And by rudimentary… we mean the system prompted the user for a password and then saved it into a plaintext file within the filesystem.

Passwords were a step in the right direction, but a user could easily find where the passwords were stored and access them. In fact, that’s exactly what happened when a PhD researcher named Allan Scherr wanted more than his allotted four-hour block on the MIT computer. His greed may have made him the first hacker.
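To appreciate just how thin that protection was, here is the general idea in a few lines of modern Python. This is an illustrative sketch, not actual CTSS code:

```python
from getpass import getpass

# The 1961 approach, in spirit: prompt for a password, then store it as-is.
user = input("login: ")
password = getpass("password: ")

with open("passwords.txt", "a") as f:
    # Plaintext: anyone who can read this file (as Scherr did) owns every account.
    f.write(f"{user}:{password}\n")
```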

Late 1960s: Password encryption

By the late 1960s, programmers knew that storing passwords in plaintext wasn’t going to cut it. Still, they had to solve a tricky problem: the system needed some way to check an entered password against the user’s chosen one without keeping that password around in readable form. This complicated problem was finally solved by a cryptographer named Robert Morris.

While working at Bell Labs, Morris made foundational contributions to Unix, including a password scheme (based on work by Roger Needham) that stored only hashes of passwords for user authentication. Essentially, the scheme ran each password through a one-way function, the ancestor of today’s key derivation functions: a value that is easy to compute in the forward direction but extremely difficult to reverse.

A diagram of a trapdoor one-way function.
In this image, a generator Gen produces a function f along with a trapdoor t. It is easy to evaluate f, but finding its inverse is exceptionally difficult unless you are also given t.
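Morris’s original scheme was built on the cipher hardware of its era, but the same idea survives in modern key derivation functions. Here is a minimal sketch using PBKDF2 from Python’s standard library (a modern descendant of the idea, not Morris’s algorithm):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random salt ensures identical passwords don't hash to identical values.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Re-derive the hash and compare in constant time; the stored digest never
    # reveals the password, because the function is one-way.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("letmein", salt, digest)
```

The system stores only the salt and the digest; even an Allan Scherr who prints the whole file learns nothing directly useful.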

Though Morris was deep into the world of cryptography, he was realistic about his expectations for computer security. He once said, “The three golden rules to ensure computer security are: do not own a computer; do not power it on; and do not use it.” Ironic, given that his son became a felon for creating the first major computer worm to hit the Internet.

Early 1970s: Asymmetric Cryptography

The next step in the journey to robust user authentication was asymmetric cryptography (also known as public-key cryptography). To understand how it works and how it got established as a cryptographic method on computers, we need to go back to the 1800s.

In 1874, William Stanley Jevons wrote a book called The Principles of Science in which he wrote, “Can the reader say what two numbers multiplied together will produce the number 8616460799? I think it unlikely that anyone but myself will ever know.” Jevons’ point: multiplying two numbers together is easy, but recovering the factors from 8616460799 is practically impossible without a crucial bit of information from the person who derived the number. A “key,” if you will.

He essentially came up with public-key encryption way before computers were even a twinkle in a motherboard’s eye.
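Jevons did overestimate his secret’s lifespan, though: his number was eventually factored by hand, and today a laptop cracks it in well under a second with naive trial division:

```python
import math

N = 8616460799  # Jevons' "unknowable" number

# Trial division up to sqrt(N): hopeless by hand in 1874, trivial now.
for d in range(2, math.isqrt(N) + 1):
    if N % d == 0:
        print(f"{N} = {d} x {N // d}")  # 8616460799 = 89681 x 96079
        break
```

The lesson carried into RSA: security rests on the factoring problem being hard for numbers vastly larger than this one.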

Jevons’ theory gives us a bit of insight into how asymmetric cryptography works. There are two keys: a public key and a private key. The public key is openly shared and acts as an identifier for the user. The private key stays secret and is used to create digital signatures, which anyone can check against the public key to authenticate the user.
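Here is that sign-and-verify dance in miniature, using textbook RSA with toy-sized primes (for illustration only; real keys are thousands of bits long and use padding schemes):

```python
p, q = 61, 53
n = p * q        # public modulus (3233); its factors p and q stay secret
e = 17           # public exponent, shared with the world
d = 2753         # private exponent: e * d = 1 (mod (p-1)*(q-1))

message = 42
signature = pow(message, d, n)          # only the private key holder can sign...
assert pow(signature, e, n) == message  # ...but anyone can verify with (n, e)
```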

The concept of public-key cryptography was further developed secretly by a UK government employee named James H. Ellis in 1970, though he never got it fully working. 

The real deal came three years later, in 1973 when Clifford Cocks (a co-worker of Ellis) developed an algorithm that was mathematically identical to what we now know as the RSA encryption algorithm. A year later, in 1974, Malcolm J. Williamson devised an algorithm similar to the Diffie-Hellman key exchange, which was a separate breakthrough in secure key exchange. 

These discoveries remained classified until 1997, two decades after the independent development and publication of RSA by Rivest, Shamir, and Adleman in 1977.

Mid-1980s: Dynamic Passwords

As technology quickly advanced, the fallibility of passwords became more and more obvious. Users would choose easily guessable passwords or reuse the same passwords in multiple places. Eventually, as computers were built with more computing power, hackers could build programs to brute-force guess passwords. To combat this, computer scientists came up with dynamic passwords.

These passwords change based on variables like location, time, or a physical token update (like a key fob). They sharply reduce the risk of replay attacks and blunt the damage done when users reuse the same password in many places. Security Dynamics Technologies, Inc. was the first company to create key-fob hardware with a one-time password (OTP) for authentication.

A physical RSA SecurID that generates one-time passwords.

Over time, two dynamic password standards were introduced:

  • TOTP = Time-based OTP, where each code is derived from the current time (typically a 30-second window).
  • HOTP = HMAC-based OTP, where each code is derived from an HMAC of a counter that increments with every use.

HOTP, in particular, is a foundational component of the Initiative for Open Authentication (OATH), which produces industry-wide standards for authentication.
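Both algorithms are compact enough to sketch straight from their RFCs (RFC 4226 for HOTP, RFC 6238 for TOTP) using nothing but Python’s standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # "dynamic truncation": pick 4 bytes from the MAC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP where the "counter" is the current 30-second time window.
    return hotp(secret, int(time.time()) // step, digits)

print(totp(b"shared-secret"))  # matches the code on the user's token or app
```

Because both sides share the secret and the counter (or clock), the server can compute the same code the fob displays and compare.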

It’s very common for dynamic passwords to be used in conjunction with regular passwords as a form of two-factor authentication (2FA). We’ll get into the history of multi-factor authentication (MFA) a little later, but it’s important to note that it did appear as early as the ‘80s. And if you think it’s annoying now, it used to be a lot worse!

Late 1990s: Public Key Infrastructure

Remember how we said that the British government’s asymmetric cryptography research was kept secret until 1997? Fortunately, RSA and Diffie-Hellman had been published independently in the ’70s, and that public knowledge was a game-changer. In the late ’80s, computer scientists continued the work started in the ’70s and made moves to standardize it through public key infrastructure (PKI).

One of the main catalysts for the standardization work (besides finally releasing it to the public) was the World Wide Web. By the 1990s, the Internet was no longer a tool hoarded by universities and governments. Even companies had a presence on the Internet, including the infamous Pets.com.


With so much sensitive data online, beefing up authentication to know exactly who was accessing what became a must.

  • In 1986, a handful of U.S. government agencies (including the NSA) and 12 companies with an interest in computers developed specs for secure network communications. 
  • In the mid-’90s, Taher Elgamal — an engineer at Netscape — developed the original Secure Sockets Layer (SSL) protocol, which led to the creation of TLS (Transport Layer Security), standardizing secure internet transactions.

Eventually, the official PKI structure was fleshed out to define specifically how to create, store, and distribute digital certificates. Every PKI must include:

  • Certificate authority (CA) = Issuer of digital certificates (including signing)
  • Registration authority (RA) = Verifier of identities requesting digital certificates
  • Central directory = Where keys are stored
  • Certificate management system = Structure for operations, such as accessing stored certificates
  • Certificate policy = Statement of PKI requirements

You can find many open-source implementations of PKIs, like OpenSSL and EJBCA.
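You can also watch this machinery work from a few lines of Python: the standard library’s ssl module loads the system’s trusted CA certificates and validates the chain a server presents. A client-side sketch, with example.com standing in for any HTTPS host:

```python
import socket
import ssl

# The default context trusts the system CA bundle and checks hostnames:
# the client-side half of PKI in action.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        cert = tls.getpeercert()  # only available once the chain validated
        print("issuer: ", dict(pair[0] for pair in cert["issuer"]))
        print("subject:", dict(pair[0] for pair in cert["subject"]))
```

If the certificate chain doesn’t lead back to a trusted CA, the handshake fails before any application data is exchanged.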

2000s: Multi-Factor Authentication (MFA) and Single Sign-on (SSO)

We mentioned MFA a little earlier. It’s when you combine multiple types of authentication methods to verify someone’s identity, and it gained heavy traction and widespread adoption in the 2000s.

Over the years, more technologies were invented to help with MFA. For instance, instead of carrying around a key fob with an OTP, there’s an app for that (actually, there are many: Google Authenticator, Duo, and Authy, for example). Other forms of MFA include texting OTPs to users’ phones or sending magic links by email.

A screenshot showing the mobile Google Authenticator application.

Companies also began running extra checks as added measures against security threats. For instance, have you ever gotten an email saying your account was accessed from a location or device that isn’t recognized? It could be that you’re on vacation, using a VPN, or recently cleared your cache, or it could be nothing at all. This security email is just another step in verifying your identity.

Single Sign-on (SSO) was another major (and controversial) advancement made in the 2000s. SSO was formed from the idea that users are fundamentally not to be trusted with their passwords. 

After all, the passwords that people make up are usually not super secure, are used in multiple places, or are frequently forgotten. One study found that 42% of IT and security managers reported data breaches from user passwords being compromised, and 31% reported breaches caused by users sharing their passwords with unauthorized people.

With SSO, a trusted third party verifies that users are who they say they are, so individual sites don’t have to verify each set of credentials. Users log on to one website, and that site checks whether an SSO provider has authenticated them. If not, it prompts them to log in; if so, it grants them access.
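Under the hood, most modern SSO rides on OAuth 2.0 / OpenID Connect redirects. Here is a deliberately simplified sketch of step one, building the login redirect; the endpoint, client ID, and redirect URI are hypothetical placeholders, since real values come from your identity provider:

```python
from urllib.parse import urlencode

# Hypothetical values for illustration -- supplied by your SSO provider.
IDP_AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"
CLIENT_ID = "my-app"
REDIRECT_URI = "https://app.example.com/callback"

def login_redirect_url(state: str) -> str:
    # Step 1: send the unauthenticated user to the identity provider.
    return IDP_AUTHORIZE_URL + "?" + urlencode({
        "response_type": "code",  # the authorization-code flow
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile email",
        "state": state,  # random value the app checks later (CSRF protection)
    })

# Step 2 (not shown): the IdP redirects back with ?code=...&state=..., and the
# app exchanges that code for tokens in a server-to-server call.
```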

The problem is that the security of the companies in charge of SSO can be iffy. Take signing up for a Spotify account through your Facebook account as an example. If Facebook is breached, any account you log in to with Facebook could also be compromised. When using SSO, it’s important to pick a provider you can trust with your data.

2010s: Biometrics

Biometrics use biological traits, such as fingerprints or facial features, to authenticate users. Biometrics have been around for a while, but before the 2010s, they were mostly reserved for high-security facilities (and super cool spies in movies). Then, in 2011, a fingerprint scanner was added to the Motorola Atrix Android smartphone.


Apple was a bit behind the times with fingerprint scanners. Its Touch ID came out in 2013, two years after Motorola’s Atrix (remember them?). But by 2017, Apple had begun phasing out this popular feature in favor of Face ID on the iPhone X, which scans the user’s face using 30,000 infrared dots, a significant leap in biometric security for mainstream consumers. We don’t usually think of our phones as biometric devices, but they’re working with the same DNA as the sophisticated stuff you see in movies.

Now, it’s common to use your finger or face for authentication purposes when logging in to devices or completing digital purchases. 

Overall, biometrics are considered to be very secure. Stealing someone’s face is, generally speaking, difficult to pull off. That being said, there have been legal loopholes and technical backdoors that shake the foundation of biometrics as an authentication method.

The future of authentication

It’s hard to imagine anything more futuristic-sounding than a device scanning your face to authenticate you, but a new authentication technology is likely just around the corner. There’s talk around Silicon Valley of using your heartbeat, gait, or even behavior patterns for authentication in the near future. Some companies are also using less shiny, but perhaps more practical, methods of passwordless authentication like magic links or one-time codes sent through email and SMS.

What’s next for you?

Authentication today goes far beyond just passwords; it extends to MFA, social logins, magic links, and SSO if you’re selling to enterprises. However, implementing these systems can swiftly become complex and tedious, prompting many teams to opt for authentication providers like WorkOS.

  • Get started with AuthKit: AuthKit, a customizable hosted UI, supports all these authentication methods and abstracts all the complexities of building your authentication system from scratch. Plus, it’s free for your first million users. 
  • Support every protocol: With OAuth 2.0 integrations to popular providers like Google and Microsoft, compatibility with every major IdP, and full support for custom SAML/OIDC connections, WorkOS can support any enterprise customer out of the box.
  • Pricing that makes sense: Unlike competitors who price by monthly active users, WorkOS charges a flat rate for each company you onboard — whether they bring 10 or 10,000 SSO users to your app.

Start building with WorkOS.
