April 30, 2026

Picking a password hash: A developer's guide to argon2, bcrypt, and scrypt

Three algorithms compared, a clear default, and the tradeoffs that should make you pick something else.


If you're storing passwords, you already know you shouldn't be storing them in plaintext. You probably also know not to reach for MD5 or SHA-256. Those are designed to be fast, and "fast" is exactly what you don't want when an attacker is trying billions of guesses per second against your leaked database.

That leaves you choosing between a small set of purpose-built password hashing algorithms. The three that come up over and over again are bcrypt, scrypt, and Argon2. They all do the same job (turn a password into a verifier that's painful to brute-force), but they take meaningfully different approaches, and the right pick depends on your stack, your constraints, and how recently your codebase was written.

This is a practical look at how each one works, where they shine, where they don't, and how to actually decide.

What a password hash is supposed to do

Before comparing implementations, it's worth being precise about the job. A good password hashing function should:

  1. Be one-way. Given the output, you shouldn't be able to recover the input.
  2. Use a unique salt per password. This kills precomputed rainbow tables and ensures two users with the same password get different hashes.
  3. Be deliberately slow, in a way you can tune. A "work factor" or set of cost parameters lets you scale difficulty over time as hardware improves.
  4. Resist hardware acceleration. Modern attackers don't use CPUs; they rent racks of GPUs or build ASICs. A hash that's only CPU-expensive falls quickly to that.

The first two are table stakes. Every algorithm in this article handles them. The interesting differences live in points three and four: specifically, how each algorithm imposes cost on an attacker, and how well that cost holds up against parallel hardware.
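The salt requirement in particular is easy to see in code. A minimal sketch using Python's standard-library `hashlib.scrypt` (with deliberately small, illustration-only parameters, not the production settings discussed later):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Small scrypt parameters for illustration only; see the
    # per-algorithm sections below for production-grade settings.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

# Two users pick the same password...
salt_a, salt_b = os.urandom(16), os.urandom(16)
hash_a = hash_password("correct horse battery staple", salt_a)
hash_b = hash_password("correct horse battery staple", salt_b)

# ...but unique salts give them different verifiers.
assert hash_a != hash_b

# Verification re-derives the hash from the stored salt and compares.
assert hash_password("correct horse battery staple", salt_a) == hash_a
```

In practice a library handles the salt generation and storage for you; this just shows why point two defeats precomputed tables.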

bcrypt: The elder statesman

bcrypt was published in 1999 by Niels Provos and David Mazières, built on top of the Blowfish block cipher. It's been the default in countless web frameworks for two decades, and that ubiquity is one of its biggest strengths.

Internally, bcrypt does something clever: it uses an "expensive key setup" version of Blowfish where the key schedule itself is iterated. A single tunable cost factor (sometimes called the work factor) controls how many rounds of this setup happen. Each increment of the cost factor doubles the work, so going from cost 10 to cost 12 makes hashing four times slower, and four times more expensive for an attacker.

Pros:

  • Battle-tested. Twenty-five-plus years of real-world use, and no one has found a way to meaningfully accelerate it on common hardware that the legitimate user doesn't also see.
  • Available everywhere. Every major language has a stable, well-audited library.
  • One parameter to tune. You don't have to think about memory budgets or parallelism. Just turn the cost dial until verification takes around 250 milliseconds on your servers.

Cons:

  • It's only CPU-hard, not memory-hard. A modern GPU can run many bcrypt instances in parallel because each one needs only a small amount of memory. It's still slow per attempt, but attackers can run a lot of attempts at once.
  • It has a hard input limit of 72 bytes. Anything longer is silently truncated. That's a footgun if your application allows long passwords or passphrases.
  • Some implementations have null-byte truncation quirks, especially when developers pre-hash passwords with another function to work around the 72-byte limit. Doing that without proper encoding can introduce real vulnerabilities.

OWASP currently positions bcrypt as a fallback option for legacy systems where Argon2 and scrypt aren't available, with a recommended minimum work factor of 10, and ideally 12 or higher on modern hardware.

scrypt: The memory-hard answer

Colin Percival introduced scrypt in 2009, originally for the Tarsnap online backup service, and it was the first widely deployed password hashing function explicitly designed to be memory-hard. The intuition: GPUs and ASICs are good at parallel computation, but they have limited and expensive RAM per core. Force an algorithm to use a lot of memory and you blunt that hardware advantage.

scrypt builds a large array of pseudorandom values, then accesses it in a sequence determined by the input. Because each access depends on previous values, you can't easily parallelize it, and you can't shrink the memory footprint without paying a steep computational penalty (the so-called time-memory tradeoff).

scrypt exposes three parameters:

  • N: the CPU/memory cost factor. Memory usage scales linearly with N.
  • r: the block size, which fine-tunes memory access patterns.
  • p: parallelism, for spreading work across cores.

Pros:

  • Memory-hardness genuinely raises the cost of GPU and ASIC attacks compared to bcrypt.
  • Long deployment history. It's been protecting passwords (and famously, several cryptocurrencies' proof-of-work) since 2009.

Cons:

  • Three parameters means three things to get wrong. The interactions between N, r, and p aren't always intuitive.
  • Memory-hardness is real but not as strong as Argon2's. Cryptographic analysis has shown scrypt's tradeoff resistance has limits that newer designs improve on.
  • It doesn't have separate variants for different threat models (more on that in a moment).

OWASP recommends scrypt with a minimum CPU/memory cost of N = 2^17, block size r = 8, and parallelism p = 1 when Argon2id isn't available.
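Python ships scrypt in the standard library, so those OWASP parameters can be sketched without any dependencies. One wrinkle worth knowing: OpenSSL caps scrypt's memory at around 32 MiB by default, so at N = 2^17 you have to raise `maxmem` explicitly.

```python
import hashlib
import os

# OWASP's recommended minimums: N = 2**17, r = 8, p = 1.
# Memory use is roughly 128 * N * r bytes, about 128 MiB here,
# so maxmem must be raised above OpenSSL's default cap.
N, R, P = 2**17, 8, 1

salt = os.urandom(16)
key = hashlib.scrypt(b"correct horse battery staple", salt=salt,
                     n=N, r=R, p=P,
                     maxmem=256 * 1024 * 1024, dklen=32)
assert len(key) == 32
```

`hashlib.scrypt` only derives the key; unlike the bcrypt and Argon2 libraries, you're responsible for storing the salt and parameters alongside the hash yourself.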

Argon2: The modern default

In 2013, the cryptographic community ran the Password Hashing Competition (PHC) to find a new standard, partly because both bcrypt and scrypt had known limitations and partly because everyone agreed it was time for a function designed from the ground up for the password-hashing problem. The competition ran for two years and Argon2, designed by Alex Biryukov, Daniel Dinu, and Dmitry Khovratovich, won in 2015. It was later standardized as RFC 9106.

Argon2 comes in three variants, and choosing the right one matters:

  • Argon2d: uses data-dependent memory access, which maximizes resistance to GPU cracking but can leak information through side-channel timing attacks. Good for cryptocurrency proof-of-work, not great for password hashing on shared hardware.
  • Argon2i: uses data-independent memory access, which protects against side-channel attacks but is slightly weaker against time-memory tradeoff attacks.
  • Argon2id: a hybrid that runs Argon2i for the first half and Argon2d for the second half. This is the variant you almost certainly want for password hashing, and it's what every modern recommendation points to.

Argon2 has three tunable parameters:

  • Memory cost (m): how much RAM each hash uses.
  • Time cost (t): the number of iterations over that memory.
  • Parallelism (p): how many threads can work on a single hash.

Pros:

  • Strongest known resistance to GPU and ASIC attacks among the three.
  • Argon2id explicitly addresses both side-channel and time-memory tradeoff threats.
  • Designed in the open, peer-reviewed during the PHC, and now standardized in an RFC.
  • Independent control over memory, time, and parallelism lets you tune to your specific hardware.

Cons:

  • "Newer," though "10+ years old and standardized" is hardly experimental at this point.
  • More parameters means more decisions. Get them wrong and you can either burn through your server's memory or end up with a hash that's faster than you intended.
  • Memory requirements can be a problem for very constrained environments (think embedded devices or memory-tight serverless platforms).

OWASP's current guidance is to use Argon2id as the default, with one of two minimum configurations: m = 19 MiB, t = 2, p = 1, or m = 46 MiB, t = 1, p = 1. Both provide equivalent security through different tradeoffs.
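Here's what the first of those configurations looks like with `argon2-cffi` (assumed installed via `pip install argon2-cffi`), whose `PasswordHasher` defaults to the Argon2id variant and takes `memory_cost` in KiB:

```python
# Requires the third-party "argon2-cffi" package (pip install argon2-cffi).
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

# One of OWASP's minimum configurations: m = 19 MiB, t = 2, p = 1.
# memory_cost is expressed in KiB (19 MiB = 19456 KiB).
ph = PasswordHasher(memory_cost=19456, time_cost=2, parallelism=1)

hashed = ph.hash("correct horse battery staple")
assert hashed.startswith("$argon2id$")  # the library defaults to Argon2id

# verify() returns True on a match and raises on a mismatch.
assert ph.verify(hashed, "correct horse battery staple")
try:
    ph.verify(hashed, "wrong password")
except VerifyMismatchError:
    pass
```

The hash string embeds the variant, version, and all three parameters, which is what makes later parameter upgrades painless.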

How to choose

In an ideal greenfield project on modern infrastructure, the answer is short: use Argon2id. Pick parameters that take 250 to 500 ms on your verification servers, start with the OWASP minimums, and move up from there as your hardware allows.

In the real world, the decision involves more context:

  • You should reach for Argon2id when you're starting a new project, your platform has a well-maintained Argon2 library (most do: argon2-cffi for Python, argon2 for Node.js, argon2 crate for Rust, libsodium bindings almost everywhere), and you're not subject to a compliance regime that mandates something else.
  • scrypt is a reasonable choice when Argon2 isn't available or trustworthy on your platform but you still want memory-hardness. It's also already the default in some ecosystems (Node.js ships scrypt in its standard library, for instance), which can mean one less dependency.
  • bcrypt is the right call when you're working in a legacy codebase that already uses it, when you have an unusual environment without good Argon2 or scrypt support, or when simplicity matters more than maximum theoretical strength. It's not a wrong answer in 2026; it's just no longer the best one. If you go this route, set the cost factor to at least 12 and enforce a maximum password length at or below 72 bytes.
|                        | bcrypt                 | scrypt             | Argon2id            |
|------------------------|------------------------|--------------------|---------------------|
| Year introduced        | 1999                   | 2009               | 2015                |
| CPU-hard               | ✅                     | ✅                 | ✅                  |
| Memory-hard            | ❌                     | ✅                 | ✅ (stronger)       |
| Side-channel resistant | n/a (no memory passes) | partial            | ✅ (id variant)     |
| Tunable parameters     | 1 (cost)               | 3 (N, r, p)        | 3 (m, t, p)         |
| Max input length       | 72 bytes               | none               | none                |
| Standardized           | de facto               | de facto           | RFC 9106            |
| OWASP positioning      | Legacy fallback        | Strong alternative | Recommended default |

If you have a FIPS 140 requirement, none of these are your primary option. You'll need PBKDF2 with HMAC-SHA-256 and an iteration count of at least 600,000. PBKDF2 lacks memory-hardness, but it's the algorithm that has the certifications.
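PBKDF2 is also in Python's standard library. A minimal sketch at the OWASP iteration floor:

```python
import hashlib
import os

# OWASP's floor for PBKDF2-HMAC-SHA-256: 600,000 iterations.
ITERATIONS = 600_000

salt = os.urandom(16)
derived = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                              salt, ITERATIONS, dklen=32)

# Deterministic: the same password and salt reproduce the verifier.
assert derived == hashlib.pbkdf2_hmac("sha256",
                                      b"correct horse battery staple",
                                      salt, ITERATIONS, dklen=32)
```

As with `hashlib.scrypt`, you store the salt and iteration count yourself; FIPS-validated deployments additionally require the underlying crypto module to be a certified one.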

Things that bite people regardless of which you pick

A few practical points that apply across all three:

  • Don't roll your own. Use a library that handles salt generation, parameter encoding, and constant-time comparison for you. The standard hash strings ($2b$... for bcrypt, $argon2id$... for Argon2) embed the parameters alongside the hash so you can change them later without breaking existing users.
  • Plan for parameter migration. When the next OWASP update raises minimum recommendations, you'll want to re-hash users on their next successful login (you have their plaintext password at that moment, briefly). Storing the algorithm and parameters in the hash itself makes this straightforward.
  • Consider a pepper for additional defense in depth. A pepper is a secret value applied to passwords before hashing, stored separately from the database (in an HSM or secrets manager). If your database leaks but the pepper doesn't, attackers can't crack the hashes at all. A pepper isn't a substitute for a strong hash function; it's an extra layer.
  • Be careful with pre-hashing to work around bcrypt's 72-byte limit. If you pre-hash with SHA-256 to allow long passwords, encode the result as Base64 or hex before passing it to bcrypt. Raw binary output can contain null bytes, and bcrypt's truncation behavior on null bytes has caused real-world issues (look up "password shucking" for the gory details).
  • Tune for your hardware, not for someone else's blog post. A configuration that takes 300 ms on a beefy server might take 3 seconds on a cheap container. Run a benchmark in your actual environment and pick parameters that put a single hash verification in the 250 to 500 ms range: slow enough to hurt attackers, fast enough that legitimate users won't notice.
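The parameter-migration point is easier to see concretely. A hypothetical sketch (names and policy values invented for illustration) that parses the parameters embedded in a stored Argon2id hash string, following RFC 9106's `$argon2id$v=19$m=...,t=...,p=...$salt$hash` encoding, and flags hashes that need re-hashing on the user's next login:

```python
# Hypothetical migration check: parse the parameters embedded in a
# stored Argon2id hash string and decide whether to re-hash on the
# user's next successful login.

CURRENT_POLICY = {"m": 19456, "t": 2, "p": 1}  # illustrative policy

def needs_rehash(stored_hash: str) -> bool:
    fields = stored_hash.split("$")
    if fields[1] != "argon2id":
        return True  # migrate anything that isn't Argon2id
    params = dict(kv.split("=") for kv in fields[3].split(","))
    return any(int(params[k]) < CURRENT_POLICY[k] for k in CURRENT_POLICY)

# A hash produced under weaker parameters gets flagged; a current one doesn't.
old = "$argon2id$v=19$m=4096,t=1,p=1$c29tZXNhbHQ$c29tZWhhc2g"
new = "$argon2id$v=19$m=19456,t=2,p=1$c29tZXNhbHQ$c29tZWhhc2g"
assert needs_rehash(old) is True
assert needs_rehash(new) is False
```

In practice your hashing library may do this for you (argon2-cffi, for example, ships a `check_needs_rehash` method), but the mechanism is the same: the parameters live in the hash string, so the check costs nothing.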

The short version

Argon2id is the strongest of the three by current cryptographic understanding, and it's what you should use unless something specific stops you. scrypt is a solid alternative when Argon2 isn't on the table. bcrypt is the safe legacy choice, still secure when properly tuned, but with limitations that newer algorithms have left behind.

Whichever you pick, the bigger wins come from the surrounding practices: unique salts (handled for you), generous cost parameters (your responsibility), a migration path for when those parameters need to grow, and never, ever rolling the cryptography yourself.

Or, leave it to us

Password hashing is one small piece of a much bigger authentication stack: SSO, SCIM provisioning, directory sync, audit logs, MFA, session management. WorkOS handles the parts of auth that aren't your product, so you can focus on the parts that are.

Create a free WorkOS account →
