Best practices for CLI authentication: a technical guide

Learn how to securely authenticate users accessing your service through a command-line tool, enabling safe, scriptable workflows across terminals, machines, and Docker containers.


Your company provides a service, and your customers want to access it via a command-line tool so they can freely script and automate their desired workflows.

How do you securely authenticate a user running commands across multiple terminals, machines, and Docker containers?

The right solution depends heavily on context—what's sufficient for a developer's workflow could be a critical security gap in an enterprise setting.

This guide explores common CLI authentication patterns used by tools like GitHub CLI and AWS CLI, focusing on how to:

  • Securely obtain credentials (e.g., access tokens) from an API or Authorization server
  • Manage and store these credentials for seamless reuse across requests

We'll also cover best practices for avoiding common vulnerabilities and highlight strategies for balancing security and usability across containerized environments, CI/CD pipelines, and local development setups.

Common CLI authentication patterns

1. API Keys and Credentials Files

The most straightforward approach is having users hardcode an API key or credentials in a configuration file.

Real-world examples:

  1. Stripe's CLI uses API keys stored in ~/.stripe/config.toml
  2. AWS CLI uses a credentials file located at ~/.aws/credentials which looks like this:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[production]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE2
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY2
role_arn = arn:aws:iam::123456789012:role/ProductionRole

The command-line tool reads the secret from the local configuration file at runtime and presents it to the CLI's backend API as an authentication token, often a bearer token in an HTTP request header:
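
Here's a minimal sketch of that flow (the ~/.mycli/credentials path and api.service.com endpoint are placeholders for illustration, not any specific vendor's layout):

const fs = require('fs');
const os = require('os');
const path = require('path');

async function callApi(endpoint) {
  // Read the long-lived secret from the local configuration file at runtime
  // (hypothetical path used for illustration)
  const configPath = path.join(os.homedir(), '.mycli', 'credentials');
  const apiKey = fs.readFileSync(configPath, 'utf8').trim();

  // Present it as a bearer token on the request to the backend API
  return fetch(`https://api.service.com${endpoint}`, {
    headers: { Authorization: `Bearer ${apiKey}` }
  });
}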

The same thing that makes this approach appealing from a usability perspective—the token is just sitting in a flat file in plaintext—makes it problematic from a security perspective.

Companies use this pattern for individual developer machines in practice, but it's unsuitable for team environments without additional security measures, such as encrypting secret-containing files.

Enhancing security with temporary tokens

It's best to exchange long-lived security credentials at runtime for temporary ones.

AWS's Security Token Service (STS) illustrates this pattern well: it exchanges long-lived credentials for temporary tokens that expire in one hour.

While tools like aws-vault demonstrate this approach effectively by managing temporary token generation, you should implement your own temporary token system tailored to your specific security requirements.
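
As a rough sketch of what that exchange looks like with the AWS SDK for JavaScript (your own system would call its equivalent token-issuing endpoint instead), STS trades the long-lived keys from ~/.aws/credentials for a session that expires after an hour:

const { STSClient, GetSessionTokenCommand } = require('@aws-sdk/client-sts');

async function getTemporaryCredentials() {
  // The client picks up the long-lived keys from ~/.aws/credentials
  const sts = new STSClient({});

  // Exchange them for credentials that expire after one hour
  const { Credentials } = await sts.send(
    new GetSessionTokenCommand({ DurationSeconds: 3600 })
  );

  return Credentials; // AccessKeyId, SecretAccessKey, SessionToken, Expiration
}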

2. Browser-Based OAuth Flow

For OAuth services, CLIs often launch a browser for the authentication flow, capture an authorization code through a local callback server, exchange it for an access token, and save that token to the filesystem for use on subsequent requests.

Real-world example: GitHub's CLI (gh) uses this pattern to launch a browser for OAuth authentication.

First, the user initiates the flow by running gh auth login. The CLI displays a one-time code and opens GitHub's Device Activation page in the browser.

After proceeding through the Device Activation confirmation step, the user enters that same one-time password (OTP) in the browser.

Once the user enters the correct code, GitHub authenticates their session.

At this point, gh obtains an authentication token from GitHub and writes it to the filesystem for use in subsequent requests. The developer is now logged in via the CLI.

Sample Browser-based auth implementation

This example code spins up a server that binds to the developer's localhost on port 8000 to listen for the code returned by the OAuth service specified by authUrl:


const http = require('http');
const open = require('open');

async function authenticateWithOAuth() {
  // Start a local server to receive the OAuth callback
  const server = http.createServer();
  const port = 8000;

  return new Promise((resolve, reject) => {
    server.on('error', reject);

    server.on('request', (req, res) => {
      const urlParams = new URLSearchParams(req.url.split('?')[1]);
      const code = urlParams.get('code');

      if (code) {
        res.end('Authentication successful! You can close this window.');
        server.close();
        resolve(code);
      } else {
        res.writeHead(400);
        res.end('Missing authorization code.');
      }
    });

    // Bind to localhost only so other machines on the network can't reach the callback
    server.listen(port, '127.0.0.1', () => {
      // Open the browser to the OAuth authorization URL
      const redirectUri = encodeURIComponent(`http://localhost:${port}`);
      const authUrl = `https://api.service.com/oauth/authorize?client_id=xxx&redirect_uri=${redirectUri}`;
      open(authUrl);
    });
  });
}

When the OAuth provider successfully authenticates the user, it returns a redirect response to the user's localhost and configured port.

The endpoint on the developer's machine captures this code from the query string parameters and exchanges it with the OAuth provider's token endpoint for a secure access token over HTTPS, which it then stores locally.
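
Continuing the sketch above, that exchange might look like this (the /oauth/token endpoint, payload fields, and response shape are placeholders for whatever your OAuth provider actually defines):

async function exchangeCodeForToken(code) {
  // Trade the short-lived authorization code for an access token over HTTPS
  const response = await fetch('https://api.service.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      grant_type: 'authorization_code',
      client_id: 'xxx',
      code,
      redirect_uri: 'http://localhost:8000'
    })
  });

  if (!response.ok) {
    throw new Error('Token exchange failed');
  }

  const { access_token } = await response.json();
  return access_token; // Store this locally (ideally in the system keychain)
}

// Usage: const token = await exchangeCodeForToken(await authenticateWithOAuth());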

Browser-based auth flow diagram

This pattern provides excellent security through standard OAuth flows and works well with modern authentication features like SSO and MFA.

However, its browser dependency makes it unsuitable for headless environments and automation, such as running in CI/CD via GitHub Actions.

It's also more complex to implement, and there are several potential security issues to defend against.

Browser-based vulnerabilities to avoid

Port Binding Issues

When starting a local server for OAuth callbacks, binding to all interfaces (0.0.0.0) exposes your authentication server to other machines on the network.

This could allow attackers to intercept OAuth codes or inject malicious responses. Always bind only to localhost (127.0.0.1) to ensure the callback server is accessible only from the local machine.


// VULNERABLE: Binding to all interfaces!
server.listen(8000, '0.0.0.0');
   
// SECURE: Bind only to localhost
server.listen(8000, '127.0.0.1');

Cross-Site Request Forgery (CSRF) Attacks

Without state verification, attackers could trick users into submitting auth codes from their OAuth flow to your callback server, potentially gaining access to their credentials.

The state parameter acts as a CSRF token, ensuring the callback matches the original request. Generate a secure random state value and verify it in the callback:


// VULNERABLE: No state verification
server.on('request', (req, res) => {
  const code = new URL(req.url, 'http://localhost').searchParams.get('code');
  // ...the code is accepted without proving it belongs to this login attempt
});

// SECURE: Include and verify a state parameter
const crypto = require('crypto');
const expectedState = crypto.randomBytes(32).toString('hex');
// Append expectedState to the authorization URL and keep it for comparison
server.on('request', (req, res) => {
  const params = new URL(req.url, 'http://localhost').searchParams;
  if (params.get('state') !== expectedState) {
    res.writeHead(400);
    return res.end('Invalid state parameter');
  }
  // ...continue with the authorization code exchange
});

Token Storage

Storing tokens in plaintext files makes them vulnerable to malware, other users on the system, or accidental exposure through backups or file sharing.

System keychains provide encrypted storage with OS-level access controls, significantly reducing the risk of token theft.

3. Device Code Flow

For environments without browsers, the device code flow provides a URL and code that users enter in their browser on another device.


async function authenticateWithDeviceCode() {
  // Request a device code and user code from the authorization server
  const response = await fetch('https://api.service.com/device/code', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ client_id: 'xxx' })
  });

  const { device_code, verification_url, user_code } = await response.json();

  console.log(`Please visit ${verification_url} and enter code: ${user_code}`);

  // Poll the token endpoint until the user completes verification
  // (a production implementation should honor the server's polling interval
  // and stop once the device code expires)
  while (true) {
    const tokenResponse = await fetch('https://api.service.com/token', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        grant_type: 'device_code',
        device_code
      })
    });

    if (tokenResponse.ok) {
      return tokenResponse.json();
    }

    await new Promise(resolve => setTimeout(resolve, 5000));
  }
}

Real-world example: AWS CLI uses device code flow when browser-based auth isn't available.

The device code flow solves the browser accessibility problem while maintaining OAuth security benefits but introduces more user friction through manual code entry and polling.

4. Username/Password with Token Exchange

Some CLIs accept traditional credentials and exchange them for tokens, though this pattern is becoming less common due to security concerns.


async function authenticateWithCredentials(username, password) {
  // Exchange the raw username/password for a token; never persist the credentials themselves
  const response = await fetch('https://api.service.com/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password })
  });

  if (!response.ok) {
    throw new Error('Authentication failed');
  }

  const { token } = await response.json();
  return token;
}

This pattern is familiar to users and simple to implement, but it requires handling sensitive credentials directly and lacks modern security features. Consider this approach a non-starter for any greenfield development projects.

Token Storage and Security

Once authenticated, CLIs need to store tokens securely.

1. System Keychains

System keychains are built-in secure storage systems provided by operating systems (like Keychain on macOS or Credential Manager on Windows) that encrypt and protect sensitive data using hardware security features. They offer a standardized way to store secrets while letting the OS handle encryption and access control.


const keytar = require('keytar');

async function storeToken(token) {
  await keytar.setPassword('myapp', 'default', token);
}

async function getToken() {
  return await keytar.getPassword('myapp', 'default');
}

System keychains are recommended as the primary storage mechanism because they leverage OS-level security features designed explicitly for credential storage.

They provide encryption at rest and secure memory handling without requiring developers to implement these features.

2. Encrypted Configuration Files

Many CLI tools use encrypted files to store sensitive credentials, typically in the user's home directory. Rather than implementing encryption from scratch, most use established libraries or formats:

Common Approaches:

  • GPG encryption (used by some Git credential helpers)
  • AES encryption with a master key derived from system properties
  • Password-based encryption, where users provide a master password
  • XDG-compliant encrypted credential storage (used by many Linux tools)

Real-world Examples:

  • npm stores per-registry credentials in ~/.npmrc (plaintext by default, so strict file permissions matter)
  • pip can store credentials in the system keyring via the keyring package
  • Gradle reads repository credentials from ~/.gradle/gradle.properties
  • kubectl stores cluster credentials in ~/.kube/config

This approach allows for portability across environments and machines because encrypted configuration files can be version-controlled, backed up, and restored.

However, careful key management is required to prevent exposure through memory dumps, and the workflow may involve manual decryption and secure-deletion steps.

This approach often adds a layer of user interaction for decryption but provides flexibility in various environments. It can complicate deployments or increase the likelihood your users will encounter confusing errors they need support to get through.
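
For illustration, here is a minimal sketch of the password-based variant using Node's built-in crypto module (a sketch only; real tools also need to persist the salt and nonce, handle decryption failures, and rotate keys):

const crypto = require('crypto');

function encryptCredentials(plaintext, masterPassword) {
  // Derive a 256-bit key from the user's master password
  const salt = crypto.randomBytes(16);
  const key = crypto.scryptSync(masterPassword, salt, 32);

  // Encrypt with AES-256-GCM so tampering is detectable
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);

  // Persist salt, iv, auth tag, and ciphertext together (e.g., as JSON on disk)
  return {
    salt: salt.toString('hex'),
    iv: iv.toString('hex'),
    tag: cipher.getAuthTag().toString('hex'),
    data: ciphertext.toString('hex')
  };
}

Decryption reverses the process with crypto.createDecipheriv and setAuthTag, failing loudly if the file has been tampered with.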

3. Environment Variables


const token = process.env.API_TOKEN;

require('dotenv').config();
const token = process.env.API_TOKEN;

Environment variables are simple to implement and easy to rotate, and they are the standard approach for CI/CD systems.

However, they're also visible in process listings, can be accidentally logged, and do not provide encryption at rest. There are some critical rules for using environment variables securely:

Runtime Injection

  • DO: Inject secrets at runtime through your CI/CD platform's secure mechanisms
  • DON'T: Store sensitive values in CI/CD configuration files
  • DON'T: Echo or print environment variables in build logs

# UNSAFE - Don't store secrets in CI config files
env:
  API_KEY: "sk_live_123456789"  # Exposed in version control

# SAFE - Use CI platform's secret management
env:
  API_KEY: ${{ secrets.API_KEY }}  # Injected at runtime

Secrets management solutions

For production workflows, implement a dedicated secrets management solution:

Cloud Provider Solutions

  • AWS Secrets Manager
  • Google Cloud Secret Manager
  • Azure Key Vault

Self-hosted Options

  • HashiCorp Vault
  • Sealed Secrets for Kubernetes

You can then read your secret out of the secrets manager at runtime while performing your build:


import boto3

def get_secret():
    session = boto3.session.Session()
    client = session.client('secretsmanager')
    
    try:
        response = client.get_secret_value(SecretId='my-api-key')
        return response['SecretString']
    except Exception as e:
        raise Exception("Failed to retrieve secret") from e

Use CI/CD Platform Security Features

Modern CI/CD platforms provide built-in security features for managing sensitive data:

GitHub Actions

  • Repository Secrets
  • Environment Secrets
  • Organization Secrets

GitLab CI

  • CI/CD Variables
  • Protected Variables
  • Group-level Variables

Jenkins

  • Credentials Plugin
  • Secrets Management Integration

The idea is to store your secrets in a dedicated secret store and then reference them as needed in your workflow files.

For example, reading an API key via GitHub Actions secrets:


jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v2
      - name: Deploy
        env:
          API_KEY: ${{ secrets.API_KEY }}
        run: |
          # API_KEY is only available within this step
          ./deploy.sh

Avoid using environment variables for end-user CLI tools because:

  • They require manual user management
  • System monitoring might expose them
  • They lack the security features of modern credential storage systems

CLI authentication in containerized environments

Container environments create specific challenges for CLI authentication. These arise in two common scenarios:

When users run your CLI tool inside a container:


# Running a CLI tool in a container
docker run company-cli deploy --env production

When your CLI needs to authenticate with containerized services:


# Using CLI to interact with Docker daemon
company-cli container list

These scenarios introduce significant constraints on your auth pattern choices:

  • System keychains aren't available inside containers
  • Browser-based OAuth flows won't work in containerized environments
  • Mounted credential files require careful security consideration
  • Environment variables need special handling in containers

Best Practices for Container Authentication

Environment Variables (Recommended for CI/CD)


# Docker run with secure credential passing
docker run \
  -e API_TOKEN=${API_TOKEN} \
  company-cli deploy

When you run this command, Docker creates a container and makes the API_TOKEN value available only to processes inside that container. You pass the token at runtime but never permanently store it in the container image.

This approach is ideal for CI/CD pipelines where the CI platform manages credentials.

Volume-Mounted Credentials (Development)


# Mount credentials read-only from the host
docker run \
  -v ${HOME}/.company/config:/root/.company/config:ro \
  company-cli deploy

This command makes your local credentials available inside the container by mounting your config file from your host machine. The :ro flag makes it read-only, preventing the containerized process from modifying your credentials.

This pattern is helpful during local development when using your existing authentication.

Device Code Flow (Interactive Use)

  • Ideal when browser-based OAuth isn't available
  • Works in containerized environments
  • Requires separate device for auth

The device code flow displays a code you enter in a browser on any device, making it perfect for containers where you can't open a browser directly.

Security considerations for containers

Never Bake Credentials into Container Images

A common but dangerous mistake is embedding credentials directly in your Docker image. Here's an example of what NOT to do:


# DANGEROUS: Don't do this!
FROM node:18
ENV API_KEY=sk_live_123456789
COPY . .
RUN npm install

This pattern is dangerous because:

  1. The API key becomes part of the container image
  2. Anyone who pulls the image can extract the key:

# An attacker can easily extract baked-in credentials
docker history --no-trunc image-name
# or
docker run image-name env | grep API_KEY

  3. You cannot rotate the credential without rebuilding the image
  4. The credential is visible in your Dockerfile and build logs

Instead, always pass credentials at runtime. Define environment variables in your Dockerfile:


# SAFE: Accept credentials at runtime
FROM node:18
COPY . .
RUN npm install
# Define but don't set the environment variable
ENV API_KEY=""

and pass them into the container:


docker run \
  -e API_TOKEN=${API_TOKEN} \
  company-cli deploy

Additional Security Measures

Read-only Credential Mounts: When mounting credential files, always use read-only mode (:ro) to prevent containerized processes from modifying them.

Docker Secrets: For Docker Swarm deployments, use Docker's built-in secrets management:


# Create a secret
docker secret create api_key api_key.txt

# Use the secret in a service
docker service create \
  --secret api_key \
  company-cli
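
Inside the service's containers, Docker Swarm mounts each secret as an in-memory file under /run/secrets/, so the CLI reads it from disk rather than from an environment variable. A minimal sketch, assuming the api_key secret created above:

const fs = require('fs');

function readDockerSecret(name) {
  // Swarm mounts secrets as in-memory files under /run/secrets/<name>
  return fs.readFileSync(`/run/secrets/${name}`, 'utf8').trim();
}

const apiKey = readDockerSecret('api_key');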

Environment Variable Hygiene: Clear sensitive environment variables when possible and avoid printing them in logs:


# After using the credential
unset API_KEY

Temporary Tokens: Use short-lived tokens when possible, especially in containers.

Choosing the Right Pattern

For containerized environments:

  • CI/CD pipelines → Environment variables (managed by the CI platform) passed into the container at runtime; never “baked” into the image or hardcoded
  • Local development → Volume-mounted credentials + device code flow
  • Production services → Cloud provider secret management (AWS Secrets Manager, Google Secret Manager, etc.)

CLI auth anti-patterns to avoid

The most secure CLI auth implementations:

  • Minimize the lifetime of sensitive credentials
  • Provide a smooth user experience
  • Fail securely and gracefully

Here are the common pitfalls to avoid when implementing auth in your command line tool:

  1. Insecure Storage of Long-lived Credentials
  • ❌ Storing API keys in plaintext files
  • ❌ Hardcoding credentials in source code
  • ✅ Instead: Use system keychains or encrypted storage
  2. Unsafe Local Server Implementation
  • ❌ Using a fixed callback port for OAuth
  • ❌ Accepting callbacks without state verification
  • ❌ Running callback server without localhost binding
  • ✅ Instead: Use dynamic ports, verify state parameter, bind to localhost only
  3. Poor Token Management
  • ❌ Sharing tokens across environments
  • ❌ Not implementing token refresh
  • ❌ Using the same token for different permission levels
  • ✅ Instead: Use environment-specific tokens with appropriate scoping
  4. Compromised User Experience
  • ❌ Requiring re-authentication for every command
  • ❌ Mixing authentication methods inconsistently
  • ❌ No clear path to recover from authentication failures
  • ✅ Instead: Implement secure token caching and clear error recovery
  5. Risky Environment Handling
  • ❌ Using browser-based flows in CI/CD
  • ❌ Storing persistent tokens in containers
  • ❌ Sharing credentials across development and production
  • ✅ Instead: Use appropriate auth patterns for each environment

Choosing the right CLI auth pattern

The authentication pattern you choose shapes both your CLI's security posture and user experience. Remember these key guidelines for common scenarios:

  • Developer Tools & SDKs: Browser-based OAuth flows provide the ideal balance of security and usability. They leverage existing auth systems, support SSO/SAML integration, and offer a familiar user experience.
  • CI/CD & Automation: Temporary tokens with strict scoping and automatic rotation minimize risk while enabling automated workflows.
  • Enterprise Deployments: Device code flow combines the security benefits of OAuth with the flexibility to authenticate from any environment, including containers and headless systems.

Getting Started with WorkOS

Ready to implement secure CLI authentication? WorkOS provides battle-tested solutions that handle the complexity of enterprise-grade auth.

Sign up for WorkOS today, and start selling to enterprise customers tomorrow.
