April 4, 2025

Securing your app with Risk-Based Authentication and AI

Learn how Risk-Based Authentication (RBA) and AI can transform your app’s security, with best practices and insights to protect against evolving threats.

Let’s face it: not all actions in your app are created equal. Logging in from your couch at 9 AM is probably no big deal, but trying to transfer a massive amount of cash from a new device at 2 AM? Yeah, that’s a red flag. And that’s exactly where Risk-Based Authentication (RBA) comes in. Instead of treating every action the same, RBA assesses the risk of each login or transaction, weighing things like location, device, and user behavior. Think of it as a smart bouncer at the club—letting in the regulars without a second thought but asking for ID if someone looks out of place.

But here’s the kicker: when you pair RBA with artificial intelligence (AI), it gets even smarter. AI can help RBA learn from past behavior, predict potential threats, and improve security in ways that static rules can’t match.

In this article, we’ll dive into Risk-Based Authentication and how it can help you level up your app’s security game. We’ll look at how it works in practice, share best practices, explore how machine learning keeps things fresh, and show you how this dynamic duo can keep your app—and its users—safe from the bad guys.

What is Risk-Based Authentication?

Risk-based authentication (RBA) is a security measure that evaluates the risk associated with a login attempt or transaction and adjusts the level of authentication accordingly. Instead of applying the same authentication requirements (like entering a password) for every action, RBA considers the context of the situation.

It might take into consideration things like:

  • The user's location (are they logging in from a new country or device?)
  • The device being used (is it a known, trusted device?)
  • The time of the login (is the login attempt happening at an unusual time?)
  • The type of action (is the user attempting to make a large financial transaction?)
  • The user’s behavior patterns (are they acting in line with their usual activity?)

If the system detects low risk, the user can continue with minimal friction—typically, just entering a password or PIN.

If the risk is deemed higher, the system may require additional authentication steps, like multifactor authentication (MFA), a biometric scan, or a verification code.

This adaptive approach makes it easier for trusted users to access their accounts while increasing security for potentially risky actions.
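
As a rough illustration, here is what that decision could look like in code. This is a minimal sketch in Python; the factors, weights, and thresholds are made-up assumptions, and a real system would use many more signals:

```python
# Minimal sketch of adaptive authentication. The factors, weights, and thresholds
# below are illustrative assumptions, not a recommended production policy.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool
    known_location: bool
    usual_hours: bool
    high_value_action: bool

def risk_score(ctx: LoginContext) -> int:
    """Add risk for every contextual signal that looks unusual."""
    score = 0
    score += 0 if ctx.known_device else 2
    score += 0 if ctx.known_location else 2
    score += 0 if ctx.usual_hours else 1
    score += 3 if ctx.high_value_action else 0
    return score

def required_authentication(ctx: LoginContext) -> str:
    """Map the score to an authentication requirement."""
    score = risk_score(ctx)
    if score <= 1:
        return "password"            # low friction for a trusted context
    if score <= 4:
        return "password + MFA"      # step up for an unusual context
    return "deny or manual review"   # too risky to allow automatically

# Trusted device and location during usual hours: password only.
print(required_authentication(LoginContext(True, True, True, False)))
# New device, new country, odd hour, large transfer: deny or review.
print(required_authentication(LoginContext(False, False, False, True)))
```

The point is simply that the required authentication step is a function of context rather than a constant.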

RBA in practice: Banking platforms

Online banking platforms are known to use RBA. Checking your balance is not as sensitive an action as changing how much money you can transfer in a single day. For example, the banking app I’m using always asks for biometrics to let me access even the simplest information, but if I want to change my daily transfer limits, I have to verify my identity using more factors, like codes sent to my mobile and email address.

Banks use multiple risk factors to determine whether to require additional authentication for specific actions:

  • User behavior: The system continuously analyzes user behavior over time (frequency of logins, transaction types, amounts, etc.). If a user typically makes small transfers but suddenly tries to transfer a large sum, the system considers this a higher risk.
  • Device fingerprinting: If the user logs in from a new device or one that’s not previously registered or recognized (based on a device fingerprint), this introduces additional risk. The system can then either request additional authentication, such as one-time passcodes (OTP) sent via SMS or email, or initiate biometric authentication (fingerprint/face scan) if supported by the user’s device. Other times, some systems might prefer to send an email informing the user of the action. We have all received emails from Google saying that a new device was used to access our account whenever we use a new phone or laptop.
  • Geolocation and IP address: The system evaluates the geolocation and IP address from which the user is logging in. If the login attempt comes from a different country or high-risk location, the system flags the action as potentially suspicious.
  • Time of day: The system also checks the time of day for login or transaction requests. If the user typically logs in during business hours (9 AM to 5 PM), and suddenly tries to make a transaction at 2 AM, this could be considered unusual.
  • Transaction context: The system not only analyzes the usual user behavior but also the transaction context—specifically the transaction amount and the recipient's details. If the user attempts to send a large amount to a new recipient, or to an account that is not typically used by the user (e.g., a different bank), the system could flag this as high-risk. The system may also integrate machine learning models that look for patterns in the type of transactions, using historical data to predict whether the current transaction is likely to be fraudulent.

After evaluating the risk factors (unusual device, unfamiliar location, time of day, large transaction, etc.), the system categorizes the risk into three levels: Low, Medium, and High. Based on the level of risk, the following actions are taken (a minimal transaction-scoring sketch follows the list):

  • Low Risk (e.g., standard login from a known device and location, small transaction)
    • Grant access with minimal authentication (password only).
  • Medium Risk (e.g., login from a new device, unusual time of access, or transaction to a new recipient)
    • Require multifactor authentication (SMS or Email OTP, biometrics, push notifications, etc.).
    • Proceed with the requested action if the additional authentication is successful.
  • High Risk (e.g., large transaction, login from a foreign country, or multiple failed login attempts)
    • Deny access or escalate to manual review.
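
To make the transaction-context part concrete, here is a small sketch that scores a single transfer against a user’s own history and maps it to those three levels. The thresholds and field names are illustrative assumptions, not a real banking API:

```python
# Illustrative sketch of scoring one transfer against a user's own history and mapping
# it to the Low/Medium/High buckets above. Thresholds and fields are assumptions.
from statistics import mean, stdev

def transaction_risk(amount: float, recipient: str,
                     past_amounts: list[float], known_recipients: set[str]) -> str:
    score = 0
    # An amount far above the user's historical pattern is a strong signal.
    if len(past_amounts) >= 2 and amount > mean(past_amounts) + 3 * stdev(past_amounts):
        score += 3
    # A recipient the user has never paid before adds moderate risk.
    if recipient not in known_recipients:
        score += 2
    if score >= 4:
        return "high"    # deny or escalate to manual review
    if score >= 2:
        return "medium"  # require MFA before proceeding
    return "low"         # password-level authentication is enough

history = [120.0, 80.0, 95.0, 110.0]
print(transaction_risk(5000.0, "new-payee", history, {"landlord", "utility-co"}))  # high
print(transaction_risk(90.0, "landlord", history, {"landlord", "utility-co"}))     # low
```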

If a user fails a certain number of authentication challenges (e.g., 3 failed MFA attempts), the system might take the following actions (sketched below):

  • Temporarily lock the user account and send an email notification with instructions for account recovery.
  • Send an alert to the user about suspicious activity, including a notification via email and/or SMS.
  • Escalate the case to a security team for further investigation, especially if the high-risk action involves financial transactions.
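
A simplified version of that post-failure flow could look like the following; the notification and escalation helpers are hypothetical stand-ins for whatever your platform actually provides:

```python
# Simplified sketch of the post-failure handling. send_email, send_sms, and
# escalate_to_security_team are hypothetical stand-ins for your own notification
# and case-management integrations.
MAX_MFA_ATTEMPTS = 3

def send_email(to: str, subject: str) -> None:
    print(f"email to {to}: {subject}")

def send_sms(to: str, body: str) -> None:
    print(f"sms to {to}: {body}")

def escalate_to_security_team(user_id: str) -> None:
    print(f"escalating user {user_id} for manual review")

def handle_failed_challenge(user: dict, failed_attempts: int, high_risk: bool) -> None:
    if failed_attempts < MAX_MFA_ATTEMPTS:
        return  # let the user try again
    user["locked"] = True  # temporary lock until account recovery completes
    send_email(user["email"], "Account locked: follow the recovery steps")
    send_sms(user["phone"], "Suspicious activity was detected on your account.")
    if high_risk:
        escalate_to_security_team(user["id"])

user = {"id": "u123", "email": "user@example.com", "phone": "+10000000000", "locked": False}
handle_failed_challenge(user, failed_attempts=3, high_risk=True)
```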

Using machine learning for RBA

Adding machine learning (ML) to a Risk-Based Authentication (RBA) system improves its ability to assess risk accurately and adapt over time. ML models learn from past data, spot patterns, and help predict potential threats or behaviors. As the system collects more data, the models get smarter, improving risk evaluations, reducing false positives and negatives, and making authentication decisions more accurate.

Here’s how that works:

  1. First, the system needs to gather various data points that can be used to train machine learning models. These data can include historical patterns of login attempts, transaction history, device information, location information (IP, GPS), transaction information, etc. These features help build a profile of what is "normal" for a user, which forms the basis for risk assessments.
  2. Once data is collected, machine learning models can be trained to recognize patterns of legitimate behavior versus potentially fraudulent or risky behavior. Common ML algorithms used in this context include:
    • Supervised learning: The system can use labeled historical data (e.g., marked as "fraudulent" or "legitimate") to train models. Popular algorithms include decision trees, random forests, logistic regression, and neural networks. These models learn to predict risk based on features like login location, device type, transaction size, etc.
    • Unsupervised learning: In the absence of labeled data, unsupervised models can identify unusual or anomalous patterns by comparing current data to "normal" behavior. Techniques like clustering (e.g., K-means, DBSCAN) or anomaly detection (e.g., Isolation Forest, autoencoders) are used to flag outliers that deviate from the user's typical behavior.
    • Reinforcement learning: This approach allows the system to improve its decision-making over time by receiving feedback on its actions. The model learns to adapt to changes in user behavior by adjusting its risk thresholds based on past experiences.
  3. Once trained, ML models are integrated into the RBA system for real-time decision-making. When a user logs in or performs a transaction, the system compares real-time data (e.g., location, device, time) with the user’s historical profile. If the action deviates from their usual pattern (e.g., a large transfer from an unfamiliar device), the model flags it as risky and triggers additional authentication, like MFA (see the anomaly-detection sketch after this list).
  4. Machine learning models in an RBA system are not static. They continuously improve by learning from new data and feedback. The feedback loop works as follows:
    • As new data comes in (e.g., new transactions, logins, devices), the system can periodically retrain its machine learning models to adapt to changes in user behavior or emerging fraud trends. For instance, if a legitimate user starts regularly using new devices, the model can adjust its risk scoring to accommodate this behavior.
    • The system continuously monitors its performance by comparing actual outcomes (e.g., whether fraud was detected or prevented) with predicted outcomes. If the model incorrectly flags legitimate transactions as high-risk (false positives) or fails to catch fraudulent behavior (false negatives), adjustments can be made to the model to improve accuracy.
    • If a user is challenged with extra authentication steps (e.g., multifactor authentication) but successfully completes them, that information becomes part of the feedback loop. The system learns that the user is legitimate, even in unusual circumstances, and adjusts its risk assessment model to reflect this.
  5. While machine learning models can significantly enhance the RBA system’s effectiveness, ensuring transparency and explainability of model decisions is crucial, especially in industries like banking, healthcare, or e-commerce, where regulatory compliance is important. Some machine learning techniques—such as decision trees or certain forms of explainable AI—can provide clear insights into why a particular transaction or login was flagged as risky. This helps security teams understand and refine model behavior while maintaining user trust.
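
As a small example of the unsupervised approach from step 2, here is a sketch that profiles a user’s historical activity with scikit-learn’s IsolationForest and flags a deviating event. The library is assumed to be installed, and the features (hour of login, distance from the usual location, amount) are purely illustrative:

```python
# Sketch of unsupervised anomaly detection for RBA using scikit-learn (assumed installed).
# Features per event: [hour of login, km from usual location, transaction amount].
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical "normal" activity for one user.
history = np.array([
    [9, 2, 50], [10, 1, 80], [14, 3, 20], [9, 2, 60],
    [11, 1, 40], [15, 4, 70], [10, 2, 30], [13, 3, 55],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(history)

# A 2 AM event from far away with a large amount deviates from the learned profile.
current = np.array([[2, 4000, 9000]])
if model.predict(current)[0] == -1:   # -1 means anomaly
    print("Flag as risky: require additional authentication (e.g., MFA)")
else:
    print("Looks normal: allow with minimal friction")
```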

Avoid algorithmic bias

Algorithmic bias occurs when an algorithm produces unfair results, often due to biased data or design. For example, an RBA system trained mostly on one demographic might unfairly flag users from other groups as high-risk. Tackling this bias is crucial for fair and accurate security.

Here are some strategies that can help with that:

  • The first step in combating algorithmic bias is to ensure training data includes diverse user behaviors, demographics, and contexts to prevent biased patterns.
  • If certain groups or behaviors are underrepresented, consider oversampling these groups during training or segmenting the data to create a more balanced dataset. For example, if younger users are underrepresented, increasing their presence in the dataset ensures that the model considers their behavior when assessing risk.
  • Conduct frequent bias audits and assess system performance across different user groups (see the sketch after this list). This could involve measuring whether certain geographic locations, devices, or user profiles are being flagged more often than others.
  • Make the risk assessment process transparent so users and admins can see how decisions are made. Use explainable AI techniques to surface the factors driving each risk assessment: if the model flags an action as risky, users and administrators should be able to view the factors contributing to the score (IP address, device type, login history, etc.). This avoids the system operating as a "black box", enables better oversight, and makes it possible for users to challenge decisions when they are flagged as high-risk.
  • Algorithmic bias often stems from the perspectives of the developers who create the models. A more diverse development team—comprising individuals from different backgrounds, cultures, and perspectives—can help identify potential biases that others may overlook. When diverse teams are involved, they are more likely to recognize when an algorithm might unfairly target certain groups.
  • Limit the use of sensitive or potentially discriminatory data (e.g., race, gender) unless absolutely necessary.
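
As a starting point for those bias audits, you could compare how often each group of users gets flagged or challenged. The group labels and events below are made up purely for illustration:

```python
# Sketch of a basic bias audit: how often does the system flag users in each group?
from collections import defaultdict

def flag_rates(events: list[dict]) -> dict[str, float]:
    """events: [{"group": ..., "flagged": bool}, ...] -> flag rate per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for e in events:
        totals[e["group"]] += 1
        flagged[e["group"]] += int(e["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

events = [
    {"group": "region_a", "flagged": False}, {"group": "region_a", "flagged": False},
    {"group": "region_a", "flagged": True},  {"group": "region_b", "flagged": True},
    {"group": "region_b", "flagged": True},  {"group": "region_b", "flagged": False},
]
print(flag_rates(events))  # a large gap between groups is a prompt to inspect data and model
```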

AI agents and RBA

In traditional RBA systems, risk assessments are based on human users' behaviors—factors like login patterns, geographic location, device, and interactions with the app or website. However, when AI agents (e.g., bots, or automated systems) interact with the system, they can introduce unique challenges that might confuse or mislead an RBA system.

AI agents can often mimic human behavior with a high degree of sophistication or, in some cases, can engage in actions that are entirely different from typical human patterns. This makes detecting them tricky, and if not properly handled, it could lead to both false positives and false negatives:

  • False positives: A bot performing a legitimate action might be flagged as "high-risk" due to the frequency of its actions or the volume of transactions in a short period. For example, an AI agent that processes bulk orders for a business might trigger red flags in an ecommerce platform's RBA system if it interacts too quickly or at an unusual time compared to typical users.
  • False negatives: If AI agents are designed to act similarly to how legitimate users behave, RBA systems might fail to detect risky activities. For example, a bot that makes legitimate-looking purchases in small quantities over time might appear normal to an RBA system, even though it's an automated agent exploiting vulnerabilities in the ecommerce platform.

The first step is to know what you are dealing with, and for that, it’s essential to integrate advanced bot detection tools alongside RBA. Tools that identify behavior anomalies specific to bots (e.g., mouse movement tracking, speed of interactions, lack of human-like delays) can help differentiate human users from AI agents. Once the system knows whether it’s dealing with human users or AI agents, it can adapt its strategy. For example, e-commerce platforms may set up separate risk-based authentication processes for human users and AI agents or bots that are known to perform legitimate tasks. AI agents could be whitelisted for specific actions but closely monitored for behavior patterns that could suggest fraud.

AI agents interacting with the system could have an entirely separate risk assessment process tailored to the nature of the interactions. For instance, if the AI agent is a customer service bot, its activity could be pre-approved for certain low-risk actions but flagged if it deviates from typical operational patterns.
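
Here is a hypothetical sketch of that split: a crude speed-based bot signal, an allowlist for known agents restricted to specific actions, and the standard RBA flow for everyone else. All names and thresholds are assumptions; real bot detection relies on far richer signals:

```python
# Hypothetical sketch: a crude speed-based bot signal plus an allowlist for known agents.
KNOWN_AGENT_ALLOWLIST = {"inventory-sync-bot", "support-chat-bot"}  # pre-approved agents

def looks_automated(intervals_ms: list[float]) -> bool:
    """Flag sessions whose actions arrive faster than a human plausibly interacts."""
    return bool(intervals_ms) and sum(intervals_ms) / len(intervals_ms) < 50

def assess(session: dict) -> str:
    if looks_automated(session["intervals_ms"]):
        if session.get("agent_id") in KNOWN_AGENT_ALLOWLIST and session["action"] in {"read", "bulk_order"}:
            return "allow (monitored)"    # whitelisted for specific actions, keep watching
        return "deny or escalate"         # unknown automation is treated as high risk
    return "run standard RBA scoring"     # human users go through the normal flow

print(assess({"intervals_ms": [12, 9, 15], "agent_id": "inventory-sync-bot", "action": "bulk_order"}))
print(assess({"intervals_ms": [12, 9, 15], "agent_id": "scraper-42", "action": "checkout"}))
print(assess({"intervals_ms": [800, 1200, 950], "action": "checkout"}))
```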

Best practices for RBA

Below are some best practices that can help you deploy and maintain a successful RBA system:

  • Use historical data to understand normal user behavior, including login patterns, transaction types, and device usage. This baseline will serve as a foundation for determining risk levels. Risk thresholds should not be static; they should adapt over time based on user behavior, trends, and emerging threats. This ensures that the system remains flexible and responsive to changing conditions.
  • Effective RBA systems evaluate a combination of factors (e.g., geolocation, device reputation, user activity history, etc.). Relying on just one factor (like location) can lead to false positives or negatives. I had to leave a bank because it kept denying my transactions made from Mexico, even though I had lived in the country for over a year. Don’t lose customers over this.
  • All authentication attempts, risk assessments, and actions taken should be logged for security and compliance purposes.
  • Monitor logs for unusual behavior patterns that may indicate an emerging security threat (e.g., large numbers of high-risk events in a short period).
  • If authentication fails or access is denied, always send a detailed message to the user, explaining the reason for the action and providing instructions for resolution (e.g., resetting their password or verifying identity).
  • Review and update your policies at least quarterly to keep up with evolving security threats or user behavior trends.
  • Leverage external threat intelligence data sources, such as IP blacklists, device reputation services, or third-party fraud detection systems (a minimal sketch follows this list). Machine learning models can be trained to incorporate this external data, providing additional context to risk assessments. For example, if the user’s IP address is flagged in a known fraud database, the ML model can weigh this external factor heavily in determining risk.
  • If you are using machine learning, use supervised learning for known threats and unsupervised learning to detect unknown or emerging threats. Over time, these models can improve based on user behavior and the success of previous authentication decisions.
  • Avoid unnecessary authentication steps for low-risk users. If a user logs in from a trusted device and location, use a password or PIN only. This reduces friction and improves the user experience.
  • Regularly inform users about safe practices, like recognizing phishing attempts and setting up MFA. Educating users can reduce the chances of fraud and help them understand why additional steps are being taken for their protection.
  • Continuously track the effectiveness of your RBA system by analyzing false positives (legitimate users flagged as risky) and false negatives (fraudulent activities that go undetected). Adjust policies based on performance data to strike a better balance between security and user experience.
  • Provide users with multiple MFA options (e.g., SMS codes, authentication apps, biometrics) so they can choose the most convenient and secure method for their preferences. Keep in mind that while SMS-based MFA is popular, it’s vulnerable to SIM swapping and other attacks. Consider supporting more secure alternatives, like app-based authentication (e.g., Google Authenticator) or hardware security keys.
  • Given the sensitive nature of data being processed (such as behavioral data and transaction history), ensure that all personal data is encrypted and stored securely. Comply with data privacy regulations such as GDPR, CCPA, and industry standards (e.g., PCI DSS for financial institutions). Always inform users about the data being collected for RBA purposes and obtain consent where necessary.
  • Regularly test your RBA system with simulated attacks (e.g., account takeover, phishing) to identify potential vulnerabilities. This helps improve detection capabilities and strengthens your RBA policy over time.
  • Run simulations based on typical user behavior and stress-test your RBA system to evaluate its ability to identify risk under various conditions.
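
As an example of the threat-intelligence point above, here is a tiny sketch that folds an IP reputation hit into a risk score. The blacklist set is a stand-in; in practice you would query a reputation service or load a feed:

```python
# Sketch of weighting an external threat-intelligence hit into the risk score.
# SUSPICIOUS_IPS stands in for a real reputation feed or lookup service.
SUSPICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}  # example addresses from documentation ranges

def score_with_threat_intel(base_score: int, ip: str) -> int:
    # A hit in a known-fraud feed weighs heavily on top of the behavioral score.
    return base_score + (5 if ip in SUSPICIOUS_IPS else 0)

print(score_with_threat_intel(1, "203.0.113.7"))  # 6: likely step-up or deny
print(score_with_threat_intel(1, "192.0.2.10"))   # 1: low risk
```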

Final thoughts

The future of Risk-Based Authentication (RBA) and AI holds a lot of potential, but it’s not all smooth sailing just yet. As threats become more advanced, RBA systems will continue to evolve, offering smarter, more adaptive security measures. However, there are still challenges to address, like the balance between security and user convenience, the risk of algorithmic bias, and the constant need to stay ahead of cybercriminals, who are also getting smarter.

While AI’s ability to learn and adapt could lead to more precise risk assessments, it’s important to remember that AI models aren’t infallible—they can still be tricked, misled, or lack context in certain scenarios. The future of RBA and AI will likely be a continuous cycle of adaptation, requiring careful fine-tuning and constant vigilance.
