It’s getting harder and harder to tell humans and robots apart — at least when it comes to cyberattacks. Over the past few years, we’ve seen a rise in the number of attacks that imitate the behavior of normal human users. Compared to run-of-the-mill cybercrime tactics, these sophisticated attacks are much harder to detect.
Some sophisticated automated scripts disguise themselves by emulating human keystroke patterns, mouse movements and other behavioral indicators. Other attackers go even further, hiring actual human workers to perform tasks like solving CAPTCHAs and other bot challenges. Often, sophisticated attacks probe their targets’ defenses to gather information, then evolve over time to take advantage of the vulnerabilities they find. All of this makes these types of cyber threats even more difficult for companies to mitigate.
Hard-to-detect sophisticated attacks now represent the majority of incidents in several key industries. In 2020, an average of 57% of attacks on retail companies and 64% of attacks on financial services companies were sophisticated.
Mitigating sophisticated attacks is possible, but it requires sophisticated defenses. You need a multi-layered approach built on behavioral biometrics: one that detects attackers probing your defenses, flags human-like scripts, recognizes risky human behavior and evolves along with the threats you’re facing. Here’s how NuData clients do it, broken down by threat type.
1. How to detect bots that don’t act like bots
Standard bot-detection tools are usually designed to detect obvious bot behaviors. For example, a normal human user isn’t physically capable of making a new login attempt every few milliseconds. Even if they were, they’d be unlikely to try hundreds of different credentials from the same IP address or device. By contrast, human-like scripts emulate the ways real human users interact with their computers. By avoiding telltale signs of bot behavior, they often escape detection.
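To make the contrast concrete, here’s a minimal sketch of the kind of rate-based rule that basic bot detection relies on. The window and threshold are illustrative assumptions, not NuData parameters; a human-like script evades this check simply by pacing its attempts and rotating IPs.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60   # assumed sliding window
MAX_ATTEMPTS = 10     # assumed threshold, not a real product setting

attempts = defaultdict(deque)  # ip -> timestamps of recent login attempts

def is_obvious_bot(ip, now=None):
    """Flag an IP that exceeds a simple attempts-per-minute threshold."""
    now = time.time() if now is None else now
    q = attempts[ip]
    q.append(now)
    # Drop attempts that fall outside the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_ATTEMPTS
```

A script making a login attempt every few milliseconds trips this rule immediately; one spreading attempts across thousands of IPs at human speed never does, which is why this check alone isn’t enough.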
Detecting human-like scripts requires a multi-pronged approach that uses behavioral biometric technology. This lets you catch inconsistencies and subtle user patterns that more basic security tools miss. For example, the individual attributes of a sophisticated automated script might seem human-like. But if you take a closer look at how those attributes fit together, you can see they don’t make sense. The data might show the user is:
- Typing on a physical keyboard
- Accessing your website on a smartphone
How many iPhones have you seen attached to a physical keyboard? Exactly. That’s not a real human; it’s a Frankenstein-like artificial construction designed to look human. But you’ll only catch these inconsistencies if you look closely with behavioral tools.
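A simplified version of that kind of cross-check might look like the sketch below. The attribute names and rules are hypothetical, chosen to mirror the keyboard-plus-smartphone example above; they are not NuData’s actual schema.

```python
# Hypothetical consistency check: each signal looks human on its own,
# but certain combinations don't make sense together.
IMPOSSIBLE_COMBINATIONS = [
    ("physical keyboard on a smartphone",
     lambda s: s.get("input_device") == "physical_keyboard"
     and s.get("form_factor") == "smartphone"),
    ("mouse movement on a touch-only device",
     lambda s: s.get("pointer_events") == "mouse"
     and s.get("form_factor") == "smartphone"),
]

def find_inconsistencies(session):
    """Return the names of attribute combinations that don't make sense."""
    return [name for name, rule in IMPOSSIBLE_COMBINATIONS if rule(session)]
```

In practice a behavioral biometrics platform would weigh many more signals than two, but the principle is the same: judge the combination, not each attribute in isolation.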
Case study: Finding the missing 10%
The good news is that with the right tools, it’s possible to detect and mitigate human-like scripts. A top international bank in NuData’s network was struggling with daily account takeover (ATO) attacks that bypassed its basic bot detection solution. Only 0.5-1.5% of the credentials the attacker used were correct, but with the company handling more than 500 million monthly logins, that was enough to affect the bottom line.
To identify these human-like scripts, NuData looked at multiple data points including location, IP address, behavioral signals and passive biometrics, such as the way a user holds their device. In addition to examining individual behavior, we also monitored overall traffic changes in real time. And we compared all of the data points we were collecting to past events recorded by the NuData Trust Consortium.
By analyzing this data, NuData discovered that the bank’s existing automation detection tool was missing 10% of all fraudulent traffic, letting millions of daily fraudulent events go through. Over three months, NuData took care of that missed traffic, mitigating 1.08 billion malicious login attempts that targeted 48 million accounts. The effort enjoyed a 99% success rate, taking the pressure off the bank’s bottom line.
2. Stopping human-driven attacks
Human-like scripts are hard enough to detect. But attackers are quickly adopting an even sneakier tactic: calling in actual humans to help. For a small fee, cybercriminals can hire human farms to perform simple online tasks like solving CAPTCHAs and other bot challenges at scale.
This type of attack is especially popular with attackers targeting high-value accounts holding payment information, loyalty points or other lucrative data that can convert to money. In summer 2020, NuData observed a 350% year-over-year spike in human-driven attacks in financial services.
Fortunately, as is often the case with fraudulent traffic, human-farm traffic behaves differently from normal user traffic, and there are telltale signs. For example, because these workers fill out the same forms hundreds of times a day, their mouse movements can show they’re more familiar with the form layout than the average user is. When inputting personal data like names and addresses, they tend to cut and paste rather than type, as a person familiar with the information would. These signals are subtle, but they make human-driven attacks possible to spot.
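The signals above can be combined into a simple suspicion score, sketched below. The field names, weights and thresholds are assumptions made for illustration, not production values from any real detection system.

```python
def human_farm_score(session):
    """Combine weak human-farm signals into a 0..1 suspicion score."""
    score = 0.0
    # Real users type their own name and address; farm workers
    # handling someone else's data tend to paste it.
    personal_fields = ["name", "address", "postal_code"]
    pasted = sum(1 for f in personal_fields
                 if session.get("input_method", {}).get(f) == "paste")
    score += 0.5 * (pasted / len(personal_fields))
    # Mouse paths that go straight to each field suggest heavy
    # familiarity with the layout (hundreds of fills per day).
    if session.get("mouse_path_efficiency", 0.0) > 0.95:
        score += 0.5
    return score
```

No single signal is conclusive on its own; it’s the accumulation of several subtle tells that separates a farm worker from a legitimate customer.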
[Image: Signs of human farm behavior]
Case study: How to detect human farms
Human-driven attacks can be particularly powerful when deployed in combination with other attack strategies. One NuData client was targeted with just such a combination of scripted and human work, which we call a hybrid attack, aimed at both the login and wallet functionalities. This attack was particularly tricky because the automated script was designed to adapt to each result, helping the attacker refine their strategy.
Each time a login attempt failed, the script logged whether the failure was due to incorrect credentials or a technical issue, such as being caught by a bot mitigation tool. The attacker would then reuse credentials that had been rejected due to a technical issue, increasing the credential success rate over time.
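The adaptive loop described above can be sketched in Python from the attacker’s side. The `attempt_login` callback and the result labels are hypothetical stand-ins for the script’s actual request logic; the point is the triage: only credentials rejected for technical reasons get reused.

```python
def triage(credentials, attempt_login):
    """Sort credentials by failure reason, as the adaptive script did.

    attempt_login(cred) is assumed to return one of:
    "success", "bad_credentials", or "technical_error".
    """
    retry_later, discard = [], []
    for cred in credentials:
        result = attempt_login(cred)
        if result == "technical_error":
            # Blocked by a mitigation tool, not proven wrong:
            # the credential may still be valid, so keep it for retry.
            retry_later.append(cred)
        elif result == "bad_credentials":
            # Confirmed wrong; never waste another attempt on it.
            discard.append(cred)
    return retry_later, discard
```

Each retry pass shrinks the candidate list toward credentials that are likely valid, which is why the attacker’s success rate climbed over time.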
[Image: The process of a sophisticated attack at login and checkout]
Realizing that this login traffic was automated, NuData intercepted it with a bot challenge. The script submitted the challenge to a human farm through an online CAPTCHA-solving service — making this attack a hybrid of human and automated strategies.
This hybrid tactic was sneaky — but not quite sneaky enough. By looking at factors like mouse movement, typing cadence, velocity of form submissions and more, NuData identified which challenges were solved by human workers. We were able to mitigate 99.9% of the attacks overall.
Alright, this is getting long, but we have more for you to dive into. Continue reading about more mitigated threats by attack trend here. Next up: spotting probing attacks.
Find more recent threat trends in our bi-annual report, Fraud Risk at a Glance.