It is no secret that account takeover has reinvented itself several times in the last decade, becoming more effectively automated and harder to differentiate from real users. Today, we are witnessing the ultimate twist: a backward evolution to once again use humans as part of automated attacks.
Just as financial security managers become more innovative to protect their businesses, so do fraudsters. The NuData-sponsored Aite report takes a peek at how account takeover (ATO) has evolved into a combination of automation and human work, making it more costly, but potentially more successful.
Not that long ago, account takeovers were either deployed manually or they simply used scripts to test a handful of username and password combinations. ATO attack patterns and their negative impacts were limited in scope because the information required to commit the crime was expensive, and the resources were challenging to orchestrate relative to payment-instrument fraud, such as card and check fraud.
But as more user information became available – from breaches and other system vulnerabilities – and security tools got better at stopping clunky scripts, account takeover attacks evolved to cast wider nets of victims and fool security tools.
Another factor that Aite analyst Trace Fooshée highlights in the Aite report, Trends in Account Takeover Fraud for 2020 and Beyond, is the move to EMV. These chip-secured cards made fraudulent in-person transactions costlier for bad actors. As a result, fraudsters started to move to the then-emerging eCommerce space.
Later, the rapid move to digital payments and person-to-person (P2P) payments influenced the rise of account takeover online. These platforms are convenient for users to manage payments but, when not thoroughly secured, can attract bad actors looking for new entry points for their attacks.
The path to industrialized ATO attacks
In the report, Fooshée writes about a period of time when “banks began to see significant increases in online fraud claimants reporting that sizeable portions of their deposit accounts had been lost to unauthorized ACH payments. The attackers were using conventional ATO tactics that were similar to garden-variety online fraud attack patterns except in one important regard: the scale of attack volume.”
The “garden variety” refers to the standard process of accumulating stolen identities, targeting the victim’s credentials, gaining access to the account, and overcoming the bank’s security tools. Of course, the fraudulently acquired money would have to go somewhere, and the recipient accounts were often a weak link, as they could be traced back to the account creator.
On the victim side, the fraudsters would need the credentials and an automated means of first testing them to find the correct ones. Today this process is automated and can be executed by anyone who purchases credential-testing software such as Sentry MBA.
Sentry MBA dashboard
There is little doubt that automation tools such as Sentry MBA have had a profound impact on fraudsters’ ability to scale up their attacks on victims’ accounts. These tools also let fraudsters open large-scale inventories of mule accounts used to support the movement of unauthorized payments, including P2P, conventional payments such as ACH, and even wire.
Human-looking ATO attacks
The latest enhancement to bot technology is using sophisticated scripts that attempt to emulate human behavior, such as pretending to type information into fields rather than pasting entire fields in rapid succession.
These sophisticated attacks were trending up in 2019. In fact, in the first half of 2020, 96% of attacks against a financial institution in the NuData network were sophisticated (human-looking). This behavior can fool basic bot-detection solutions, stressing the need for FIs to re-examine their online user verification processes.
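As an illustration of the kind of input-anomaly check this implies (not NuData’s actual detection logic; the thresholds below are invented for the example), the difference between a pasted value, a script “typing” on a timer, and genuine human typing can be sketched from keystroke timing alone:

```python
from statistics import mean, pstdev

def classify_input(key_intervals_ms):
    """Illustrative heuristic: label a form-fill event from the gaps
    between keystrokes, in milliseconds.

    - No intervals at all: the value appeared in one action (a paste).
    - Very fast or very uniform intervals: a script faking keystrokes.
    - Irregular, human-scale intervals: plausibly a real user typing.
    """
    if not key_intervals_ms:
        return "paste"
    avg = mean(key_intervals_ms)
    spread = pstdev(key_intervals_ms)
    # Scripts that emulate typing often fire keys at a near-constant rate;
    # human typing shows natural jitter between keystrokes.
    if avg < 30 or spread < 5:
        return "scripted-typing"
    return "human-typing"

print(classify_input([]))                       # a pasted credential
print(classify_input([10, 10, 11, 10, 10]))     # a script on a 10 ms timer
print(classify_input([120, 95, 240, 80, 310]))  # uneven human rhythm
```

A real system would of course combine many such features rather than rely on a single two-threshold rule, but the sketch shows why pasting an entire field in rapid succession stands out from typing.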
Regression: Humans replacing bots?
At NuData, the ultimate twist we see is the regression to manual work when deploying ATO attacks. We are seeing more attacks that combine scripts with human work to increase the success rate, or at least attempt to. This brings attacks back to a higher price point (paying workers is more expensive than running a script) but can increase success rates as well.
The report cites an example from NuData, one of the many times our machine learning models have flagged subtle, anomalous online behaviors. A large group of events was initially tagged as high risk because of input anomalies (how data is entered into a web form, whether by typing, pasting, and so on) and marked as bot activity.
Emergence of behavioral features related to input anomalies
This traffic was revealed by various telltale signs, including the velocity of the attempts, the failure rate (the rate of login attempts with wrong credentials), the language settings of the device, and the geolocation of the device’s IP address. Combined, these signals created a user profile inconsistent with the legitimate users of the targeted accounts. NuData triggered a bot challenge to mitigate the attack.
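A minimal sketch of how such signals might be combined into a risk decision follows. The field names, thresholds, and weights are invented for illustration and are not taken from NuData’s models, which learn these relationships from labeled traffic rather than hand-written rules:

```python
def risk_score(session):
    """Toy rule-based scorer over the kinds of signals described above."""
    score = 0
    # Many login attempts in a short window suggests automation.
    if session["attempts_per_minute"] > 10:
        score += 2
    # A high share of failed logins is typical of credential testing.
    if session["failure_rate"] > 0.8:
        score += 2
    # Device language inconsistent with the account holder's history.
    if session["device_language"] not in session["usual_languages"]:
        score += 1
    # IP geolocation far from anywhere previously seen for the account.
    if session["geo_mismatch"]:
        score += 1
    return score

# A session that trips every signal at once:
session = {
    "attempts_per_minute": 40,
    "failure_rate": 0.95,
    "device_language": "ru-RU",
    "usual_languages": ["en-US"],
    "geo_mismatch": True,
}
flag_for_bot_challenge = risk_score(session) >= 4
print(risk_score(session), flag_for_bot_challenge)
```

The point is the combination: any one of these signals alone has benign explanations (travel, a new device), but together they form a profile inconsistent with the legitimate account holder.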
This is where it gets interesting. A surprisingly high proportion of those bot challenges were solved by the supposed bot. Although there is software available to solve challenges such as CAPTCHAs, this didn’t look like a CAPTCHA solved by software.
Once the CAPTCHA was presented, the attack script was redirected to request a human to solve it. Despite these scripts’ efforts to bypass security tools, the back-and-forth between scripts and humans exposes patterns that are far from what a normal user would do. NuData’s behavioral security tools detected these threats before it was too late.
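One way to picture why this handoff stands out (a hypothetical check, with invented names and thresholds, not NuData’s implementation): a session that navigates at machine speed but then takes human-scale time on the challenge is behaving like neither a pure bot nor a normal user.

```python
def looks_like_handoff(nav_latencies_s, challenge_solve_s):
    """Flag the script-plus-human pattern described above.

    nav_latencies_s: seconds between page steps before the challenge.
    challenge_solve_s: seconds taken to solve the bot challenge.
    """
    # Sub-second page steps throughout the session suggest a script.
    machine_fast = all(t < 0.5 for t in nav_latencies_s)
    # Tens of seconds on the challenge suggests a person was brought in.
    human_slow = challenge_solve_s > 8
    return machine_fast and human_slow

# A script that fills the login form in milliseconds, then "thinks"
# for 25 seconds when the CAPTCHA appears:
print(looks_like_handoff([0.05, 0.04, 0.06], 25.0))
```

A genuine user shows human-scale timing on both sides of the challenge, so the contrast itself is the telltale.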
The implications of sophisticated ATO
Another problem with these sophisticated, human-looking ATO attacks is the scale. Some banks have been struck with attacks 10 times larger than they’d previously experienced. As a result, 18 of the top 40 U.S. banks the Aite Group surveyed were in the process of deploying “transformative initiatives” to supplement current digital fraud controls, and 26% of the surveyed group were implementing behavioral biometric solutions.
We can’t forget the traumatic effects fraud has on the public. Many consumers consider transactional credit fraud a routine risk and hold merchants responsible. They are much less forgiving when their bank accounts are breached. Smaller banks may take up to six weeks to resolve account-level fraud, often costing them a valuable customer relationship and damaging their brand.
How to protect your FI from evolving ATO
Just as a password was mandatory some decades ago, today, behavioral tools are a requirement to protect accounts from the highly evolved account takeover attacks we’ve covered in this article.
These tools don’t replace device-based or bot-detection tools; in fact, they can complement them by sharing information with each other to make each tool more effective. The latest ATO trends discussed in this article and the report share one common weakness: they are still machines behaving like machines. Behavioral tools are trained to find anomalies when what seems to be a human is not quite so.
Another way to increase security is by educating your customers about good password and username hygiene. While this seems very simple, Fooshée writes that banks often focus more on revenue-generating conversations and don’t consider the impact of “the termination of thousands of client relationships in the span of just a few weeks… This recent trend has served as a wake-up call to some and a reminder to many of the compelling return on the relatively small investment of proactively training clients on good security hygiene.” Yet less than half of the bankers he surveyed have a formal program to communicate with their customers about security.
As automation grows, so will FIs’ need for effective technology. Learn more about ATO and how to mitigate losses by downloading Trends in Account Takeover Fraud for 2020 and Beyond.