The Human Side of Fraud

In the early days of online fraud, a typical scenario was a single bad actor trying to take advantage of the system, attempting fraudulent transactions one after another until they were detected. As soon as one scam stopped working, they would change targets, repeating the pattern until they were finally forced to change tactics.

While some scammers still stick to individual ploys that trigger chargebacks or use stolen credit cards, others took fraud to the next level by automating it. Why try one account when you can try hundreds? Automation brought scale, and with it new strategies: password-cracking programs broke into less secure accounts, while brute-force measures overloaded systems with more requests than they could process, exposing holes.

In response, fraud prevention and security teams created ever tighter rules to crack down on whatever tactic hackers had used last. The result? A byzantine system of rules that needed more human oversight to manage. One trick in particular was to create accounts en masse and let them sit until they passed artificial benchmarks for trust – for example, leaving an account untouched for six months before using it. The rule worked, until it didn’t.
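To see why a static trust benchmark like the six-month rule fails, consider this minimal sketch. The rule name, threshold, and function are illustrative assumptions, not any real vendor's logic: an account that simply waits out the clock passes the check with no further effort.

```python
from datetime import datetime, timedelta

# Hypothetical static trust rule: age alone confers trust.
# The 180-day threshold mirrors the six-month benchmark described above.
TRUST_AGE = timedelta(days=180)

def is_trusted(account_created: datetime, now: datetime) -> bool:
    """A static rule: any account old enough is considered trustworthy."""
    return now - account_created >= TRUST_AGE

# A fraudster who registers accounts in bulk and simply waits
# sails past this check:
created = datetime(2020, 1, 1)
print(is_trusted(created, datetime(2020, 3, 1)))  # too young: False
print(is_trusted(created, datetime(2020, 8, 1)))  # aged out: True
```

The weakness is that the rule measures time, not behavior – exactly the gap the mass-created dormant accounts exploit.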

Companies have been forced to take the next step in security technology to detect these automated attacks – and some are succeeding. That success has forced hackers to evolve again. If one-by-one transactions are too slow and automated attempts are easier to detect than ever before, hackers had to find a sweet spot between the two. How do you get volume without automation?

Enter the human (or click) farm.

Instead of a lone hacker or a machine controlling thousands of bots, a human farm is literally what it sounds like – teams of real people sitting down and doing all the things a human being would. This can be anything from clicking on web ads to generate false revenue, to creating fake social media accounts that boost follower counts, to gold farming – collecting digital wealth in games for resale. But human farms are also used to create accounts that appear legitimate.

Often called low-velocity attacks, these are harder to pinpoint even with robust systems in place. These coordinated teams act human enough to pass complicated rules systems and don’t operate at the speeds that would trigger an automated attack alert. Those fake accounts may be used right away, or allowed to linger until the time is right. And it works.
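The evasion works because velocity rules count events inside a time window. Here is a minimal sketch, with assumed thresholds and names, of why a human farm spreading the same workload over hours slips under a rule that easily catches a bot:

```python
from collections import deque
from datetime import datetime, timedelta

# Illustrative velocity rule: flag a source that performs more than
# MAX_EVENTS actions inside a sliding WINDOW. Both values are assumptions.
WINDOW = timedelta(minutes=10)
MAX_EVENTS = 20

def is_velocity_alert(timestamps: list[datetime]) -> bool:
    """Return True if any sliding window contains too many events."""
    recent: deque[datetime] = deque()
    for t in sorted(timestamps):
        recent.append(t)
        # Drop events that have fallen out of the window.
        while t - recent[0] > WINDOW:
            recent.popleft()
        if len(recent) > MAX_EVENTS:
            return True
    return False

start = datetime(2020, 1, 1)
# A bot creating 100 accounts in under two minutes trips the rule;
# a human farm spreading 100 signups 15 minutes apart does not.
bot = [start + timedelta(seconds=i) for i in range(100)]
farm = [start + timedelta(minutes=15 * i) for i in range(100)]
print(is_velocity_alert(bot))   # True
print(is_velocity_alert(farm))  # False
```

No matter how the window and threshold are tuned, an attacker willing to work slowly enough stays below them – which is the whole point of paying humans to do it.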

But like any job, these human operators aren’t creating the accounts for personal use, and there are unmistakable tells left behind in the creation process – tells that only User Behavior Analytics can tease out as suspicious.
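One intuition behind such tells: a single farm operator creating many "different" accounts types with the same rhythm every time. The sketch below is purely illustrative – the numbers, the similarity test, and all names are assumptions, not NuData's actual model – but it shows how a behavioral fingerprint can link signups that look unrelated on paper.

```python
import statistics

def cadence_profile(key_intervals_ms: list[float]) -> tuple[float, float]:
    """Summarize typing rhythm as (mean, stdev) of inter-key intervals."""
    return statistics.mean(key_intervals_ms), statistics.stdev(key_intervals_ms)

def same_operator(a: tuple[float, float], b: tuple[float, float],
                  tol: float = 10.0) -> bool:
    """Crude check: two signups with near-identical rhythm look linked."""
    return abs(a[0] - b[0]) < tol and abs(a[1] - b[1]) < tol

signup_1 = cadence_profile([110, 95, 120, 105, 98])
signup_2 = cadence_profile([112, 97, 118, 103, 100])   # same operator, new "account"
signup_3 = cadence_profile([220, 180, 260, 150, 300])  # a genuinely different user

print(same_operator(signup_1, signup_2))  # True
print(same_operator(signup_1, signup_3))  # False
```

Real behavioral analytics draws on far richer signals than typing cadence, but the principle is the same: humans can fool a rules engine, yet they cannot stop being themselves.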

User Behavior Analytics not only protects every step from account creation through checkout, but also reveals fraud tactics that until recently went undetected due to their small scale and slower speed. Check out this article, Unspoofable Dimensions to the Identity Process, to find out more about these kinds of attacks and how NuData offers better, adaptive fraud protection.