I am A.I.

To Catch a Fraudster

Fraud prevention

There are growing concerns that automation technology destroys jobs. Discussed less often are the cases where humans and robots work well together or even complement each other. Case in point: fraud detection and prevention.

Virtual crime isn’t new, but it’s growing more pervasive each year. Online fraud attempts rose 22 percent during the 2017 holiday season compared to 2016, according to ACI Worldwide benchmark data. LexisNexis’ “True Cost of Fraud” report shows that every $100 of fraud costs businesses a staggering $240.

Catching fraudsters is an ongoing game of whack-a-mole, which is why organizations are increasingly turning to both automated and human investigators. When it comes to fighting fraud, bots and humans make a formidable team.

The first line of defense
Cybercriminals’ attacks have evolved in recent years: they now harness big data, distributed networks, and other resources to exploit vulnerabilities, mounting massive attacks that affect millions of people in seconds. The problem with traditional protection methods is that humans can’t process attacks at that scale or keep up with changing fraud patterns. They also tire of repetitive tasks.

Technology must be the first line of defense to detect fraud. Fraud detection filters, algorithms, and knowledge management databases power bots that react with agility (think fractions of a second) and flag millions of attacks and suspicious activities. In fact, bots will do most of the preliminary fraud prevention work. 

Many organizations use rules and reputation lists as the first level of fraud detection. For instance, retail companies face the challenge of preventing fraud before the transaction is complete. With millions of transactions taking place daily, bots help evaluate behavior patterns and other data points to validate transactions in real time. 

A retailer may have a rule that states, “If a customer adds four credit cards to an online shopping account in less than 24 hours, flag the account.” Reputation lists, meanwhile, catalog specific IP addresses, devices, and other characteristics associated with fraudulent activity.
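As a rough illustration, here is how a velocity rule and a reputation-list check might look in code. This is a minimal sketch in Python; the thresholds, field names, and list entries are hypothetical, not drawn from any particular vendor’s system.

    from datetime import datetime, timedelta

    # Hypothetical thresholds -- a real system would tune these.
    MAX_NEW_CARDS = 4
    WINDOW = timedelta(hours=24)

    # Reputation lists: IPs and device fingerprints previously tied to fraud.
    BLOCKED_IPS = {"203.0.113.7", "198.51.100.23"}
    BLOCKED_DEVICES = {"device-fp-9f3a"}

    def flag_account(account, now=None):
        """Return the reasons an account looks suspicious (empty list = clean)."""
        now = now or datetime.utcnow()
        reasons = []

        # Velocity rule: four or more cards added within 24 hours.
        recent = [t for t in account["card_added_times"] if now - t <= WINDOW]
        if len(recent) >= MAX_NEW_CARDS:
            reasons.append(f"{len(recent)} cards added in 24h")

        # Reputation-list checks.
        if account["last_ip"] in BLOCKED_IPS:
            reasons.append("IP on reputation list")
        if account["device_id"] in BLOCKED_DEVICES:
            reasons.append("device fingerprint on reputation list")

        return reasons

Rules like these are cheap enough to run against every transaction in real time, which is why they typically sit in front of heavier models and human review.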

The human factor
Automated solutions are highly effective at detecting fraudulent activity, but they’re not perfect. Humans are needed for two reasons: 1) A bot could inadvertently flag genuine transactions as suspicious activity, and 2) there are thousands of cases every day that are too complex, new, or unwieldy for bots to accurately handle.

Human intervention is an essential part of fighting fraud without undermining the customer experience. Below are a few scenarios that are still difficult to automate:

  • Cross-platform research: Investigations that must move between an in-house fraud detection tool and third-party tools, where integrations are limited or missing.
  • Language: Natural language processing has come a long way, but conversational nuances still trip up bots.
  • Context or making sense of the story: Complex acts of fraud are spread across time, space, and/or platforms. A bot may catch some of the factors, but a human is needed to tie the pieces together. 
  • Complex fraud analysis: It’s easy for a bot to find anomalies, such as names that don’t match, but hard to determine whether the mismatch has an innocent explanation (e.g., a genuine user’s nickname).
  • Gray areas: Too little information is available for a bot to decide confidently whether an act is fraudulent (see the triage sketch after this list).
  • Visual understanding: Advances are being made in image recognition, but a bot may not be able to tell if an image is scraped or doesn’t match the rest of the data.
  • Text parsing: A bot may not be able to recognize text that’s structured in an unfamiliar way. 
  • New or evolving fraud: Variants or new combinations of fraudulent practices that a bot hasn’t been trained to look for can go undetected.

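To make the handoff concrete, below is a minimal, hypothetical sketch of the confidence-threshold triage pattern: the bot auto-approves and auto-blocks the clear-cut cases and routes everything in between (the gray areas above) to a human queue. The thresholds and field names are illustrative assumptions, not any particular vendor’s API.

    from dataclasses import dataclass

    # Illustrative thresholds; real teams tune these against
    # false-positive and fraud-loss targets.
    AUTO_APPROVE_BELOW = 0.10   # very likely genuine
    AUTO_BLOCK_ABOVE = 0.95     # very likely fraud

    @dataclass
    class Transaction:
        txn_id: str
        fraud_score: float  # 0.0 (genuine) to 1.0 (fraud), from the bot's model

    def triage(txn):
        """Decide whether the bot acts alone or hands the case to a human."""
        if txn.fraud_score < AUTO_APPROVE_BELOW:
            return "approve"        # bot handles it
        if txn.fraud_score > AUTO_BLOCK_ABOVE:
            return "block"          # bot handles it
        return "human_review"       # gray area: queue for an investigator

Everything routed to human review is exactly where the scenarios above live: ambiguous context, nicknames, unfamiliar text structures, and novel fraud patterns.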

More (analytical) humans wanted
As automated technology becomes better at flagging suspicious activities, the role of human investigators must also evolve. People with an analytical and creative mindset are needed to examine unusual and/or complex instances of potential fraud. For example, a card-not-present purchase of a one-way flight to Hawaii departing within the hour was flagged as unusual card activity. It turned out to be a genuine transaction: the cardholder had given his son permission to buy the ticket for a family emergency.

Automatically canceling a suspicious transaction like this would have angered the customer and damaged the relationship. Instead, a fraud associate contacted the cardholder and confirmed the purchase was legitimate. Fraudsters attempt similar purchases every day, and it’s up to associates to quickly analyze the available information and choose the best course of action.
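One plausible way a system could surface a case like this for review rather than auto-cancel it is to combine weighted signals into a single score and let mid-range scores fall into the human-review band from the earlier sketch. The signal names and weights here are invented for illustration.

    # Hypothetical weighted signals for the flight example above.
    SIGNALS = {
        "card_not_present": 0.30,
        "departs_within_hour": 0.25,
        "one_way_ticket": 0.15,
        "first_purchase_with_merchant": 0.20,
    }

    def risk_score(fired):
        """Sum the weights of the signals that fired, capped at 1.0."""
        return min(1.0, sum(w for name, w in SIGNALS.items() if name in fired))

    score = risk_score({"card_not_present", "departs_within_hour", "one_way_ticket"})
    # score == 0.70: too suspicious to auto-approve, not certain enough to
    # auto-block, so the case goes to a fraud associate, who calls the cardholder.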

As repeatable, rules-based tasks are increasingly performed by automated technology, human skills like analysis, critical thinking, and curiosity will be more essential than ever. Bots can free human investigators from monotonous security tasks, allowing them to focus on the complicated, nuanced decisions that computers aren’t capable of making.

Instead of mindlessly checking boxes off a list, fraud investigators now ensure that the automated systems are running smoothly while supplying the human common sense those systems lack. Entrusting employees with greater responsibility also creates more meaningful and fulfilling careers.

Constant vigilance  
These changes are impossible without the right policies and procedures to support a continuous cycle of prioritizing, analyzing, investigating, and resolving issues. The massive Equifax data breach, for example, stemmed from a failure to patch a known vulnerability in an outdated software component, which in turn traced back to weak procedures for acting on such updates. Every company should regularly assess its data protection policies and ensure that it has an incident response plan in place.

Even an advanced fraud prevention system that leverages bots and humans must be regularly monitored and updated. Knowledge management tools and databases need continuous enhancement, and fraud training modules should be reviewed regularly so that investigators stay current on the latest fraud schemes and prevention tactics.

Recruitment must also evolve to find the right candidates, who are already in short supply. The International Information System Security Certification Consortium predicts that the number of unfilled cybersecurity jobs will rise to 1.8 million over the next five years, a 20 percent increase from 2015 estimates.

The shortage of qualified fraud investigators makes it even more critical that companies fully leverage the strengths of both automated technology and humans in detecting and combating fraud. 

Conclusion
There’s no question that bots can perform some jobs better than humans. Automated systems can work faster, longer, and more consistently; identify discrepancies across data sets that a human might miss; and review incidents around the clock.

Humans, meanwhile, are far better than robots at adapting to unpredictable or new situations and interpreting nuanced signals. Leaving simpler tasks to robots enables human investigators to use those characteristics to crack potential fraud cases that are beyond a bot’s capabilities.  

And while there continue to be heated discussions about the impact of automated technology on the labor market, the fraud prevention industry’s approach to combining human workers with automated technology offers a template for creating jobs that fit the strengths—rather than the weaknesses—of the people or machines that fill them.