Failing to know your customers is expensive. Regulators continue to push banks to develop stronger know-your-customer (KYC) procedures, and they are prepared to hit banks hard in the pocket. Deutsche Bank's $630 million fine for failing to prevent $10bn of Russian money laundering shows just how crippling the punishments can be. On top of the threat of fines, banks face direct losses from financial crime: Bangladesh Bank took an $81 million hit in 2016 when fraudulent transactions were pushed through the SWIFT network.
Firms are looking to new technology to strengthen their defences and avoid being hit with hefty fines. Machine Learning (ML) is featuring heavily in this push — AI algorithms can spot data anomalies and flag them for human attention.
However, these algorithms are not as intelligent as their proponents often claim. One head of financial crime cited a fraud algorithm that flagged every transaction in excess of £10,000 for human investigation. Predictably, a later review found a large number of transactions at exactly £9,999: fraudsters had learned the threshold and simply stayed beneath it, an unintended consequence the algorithm could not anticipate.
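A minimal sketch makes the failure mode concrete. The rule and amounts below are hypothetical illustrations, not any bank's actual system: a fraudster who knows the threshold simply splits a large transfer into sub-threshold chunks (known as "structuring"), and nothing gets flagged.

```python
THRESHOLD = 10_000  # flag anything strictly above £10,000

def flag_for_review(amount: float) -> bool:
    """Naive rule: flag transactions in excess of the threshold."""
    return amount > THRESHOLD

def structure(total: float, limit: float = 9_999) -> list[float]:
    """Split a large transfer into chunks that each stay under the limit."""
    chunks = []
    remaining = total
    while remaining > 0:
        chunk = min(limit, remaining)
        chunks.append(chunk)
        remaining -= chunk
    return chunks

payments = structure(25_000)
print(payments)                                   # [9999, 9999, 5002]
print(any(flag_for_review(p) for p in payments))  # False: nothing is flagged
```

A single £25,000 transfer would have been caught; three smaller payments sail straight through, which is exactly the pattern the financial crime head described.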
Humans continue to outsmart the machines, at least for now…
Simple rules which don't take into account the behaviour of the participants will always fail to fully capture risk. It is impossible to introduce a policy into a complex adaptive system without altering the incentives and behaviour of the players in that system. The cops and robbers remain locked in an arms race.
This is where Machine Learning reaches its limits: because it is rooted in analysis of the past, it can say nothing about the behavioural changes a new policy will induce in the future.
Simulation modeling, and in particular agent-based modeling, is the dominant modeling paradigm for studying complex adaptive systems. These models explicitly account for the behavior of the individuals that comprise the system. These agents can adapt to changes in the system — taking on new behaviours as the environment they live in alters.
Imagine a very simple agent-based model with two types of agent: fraudsters attempting to move money through the bank undetected, and the bank's detection system trying to catch them.
We could codify into the agents rules for their behaviour based on data observed from the field. This could involve creating different types of agents pursuing a range of fraudulent strategies observed in the real world. We can then simulate millions of examples of agents attempting fraudulent activity, testing our fraud detection methods to destruction. Are there certain strategies against which our policies are weak? Are there agents that circumvent our detection strategies with ease? How many agents are placing transactions of £9,999 rather than £10,000?
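The simulation described above can be sketched in a few lines. Everything here is a hypothetical toy, with invented population sizes, spending distributions, and strategies: honest agents draw amounts from ordinary spending behaviour, while a "threshold dodger" agent type stays just under the known limit. We can then measure how a simple £10,000 rule performs against the synthetic population.

```python
import random

random.seed(0)
THRESHOLD = 10_000

class HonestAgent:
    def transact(self) -> float:
        # typical customer spending: log-normal, long right tail
        return random.lognormvariate(7, 1.2)

class ThresholdDodger:
    def transact(self) -> float:
        return 9_999  # stays just under the known limit

# 900 honest customers, 100 fraudsters pursuing the dodging strategy
agents = [HonestAgent() for _ in range(900)] + \
         [ThresholdDodger() for _ in range(100)]
transactions = [(isinstance(a, ThresholdDodger), a.transact()) for a in agents]

flagged = [(fraud, amt) for fraud, amt in transactions if amt > THRESHOLD]
caught = sum(1 for fraud, _ in flagged if fraud)
print(f"flagged: {len(flagged)}, of which actual fraud: {caught}")
```

Against this population the rule is worse than useless: every flag it raises is an honest customer with an unusually large purchase, while all 100 dodgers slip through at £9,999.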
The synthetic data these simulations generate can be used to develop better fraud detection strategies than simply relying on what has been caught and observed in the past. Beyond being forward-looking, this approach also removes the need to rely on sensitive customer data.
Looking further ahead, agent-based simulation techniques could even be enhanced with intelligent agents that are 'rewarded' for successfully committing fraud and 'punished' for being caught. This is a neat application of Deep Reinforcement Learning, currently one of the most promising techniques in the field of Artificial Intelligence. Such agents would quickly learn, within the simulated environment, to design new fraudulent strategies (perhaps ones no human has yet devised), putting banks on the front foot in the fight against financial crime.
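A toy version of this idea fits in a short tabular sketch; a production system would use deep reinforcement learning, but the reward loop is the same. All the numbers below are hypothetical: a simulated fraudster repeatedly picks a transaction amount, earns a reward proportional to the money that gets through undetected, and is penalised when the £10,000 threshold detector flags it. Without being told the threshold, it learns to favour £9,999.

```python
import random

random.seed(1)
ACTIONS = [5_000, 9_999, 10_001, 15_000]  # candidate transaction amounts
THRESHOLD = 10_000

def detector_flags(amount: float) -> bool:
    return amount > THRESHOLD

q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
alpha, epsilon = 0.1, 0.1      # learning rate, exploration rate

for _ in range(5_000):
    # epsilon-greedy: mostly exploit the best-known amount, sometimes explore
    if random.random() < epsilon:
        amount = random.choice(ACTIONS)
    else:
        amount = max(q, key=q.get)
    # 'reward' for moving money undetected, 'punishment' for being caught
    reward = amount / 10_000 if not detector_flags(amount) else -1.0
    q[amount] += alpha * (reward - q[amount])

print(max(q, key=q.get))  # the learned strategy converges on 9999
```

The agent rediscovers the £9,999 trick purely from reward signals, which is the point: run against a bank's real detection rules in simulation, agents like this surface the weaknesses before real fraudsters do.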
Giving banks the ability to outsmart their own detection strategies would give them a head start on the fraudsters, and perhaps even let them win the arms race once and for all.