By Deloitte Southeast Asia

Despite substantial investments in detection, prevention, and deterrence capabilities, financial crime remains a trillion-dollar problem and one of the top risks facing the financial services industry (FSI) and society today. Criminals are becoming more sophisticated in their use of technology, identifying and exploiting flaws in financial systems and leveraging emerging technologies such as new payment platforms and cryptocurrencies to conduct complex, multilayered transactions that are increasingly difficult to detect and trace.

However, technology does not only shape how crime takes place; it can be used to fight crime too. Digitisation, accelerated by the impact of Covid-19, is changing the types of financial crime committed and the way law enforcement and regulators seek to detect them. For example, traditional cash flow metrics and physical document verification controls are becoming increasingly irrelevant to digital transactions. For businesses to see the greatest benefit of technology in the fight against financial crime, they will need to embrace it throughout the customer life cycle in an integrated fashion. However, different organisations are at different stages of technology adoption, with different needs and budgets for such projects, especially since technology solutions often rely heavily on the ability to integrate efficiently with existing systems and require complete and accurate data.

AI/ML Working Together with Traditional Systems

Artificial intelligence (AI) refers to machines that can mimic human cognition and take on tasks that require relatively complex reasoning and decision-making. AI can help to automate business processes, detect patterns in criminal behaviour, generate insights, and engage customers and employees through routine communications. The technology is being used to enhance transaction monitoring (TM) and name screening systems and processes, making them faster and more accurate and producing better anti-money laundering (AML) data, which allows an organisation to conduct thorough risk assessments.

Machine learning (ML) is a subset of AI in which a model continually improves through exposure to data, allowing it to capture the subtleties and dynamism of criminal behaviour that are almost impossible to encode effectively under a rules-based approach. Through continued exposure to data points, the machine ‘learns’ to grasp patterns in data or tasks beyond its pre-defined coding. This facilitates more accurate and predictive analytics from large, complex data sets and makes it easier to adapt quickly to new threats and methodologies. ML is particularly relevant to TM because of its ability to ‘make judgments’ about criminal behaviour, increasing the accuracy of risk assessments and thus reducing the rate of ‘false positive’ alerts (falsely alerting teams to suspected improper behaviour).

ML algorithms can be taught to detect and recognise suspicious behaviour and assess risks accordingly. For example, machines will learn to focus on the ‘bigger’ risks, knowing when to ignore unusual transactions that present no risk based on transaction records and customer behaviour. The greatest opportunity for AI/ML adoption lies in the monitoring of money laundering and terrorist financing transactions. Traditional systems detect very specific patterns and can be circumvented once criminals work out the rules. In addition, the results of these traditional models contain more noise than ‘risk signals’, because the net is cast wide so as not to miss potentially suspicious activity. If rule thresholds are relaxed to capture suspicious transactions that sit close to ‘normal’ activity, there will inevitably be a greater number of alerts requiring expensive manual review. Yet only a very small number of these alerts will relate to suspicious behaviour requiring escalation.
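To make the contrast concrete, the sketch below compares a static rule with a simple ML prioritisation model. It is a minimal illustration in Python with scikit-learn on synthetic data; the feature names, weights, and thresholds are entirely hypothetical, not a description of any production system.

```python
# Illustrative sketch: static rule vs. ML-based alert prioritisation.
# All features, weights, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical alert features: amount, 30-day transaction count,
# cash-deposit ratio, account age.
X = rng.random((5000, 4))
# Historical analyst dispositions: 1 = escalated, 0 = closed as false positive.
y = (0.7 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 5000) > 0.75).astype(int)

# Static rule: flag every alert whose amount feature exceeds a threshold.
rule_flags = X[:, 0] > 0.6  # generates many alerts, most of them noise

# ML alternative: learn from past dispositions and rank alerts by risk.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
risk_scores = model.predict_proba(X)[:, 1]

# Review the highest-scoring alerts first instead of treating all equally.
review_order = np.argsort(risk_scores)[::-1]
top_decile = review_order[: len(X) // 10]
print(f"Rule flags {rule_flags.sum()} alerts; ML queue leads with {len(top_decile)} highest-risk")
```

The design point is not that the rule disappears, but that analyst effort is ranked by learned risk rather than spread evenly across every threshold breach.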

ML technology (such as anomaly detection) can be used to identify previously undetected transaction patterns, data anomalies, and suspicious person-entity relationships. Rather than relying on static rules, this type of ML draws on known and emerging patterns and threats, making it more difficult for criminals to hide in the banking environment.

ML models, combined with the output of existing systems, can also be trained to identify characteristics or indicators of behaviour, highlighting when an activity is truly suspicious.
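As a minimal sketch of the anomaly-detection idea, the example below uses scikit-learn’s IsolationForest on synthetic account-activity data. The features and contamination rate are illustrative assumptions; a real deployment would draw on far richer transaction, counterparty, and behavioural features.

```python
# Minimal anomaly-detection sketch; data and features are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Mostly 'normal' account activity plus a few injected outliers.
# Hypothetical columns: average transaction amount, transactions per day.
normal = rng.normal(loc=[100.0, 5.0], scale=[20.0, 2.0], size=(980, 2))
outliers = rng.normal(loc=[900.0, 40.0], scale=[50.0, 5.0], size=(20, 2))
X = np.vstack([normal, outliers])

# No static rules: the model learns what 'usual' activity looks like
# and surfaces accounts that deviate from it.
detector = IsolationForest(contamination=0.02, random_state=42).fit(X)
labels = detector.predict(X)  # -1 = anomalous, 1 = normal
print(f"Flagged {(labels == -1).sum()} of {len(X)} accounts for review")
```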

ML can be applied to name screening, where systems are required to screen customer names against global lists of known criminals and blacklisted or sanctioned organisations and individuals. The challenge for many financial institutions (FIs) is to strike a balance between ‘fuzziness’ and accuracy. In other words, current text matching algorithms are not an effective tool for handling data capture nuances such as the order of names, titles, salutations, abbreviations, name variants, and common misspellings. The task is complicated further when dealing with common names, where it is difficult to pinpoint the exact individual. The prevailing rules-based approach is both onerous and manual, resulting in an increased workload for compliance as well as potential gaps in surveillance and monitoring. Applying ML to improve match criteria and predict the likelihood of a name match can significantly increase efficiency in identifying hidden links or relationships parsed from available data. Enriching the data with more contextual information about the entity (such as demographic, network, and behavioural data) can significantly improve the accuracy of the matching process. Other areas where ML is becoming more prominent include fraud detection, automated reporting, and enhanced surveillance, including monitoring of voice, video, text, and transaction patterns.
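The sketch below shows the kind of ‘fuzzy’ matching problem involved, using only the Python standard library. The normalisation steps, watchlist entries, and threshold are illustrative assumptions; production screening engines use far richer techniques (phonetics, transliteration, contextual enrichment).

```python
# Hedged sketch of fuzzy name screening; entries and threshold are illustrative.
import difflib
import re

TITLES = {"mr", "mrs", "ms", "dr", "prof"}

def normalise(name: str) -> str:
    """Lowercase, drop titles/punctuation, and sort tokens so that
    'Dr. Smith, John' and 'john smith' compare equal."""
    tokens = re.sub(r"[^a-z ]", " ", name.lower()).split()
    return " ".join(sorted(t for t in tokens if t not in TITLES))

def match_score(name: str, listed: str) -> float:
    return difflib.SequenceMatcher(None, normalise(name), normalise(listed)).ratio()

watchlist = ["John A. Smith", "Maria Gonzales"]  # hypothetical entries
for candidate in ["Smith, John", "Jon Smith", "M. Gonzalez"]:
    best = max(watchlist, key=lambda w: match_score(candidate, w))
    score = match_score(candidate, best)
    if score > 0.8:  # illustrative threshold
        print(f"Potential match: {candidate!r} ~ {best!r} ({score:.2f})")
```

Normalising before comparison handles name order, titles, and minor misspellings; the residual difficulty, as noted above, is that common names still match many individuals, which is where enrichment with contextual data earns its keep.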

Mitigating the Risks

Although AI and ML models benefit FIs in various ways, we must not overlook the potential risks associated with them. Firstly, the input data in ML models is vulnerable to risks such as biases in the data used for training. These data risks include incomplete, outdated, or irrelevant data; an insufficiently large and diverse sample; inappropriate data collection techniques; and a mismatch between the data used to train the algorithm and the actual input data encountered in operation. Furthermore, the algorithm design is susceptible to risks such as biased logic, flawed assumptions or judgments, inappropriate modelling techniques, coding errors, and the identification of spurious patterns in the training data. Finally, output decisions are vulnerable to risks such as incorrect interpretation of the output, inappropriate use of the output, and disregard of the underlying assumptions. All of these risks can be mitigated by adopting the right approach to operationalising and documenting the AI process, with a particular focus on deployment into production and an in-depth understanding of the models and algorithms used.
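One of the risks above, the mismatch between training data and live input data, has a well-established control: monitoring distributional drift. The sketch below computes a population stability index (PSI) for a single feature; the data is synthetic, and the bin count and thresholds are common rules of thumb rather than regulatory standards.

```python
# Hedged sketch of a train/serve drift check via the population
# stability index (PSI). Data and thresholds are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((a% - e%) * ln(a% / e%)) over shared bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside training range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
training = rng.normal(0.0, 1.0, 10_000)  # feature as seen at training time
live = rng.normal(0.4, 1.2, 10_000)      # same feature in production

# Common convention: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
print(f"PSI = {psi(training, live):.3f}")
```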

There are a few underlying factors that contribute to these risks. The cognitive biases of model developers or users can result in a flawed output. A lack of governance, or misalignment between the organisation’s values and individual employee behaviour, can yield unfavourable outcomes. A lack of technical rigour or conceptual soundness in algorithm development, training, testing, or verification may lead to an incorrect output, which in turn produces an unreliable algorithm. Even where vendors are reliable, flaws in the implementation of an algorithm, its integration with operations, or its use by end users can lead to inappropriate decision-making. Lastly, internal or external threat actors can gain access to the input data, algorithm design, or output, and manipulate them to introduce deliberately flawed outcomes. Hence, to effectively manage the risks of groundbreaking technology such as ML, FIs will need to establish a solid framework, restructuring and modernising their traditional risk management capabilities.

Benefits Outweigh the Cost

A potential deterrent for FIs when it comes to implementing AI/ML models is the cost. It is easy to overlook the cost of these models when we are enamoured of their capabilities, and in reality vendors can charge millions to integrate an AI/ML model into existing AML technologies. However, the benefits typically outweigh the cost in the long run. Deloitte conducted an exercise to validate the models at UOB, a leading FI in Singapore and Southeast Asia, where we found that ML was able to drive new levels of efficiency and effectiveness. For its transaction monitoring framework, UOB focused on optimising the detection of new, unknown suspicious patterns and on prioritising known alerts. The outcome was compelling and proved to be a step in the right direction, with a 5% increase in true positives and a 40% drop in false positives. The name screening module saw similarly positive results. To enhance the name screening process and improve detection, the module was designed to handle a wider range of complex name permutations. Simultaneously, it was designed to reduce the number of undetermined hits through enriched ‘inference’ features and the inclusion of additional customer profile identifiers. For its name screening alerts, there was a 60% and 50% reduction in false positives for individual names and corporate names respectively. The cost is a small trade-off for the many benefits that FIs can reap from using ML.

Apart from the validation performed at UOB, there are multiple case studies in the industry evidencing the efficiency gains from implementing AI/ML. In one such case, a Hong Kong SAR subsidiary of a large international bank sought to significantly reduce the number of human interventions required to clear alerts by leveraging AI/ML. An ambitious target was set to develop a fully automated solution, which would not only assign a probabilistic ‘score’ to alerts (based on the likelihood of possible criminal behaviour), but also issue well-reasoned, AI-generated recommendations to either escalate or close each case. Data from previous name screening alerts, including the decisions made by analysts in the review process, was used to ‘train’ the ML model. This ‘training’ was repeated several times until the model began to produce promising results and an internal proof-of-concept could be conducted to validate the solution. The main benefit of the FI’s ML-based name screening solution is that it greatly improves the efficiency of the investigation process, reducing the number of alerts that require manual intervention by an average of 35% (and by up to 50% in some jurisdictions). This, in turn, streamlines the review process and increases the time analysts can spend reviewing genuinely flagged alerts. What we can gather from the case studies of multiple FIs is that AI/ML has had a positive impact in improving the efficiency of name screening, reducing false positive rates, and cutting the number of alerts requiring human intervention.
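A minimal sketch of that triage pattern is shown below: a classifier is trained on historical alerts labelled with analysts’ escalate/close decisions, then emits a probabilistic score for new alerts. It is written in Python with scikit-learn on synthetic data; the features, weights, and auto-close threshold are hypothetical and would in practice be set through model validation and governance review.

```python
# Sketch of alert triage trained on past analyst decisions.
# Features, weights, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Hypothetical features per alert: match score, watchlist severity,
# customer risk rating, count of corroborating identifiers.
X = rng.random((8000, 4))
# Past analyst decisions: 1 = escalated, 0 = closed as non-match.
y = (X @ np.array([2.0, 1.5, 1.0, 0.5]) + rng.normal(0, 0.3, 8000) > 2.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
model = LogisticRegression().fit(X_tr, y_tr)

# Probabilistic score per alert; very low scores can be auto-closed
# (with documented rationale), the rest routed to analysts.
scores = model.predict_proba(X_te)[:, 1]
auto_close = scores < 0.05  # illustrative threshold, set via validation
print(f"Auto-closed {auto_close.mean():.0%} of {len(X_te)} test alerts")
```

The reduction in manual intervention comes from the auto-close band; everything else still reaches an analyst, now ordered by score rather than arrival time.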

Notwithstanding all the potential benefits it may bring, AI/ML is not yet a silver bullet for AML and combating the financing of terrorism. As capabilities evolve and the technology advances, we may well see more AI/ML models replacing elements of the traditional AML framework in the future. Humans remain pivotal in developing these models: making sure they meet regulatory requirements, evaluating the performance of the output, and making decisions on complex cases using predicted outputs. To gain the biggest return on their investment, FIs will need to ensure that their AI/ML solutions are designed with consideration for data quality and access, systems, processes, organisational structure, and the available technologies and technology providers. Establishing diverse cross-functional and cross-regional teams and securing stakeholder buy-in will also be paramount to a successful outcome. In addition, robust governance frameworks, ongoing training on financial crime risk management, and continued development of regulatory technology solutions will help ensure the continued success of AI/ML models.


Ms Radish Singh is the Financial Crime Compliance Leader for Deloitte in Southeast Asia. She is based in Singapore and has extensive experience in compliance and AML, corporate governance, risk, and financial services regulations.