This article was published in partnership with Braithwate
We all interact with companies providing us financial products and services. Most of these interactions appear seamless. But all financial transactions, whether with traditional banks or fintech companies, are predicated on identity verification (IDV).
Regulations require financial services providers to know the identity of an individual before they are onboarded as customers (KYC rules). This helps to prevent money laundering and other illegal activity.
Previously, customers visited bank branches for identity verification and document validation.
The proliferation of online banking, and the rise of fintechs with no physical branches, have created the need for remote IDV.
IDV technology allows the individual to prove their identity by submitting images of their face and identity documents. It replaces the role of the bank teller, who previously would have checked the document against the individual who turned up at the counter, by seeking to determine the likelihood that the identity document is genuine and belongs to the individual presenting it.
After this is completed, the IDV provider generates a report documenting the likelihood that the identity document belongs to the individual, alongside any red flags that may have been triggered by the algorithm.
The bank must then decide how to proceed, based upon the IDV report and its own internal thresholds.
The ethical implications are complex.
Whilst such technology increases efficiency, it also poses new risks. If online IDV is the only route to a product and the algorithm fails to correctly match the individual, that individual has no other channel through which to access the service with that provider. This creates barriers to participation in banking and to time-critical products such as access to credit.
The ML models that enable IDV perform two fundamental jobs: extracting a numerical representation of each facial image, and comparing those representations to decide whether the images belong to the same person.
To conduct these tasks, the ML model is trained to assign a numerical feature vector to each facial image it is presented with: images belonging to the same individual generate vectors with a high similarity score, while images of different people generate vectors with a low similarity score. The score is measured against an agreed threshold: pairs scoring at or above the threshold are deemed to belong to the same person, while those below it fail the similarity test.
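The comparison step can be sketched as follows. This is a minimal illustration, not a production system: the cosine similarity measure, the 0.6 threshold and the toy vectors are all assumptions chosen for demonstration; real IDV providers use high-dimensional embeddings and carefully calibrated thresholds.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(selfie_vec: np.ndarray, document_vec: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Pass the similarity test if the score is at or above the threshold."""
    return cosine_similarity(selfie_vec, document_vec) >= threshold

# Toy vectors standing in for model-generated embeddings
a = np.array([0.9, 0.1, 0.4])
b = np.array([0.85, 0.15, 0.38])  # near-identical direction: high similarity
c = np.array([-0.2, 0.9, 0.1])   # different direction: low similarity

print(same_person(a, b))  # True: deemed the same person
print(same_person(a, c))  # False: fails the similarity test
```

Note that the choice of threshold trades off false matches against false non-matches: raising it rejects more impostors, but also more legitimate customers.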
Whilst such ML models are reported, on average, to outperform humans conducting the same checks, thanks to advances in deep learning and computing power, they remain prone to inaccuracies. This is because the performance of ML models is contingent on the quality of their training data.
Companies relying on ML tools to power their IDV must therefore consider the quality of the datasets used to train those tools.
The quality of the datasets will be driven by both intrinsic and extrinsic factors. Intrinsic factors include diversity of gender, skin tone, age and facial geometry, whereas extrinsic factors include the background environment, image quality, facial expression and facial decoration.
A training dataset that is insufficiently representative can lead to poor model performance on under-represented populations, even if global metrics suggest strong performance.
If the dataset is deficient in key intrinsic factors such as skin tone and gender (and particularly an intersection thereof), the model will perform differently on individuals represented by those factors, leading to algorithmic bias.
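Differential performance only becomes visible when error rates are broken down by group. The sketch below, using hypothetical evaluation records and group labels, computes a false non-match rate (FNMR) per demographic group: a global accuracy figure would hide the gap between the two groups.

```python
from collections import defaultdict

def false_non_match_rates(records):
    """Compute the false non-match rate (FNMR) per demographic group.

    records: iterable of (group, genuine_pair, model_accepted), where
    genuine_pair=True means both images truly belong to the same person.
    """
    totals = defaultdict(int)
    misses = defaultdict(int)
    for group, genuine, accepted in records:
        if genuine:  # only genuine pairs can produce a false non-match
            totals[group] += 1
            if not accepted:
                misses[group] += 1
    return {g: misses[g] / totals[g] for g in totals}

# Hypothetical evaluation records: (group, genuine_pair, model_accepted)
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, False),
    ("group_b", True, False), ("group_b", True, False),
]
print(false_non_match_rates(records))  # {'group_a': 0.25, 'group_b': 0.75}
```

Here group_b's legitimate customers are rejected three times as often as group_a's, even though a pooled metric would report a respectable-looking 50% of genuine pairs passing.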
Algorithmic bias results in two outcomes, each detrimental to both the individual and the business: false non-matches, which wrongly reject legitimate customers and lock them out of services, and false matches, which wrongly accept impostors and expose the business to fraud.
The only effective solution is to implement robust AI Risk Management systems and processes.
AI Risk Management is the process of identifying, assessing, mitigating and preventing AI risks. Concrete steps must be taken at each stage of the AI lifecycle to reduce the likelihood of bias.
Risk management approaches must be adapted to reflect the novel risks AI poses. For example, as AI systems continuously learn and evolve, and performance tends to decay over time, they must be carefully monitored on an ongoing basis. This requires an automated and scalable solution.
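One simple form of ongoing monitoring is to track the rolling pass rate of verifications in production and alert when it drifts below an agreed floor. This is a minimal sketch under assumed parameters (window size, alert floor); real deployments would also segment the metric by demographic group and image conditions.

```python
from collections import deque

class MatchRateMonitor:
    """Alert when the rolling verification pass rate drops below a floor.

    A minimal sketch of automated, ongoing performance monitoring;
    window and floor values here are illustrative assumptions.
    """

    def __init__(self, window: int = 100, floor: float = 0.90):
        self.outcomes = deque(maxlen=window)
        self.floor = floor

    def record(self, passed: bool) -> bool:
        """Record one verification outcome; return True if an alert fires."""
        self.outcomes.append(passed)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data to judge yet
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate < self.floor

monitor = MatchRateMonitor(window=10, floor=0.8)
alerts = [monitor.record(p) for p in [True] * 8 + [False] * 2]
print(any(alerts))  # False: pass rate is exactly 0.8, at the floor
more = [monitor.record(False) for _ in range(3)]
print(any(more))    # True: pass rate has decayed below the floor
```

In practice such an alert would trigger investigation, retraining or threshold recalibration rather than automatic action.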
Managing AI risks also requires technical assessment of the AI system’s code and data. Best practice entails the independent auditing, testing and review of AI tools against bias metrics and other industry standards.
We have the technical expertise to assess the quality and performance of ML models, and the representativeness of their training datasets, to support IDV providers in mitigating bias risks. We also support businesses in designing and establishing policies and processes to effectively govern the use of AI, such as training, governance and accountability, and other operational controls.
To learn more about how you can identify and mitigate AI bias issues, contact us to request a demo!