As Artificial Intelligence (AI) becomes more embedded in financial services—from credit scoring to customer support—its potential to improve consumer outcomes is growing rapidly. But with this promise comes risk, especially for women and other marginalised groups.
In our response to the UK Parliament’s Public Accounts Committee inquiry on AI in financial services, we at the Centre for Protecting Women Online highlight both the opportunities and challenges AI presents. Done well, AI can enhance financial inclusion by analysing non-traditional data to support underserved consumers. It can also personalise services, increasing accessibility for those in remote areas or with disabilities.
However, left unchecked, AI risks entrenching historical bias. Models trained on historical financial data can inadvertently replicate, and even amplify, the discriminatory practices embedded in that data. Because such data reflects decades of prejudice, particularly against women and minoritised groups, these systems can perpetuate existing disparities. For example, studies have shown that gender bias can enter AI-driven credit scoring through proxy variables such as part-time work patterns or ZIP codes, features that mirror persisting societal inequities, and can ultimately produce decisions biased against women. Worse still, even a model that starts out unbiased can drift over time or be manipulated, requiring constant monitoring and adjustment.
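To make the proxy-variable mechanism concrete, here is a minimal, fully synthetic sketch in Python (using scikit-learn). Every number, feature name, and threshold is invented for illustration: a credit model is trained without gender as an input, yet a feature correlated with gender still opens a gap in approval rates.

```python
# Synthetic illustration: a credit model that never sees gender can
# still discriminate through a correlated proxy (here, part-time work).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)  # 0 = men, 1 = women (toy population)

# Proxy: women are assumed more likely to have part-time work
# histories, reflecting real-world labour-market inequities.
part_time = rng.random(n) < np.where(gender == 1, 0.45, 0.15)
income = rng.normal(35_000, 8_000, n) - 8_000 * part_time

# Historical labels encode past lending decisions that penalised
# lower-income (hence disproportionately part-time) applicants.
approved = (income + rng.normal(0, 4_000, n)) > 30_000

# Train on income and part-time status only; gender is never an input.
X = np.column_stack([income / 10_000, part_time])  # scaled for stability
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g, label in [(0, "men"), (1, "women")]:
    print(f"approval rate, {label}: {pred[gender == g].mean():.1%}")
# The printed rates show a marked gap against women, even though
# gender appears nowhere in the model's inputs.
```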
To address these risks, our response emphasises the importance of both designing AI with fairness built in and ensuring rigorous, continuous oversight. There is a strong case for sharing and using sensitive data, not to discriminate, but to audit and mitigate bias. Controlled access to attributes such as gender or ethnicity, protected by robust security measures, would enable auditors to detect and correct unfair outcomes. Regular algorithmic audits, transparent decision-making processes, and clear consumer recourse mechanisms are vital safeguards.
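As an illustration of what such an audit could look like, the following sketch assumes an auditor with controlled access to a (binary, for simplicity) gender attribute alongside model decisions and repayment outcomes, and computes two common disparity metrics by hand. The inputs here are randomly generated stand-ins, and the 0.05 tolerance at the end is a placeholder, not a recommended threshold.

```python
# Hypothetical audit sketch: with controlled access to a sensitive
# attribute, an auditor can quantify disparities in model decisions.
import numpy as np

def audit(decisions, outcomes, sensitive):
    """Report two common disparity metrics across two groups."""
    groups = np.unique(sensitive)
    assert len(groups) == 2, "sketch assumes a binary attribute"

    # Demographic parity: difference in approval rates between groups.
    rates = [decisions[sensitive == g].mean() for g in groups]

    # Equal opportunity: difference in true-positive rates, i.e.
    # approval rates among applicants who genuinely repaid.
    tprs = [decisions[(sensitive == g) & outcomes].mean() for g in groups]

    return {
        "demographic_parity_gap": abs(rates[0] - rates[1]),
        "equal_opportunity_gap": abs(tprs[0] - tprs[1]),
    }

# Toy inputs standing in for model decisions, repayment outcomes,
# and the securely shared gender attribute.
rng = np.random.default_rng(1)
sensitive = rng.integers(0, 2, 5_000)
outcomes = rng.random(5_000) < 0.7                        # repaid
decisions = rng.random(5_000) < np.where(sensitive == 1, 0.5, 0.6)

print(audit(decisions, outcomes, sensitive))
# An audit policy might flag any gap above an agreed tolerance, e.g. 0.05.
```

A regulator-agreed tolerance would turn numbers like these into pass or fail audit criteria, which is precisely where the guidance discussed below is needed.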
Another challenge lies in defining fairness itself. The many formal definitions of fairness can conflict, and it is often mathematically impossible to satisfy them all at once. Without clear regulatory guidance, financial institutions are left to make these difficult ethical trade-offs alone. We therefore emphasise that clear guidance is needed on which definitions of fairness to apply and what level of bias, if any, can be tolerated. Decisions about these standards should involve collaboration between responsible AI experts, domain specialists (such as financial professionals), and regulators.
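The tension can be shown with a few lines of arithmetic. The sketch below uses invented repayment rates to show that when groups genuinely differ in base rates, even a perfect classifier cannot satisfy demographic parity and equal opportunity at the same time.

```python
# Why fairness definitions conflict: a worked toy example.
# Assume the true repayment (base) rates differ between two groups.
base_a, base_b = 0.8, 0.6   # fraction of each group who would repay

# A perfectly accurate model approves exactly the repayers, so
# true-positive rates are equal (equal opportunity holds) ...
# ... but approval rates equal the base rates, violating demographic
# parity (equal approval rates) by 0.8 - 0.6 = 0.2.
print(f"parity gap under a perfect model: {abs(base_a - base_b):.2f}")

# Conversely, force equal approval rates of 0.7 in both groups.
# Best case, repayers are approved first; group A must still deny
# some repayers, while group B approves all of its repayers.
target = 0.7
tpr_a = min(target, base_a) / base_a   # 0.875: some repayers denied
tpr_b = min(target, base_b) / base_b   # 1.000: all repayers approved
print(f"equal opportunity gap under forced parity: {abs(tpr_a - tpr_b):.3f}")
```

Neither outcome is obviously "fair", which is why we argue the choice among definitions should be made collaboratively and transparently rather than left to individual institutions.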
With deliberate safeguards and collaboration between regulators, developers, and researchers, AI can become a tool not just for innovation but for equity. As the UK shapes its AI future, fairness must be built in from the start.
This blog post was authored by Dr. Ángel Pavón Pérez.