
The Human Rights Impacts of AI in the Financial Services Industry

April 2, 2025


By Sarah Ryan

Artificial intelligence (AI) is revolutionizing financial services, increasing efficiency, automating decision-making, and enhancing fraud detection. However, alongside its benefits, AI presents significant human rights risks, particularly for marginalized communities. Bias in AI-driven decision-making can lead to financial exclusion, job displacement, privacy impacts, and opportunities for fraud. Without proper oversight and governance, AI can also enable manipulative and predatory financial practices.  

While these risks are not new for the sector, the use of AI, including the increased use of generative AI, may exacerbate them due to its ability to rapidly scale biased decision-making, generate deceptive or manipulative content, and automate complex financial processes without sufficient oversight or transparency. This blog explores three key categories of risk: the unintended consequences of AI, intentional product misuse, and a lack of governance. In laying out these risks, we underscore the need for strong governance and accountability mechanisms to identify, address, and mitigate them. 

1. Unintended Consequences 

Discrimination, Bias in Decision-Making, and Exclusion 

AI models used in credit scoring, loan approvals (e.g., mortgages), and fraud detection can reflect or even amplify biases present in historical data, disadvantaging vulnerable and minority groups. A major issue is that lower-income individuals often have less available data in financial systems, leading to less precise AI outcomes and potential discrimination. 

These biases can result in lower credit scores, fewer loan approvals, and worse loan packages for marginalized applicants. Additionally, AI-driven financial services may overlook non-traditional credit histories, excluding individuals without formal banking records—such as refugees or those with unstable incomes. When automated decision-making occurs without human oversight, the risks of exclusion are heightened. 

Automation and Job Displacement 

Generative AI-driven automation is rapidly transforming the banking industry, particularly in customer service, underwriting, and risk assessment. A report by Citibank predicts that banking may be the hardest-hit industry, with 54% of jobs potentially displaced by AI and another 12% augmented by it. These job losses are likely to disproportionately impact lower-income workers, deepening economic inequality and limiting access to stable employment.  

In addition, automation may reduce the availability of personalized financial services, as AI-driven chatbots and decision-making tools replace human customer service representatives. This shift could create barriers for individuals who require tailored financial advice or assistance navigating complex financial systems. Without proactive policies to retrain and support displaced workers, AI-driven automation may also exacerbate social and economic disparities. 

2. Product Misuse

Privacy and Cybersecurity 

AI systems rely on vast datasets, raising concerns about data breaches and unauthorized use of personal financial information. Weak safeguards could compromise individuals’ right to privacy and expose them to financial harm. 

Additionally, AI-powered financial systems are attractive targets for cybercriminals. Hackers can manipulate AI-driven systems to gain unauthorized access, potentially leading to large-scale identity theft and financial fraud. The use of AI to generate sophisticated phishing attacks, deepfake scams, and automated hacking techniques further increases the risk of financial exploitation. Furthermore, financial institutions may struggle to detect and mitigate these evolving cyber threats in real time, leaving consumers vulnerable to monetary loss and reputational damage. 

Fraud 

Malicious actors – including organized crime syndicates – can exploit AI-powered financial tools to engage in fraud and money laundering. AI can be manipulated to bypass traditional fraud detection methods, making it easier for criminals to exploit financial systems undetected. 

AI can also generate convincing synthetic identities across different types of media, including AI-generated audio and video, which fraudsters use to open fraudulent accounts, take out loans, or execute illicit transactions without detection. If industry and regulatory measures fail to keep pace with these developments, AI-enabled fraud could become increasingly sophisticated and difficult to combat. 

3. Governance

Manipulative and Predatory Practices 

AI-driven hyper-targeted advertising can be used to promote high-risk financial products, such as payday loans, to vulnerable individuals. The hyper-personalization of these ads takes advantage of confirmation bias, aligning with consumers' existing beliefs and preferences and pushing them to engage with the ad and buy the product. It may also exploit consumers' vulnerabilities. Without human rights-informed guardrails, AI systems could exacerbate financial precarity by directing exploitative lending options toward those least able to afford them. 

Transparency, Accountability, and Grievance Mechanisms 

Many AI-driven systems operate as “black boxes,” making it difficult for consumers to challenge unfair decisions. If an AI system wrongly denies credit, insurance, or other financial services, users often have no clear path for appeal—and may not even realize an AI system made the decision. 

This lack of transparency makes it difficult for affected individuals to recognize and report AI-related harms, further entrenching systemic discrimination and financial exclusion. Company approaches to responsible AI must prioritize explainability, fair auditing practices, and accessible grievance mechanisms to prevent and address these issues. 

Recommendations

Article One has worked with the world's leading AI developers to assess and mitigate human rights risks related to this technology. As part of those engagements, we've seen firsthand how AI is reshaping our world. Our work with these companies suggests that strong governance frameworks, transparency measures, and effective grievance mechanisms are essential to ensure AI serves as a tool for empowerment rather than harm. This is no less true for the financial sector than it is for technology companies.  

Financial services companies should build on the knowledge and success of AI developers to ensure their use of the technology upholds corporate values, respects human rights, and improves the experiences of customers. Key steps financial companies should consider include:  

  • Publishing a commitment to respecting human rights throughout the company's business operations, including the development and deployment of AI systems. 
  • Developing and implementing Responsible AI principles to guide how the company develops and deploys AI systems. 
  • Conducting human rights assessments of the company's development and deployment of AI systems, holistically and/or at the individual product level. 
  • Conducting Responsible AI assessments to identify and assess potential unintended consequences of AI use cases, and developing guardrails to mitigate those risks. 
  • Publishing information on how the company deploys AI, including how models are trained, assessed, used, and monitored. 

To learn more and explore what steps you and your company can take to advance your approach to responsible AI, please reach out at hello@articleoneadvisors.com.