AI and the Future of Work: Assessing the Human Rights Implications of Job Displacement

June 26, 2025


By Sarah Ryan and Cecely Richard-Carvajal

Introduction 

As artificial intelligence (AI) continues to evolve and scale across industries, it is transforming the world of work in profound ways. Organizations are using AI to automate routine tasks, enhance productivity, and drive innovation. At the same time, these shifts are raising concerns about workforce displacement, the erosion of job quality, and the long-term implications for labor market participation, particularly for early-career workers. From a human rights perspective, these developments warrant close attention. 

AI’s impact on the workforce is not occurring in a vacuum. It intersects with existing labor market inequalities and governance challenges. Entry-level roles in customer service, software engineering, administration, and even creative sectors are increasingly vulnerable to automation. While some companies emphasize the potential for AI to augment rather than replace jobs, in practice, many have begun to restructure workforces, scale back hiring, and decrease staffing in areas where AI tools are seen as capable of handling tasks with reduced human input. These trends have implications for the human rights of workers. 

Human Rights Risks 

While multiple human rights may be relevant when considering AI's impact on the future of work, four stand out as most salient: 

The Right to Work and Just Conditions (UDHR Article 23) 

The right to work includes not only access to employment, but also the opportunity to earn a living through freely chosen and productive work. As AI displaces roles, particularly at the entry level, access to this right may be compromised. Automation is increasingly targeting roles that once served as steppingstones for early-career professionals, creating the risk of a weakened talent pipeline and limited opportunities for growth and skill development. In some cases, AI is also altering the nature of existing jobs, with increased reliance on algorithmic oversight and performance metrics that may undermine job autonomy and reduce overall job quality. 

Moreover, the shift toward short-term, gig-based, or contract-based roles to support AI-integrated workflows may increase labor precarity. Workers operating in these models, including data enrichment workers, often lack access to basic protections, benefits, or long-term security, further challenging the realization of just and favorable work conditions. 

The Right to Non-Discrimination (UDHR Article 2) 

Bias in AI systems is a well-documented concern, particularly in hiring, performance management, and promotion. If left unaddressed, such bias can reinforce existing structural inequalities in the workplace, especially along lines of race, gender, disability, and socioeconomic status. When AI systems are trained on incomplete or non-representative data, or when decisions are made without adequate human oversight, there is a risk of perpetuating discrimination in ways that may be difficult to detect or contest. 

Additionally, the distribution of AI-related training and employment opportunities remains uneven and inequitable. Workers in under-resourced communities may have limited access to upskilling programs or tools needed to adapt to AI-driven changes, further entrenching disparities and limiting equitable access to future-oriented roles. 

The Right to Privacy (UDHR Article 12) 

AI technologies are increasingly being deployed to monitor employee performance, behavior, and communications. From productivity tracking software to biometric analysis, surveillance practices enabled by AI raise important questions about the limits of employer oversight and the right to privacy in the workplace. When monitoring systems are implemented without consent, transparency, or proportionality, they may infringe on workers’ ability to perform their duties free from undue intrusion. 

These concerns are particularly acute when workers lack information about how surveillance tools operate or have limited recourse to challenge their use. The risk is not only legal but also reputational, as perceptions of overreach can damage trust between employers and employees. 

The Right to Social Security (UDHR Article 22) 

As companies adopt AI to streamline operations and reduce costs, some workers may be displaced without adequate support or transition pathways. The right to social security entails protection in the event of unemployment, as well as the ability to access retraining and reintegration into the workforce. Yet current reskilling initiatives may not be keeping pace with the speed of AI-driven transformation, and safety nets vary widely across sectors and regions. 

This creates a potential gap between those who can adapt quickly to new AI-enhanced roles and those who are left behind. Ensuring that displaced workers are supported, not only through financial mechanisms but also through accessible, inclusive training programs, is essential for mitigating long-term economic exclusion. 

Next Steps 

The adoption of AI in the workplace presents a complex set of trade-offs. While the technology offers real opportunities for operational efficiency and innovation, it also creates new risks, especially for vulnerable or early-career workers. These risks intersect directly with internationally recognized human rights, including the rights to work, privacy, non-discrimination, and social security. 

Companies, governments, and civil society must work together to embed human rights into the very architecture of AI adoption—from workforce planning and training programs to algorithmic transparency and worker consultation. 

Key actions include: 

  • Protecting early-career roles by adapting them to new AI environments rather than eliminating them.
  • Investing in inclusive reskilling and upskilling programs that go beyond technical training to foster ethical reasoning and cross-disciplinary collaboration.
  • Maintaining human oversight in decisions that affect livelihoods, including hiring, evaluation, and dismissal.
  • Conducting human rights due diligence to anticipate and address potential harms before they materialize.
  • Creating clear, transparent communication channels with workers about how AI is changing their roles, if and how it is used in tracking their work performance, and what support is available.

As AI continues to shape the future of work, companies, policymakers, and civil society will need to engage collaboratively in designing governance frameworks that account for these rights. Doing so will not only help mitigate potential harms but also foster more inclusive, resilient, and sustainable workplaces in the long term. 

To learn more and explore what steps you and your company can take to advance your approach to responsible AI, please reach out at hello@articleoneadvisors.com.