
Artificial Intelligence & Human Rights in Sport

November 11, 2025


Authors – Alison Biscoe, Marie Porchet, Article One (Paula Williams, Keri Lloyd)

Centre for Sport and Human Rights and Article One

About the blog 

On 6 March 2025, the Centre for Sport & Human Rights (the Centre) convened members of its multi-stakeholder network for its first quarterly meeting of the year. The meeting brought together diverse stakeholders to discuss how human rights can apply to and shape the use of artificial intelligence in sport. Alongside the Centre's staff, speakers included Paula Williams (Associate Director – Advisory Services) and Keri Lloyd (Manager – Advisory Services) from Article One.

This blog is part of a new, collaborative series from the Centre for Sport & Human Rights and was originally published on the Centre’s website. 

Overview 

Artificial intelligence (AI) is transforming the world around us, and sport is no exception. From scouting promising young athletes to preventing injuries, mitigating online abuse, and enhancing fan experience, AI is becoming an integral part of global sport, radically changing the way sport is practised, experienced and governed.  

This evolution opens opportunities to make sport safer, fairer, and more accessible, while creating new ways for fans to connect and for athletes to reach their full potential. At the same time, some unintended consequences present challenges that must be carefully managed to ensure technology strengthens, rather than undermines, human dignity, security, and well-being. 

A growing body of standards highlights the emerging risks associated with AI and sets a benchmark of expectations to ensure it is used in a way that respects human rights.

These frameworks recognise that embedding human rights safeguards into AI and its use is key to realising positive impacts and mitigating negative ones. Together with the UN Guiding Principles on Business and Human Rights (UNGPs), they provide actors across the world of technology and sport with a roadmap for upholding human rights when developing, deploying, and using AI.

The growing intersection between AI, sport, and human rights

We are currently experiencing a transformative shift in the world of sport driven by AI, affecting how sport is played, managed and engaged with. AI is already being leveraged across virtually every facet of sport, and the AI-in-sports market is projected to grow sevenfold over the next five years. The impacts of this trend will be felt in elite and grassroots sport alike, as well as at major sporting events and at a governance level. The increased application of AI in sport will influence the professional and personal lives of athletes, fans, workers, journalists, officials and spectators.

While AI is growing across sectors, its application in sport can be unique. It is changing current ways of working in sport and who is involved, and it is bringing new activities and new actors into the field. At this inflexion point, it is especially important to examine how and where human rights are being taken into account. Even when designed to promote positive social advances, AI-driven technologies can have unintended negative consequences for the fundamental rights of people and communities. Article One presented four use cases of current AI technologies to demonstrate both the depth and breadth of AI application in sport and the potential to impact rightsholders: talent identification, athlete optimisation and injury prevention, physical security and curbing abuse, and fan engagement.

Talent identification 

AI tools are increasingly used for talent identification and scouting in sport. By analysing athletes' performance metrics, movement, skills or cognitive decision-making ability, these tools can help predict future success and identify high-potential athletes – as illustrated by initiatives led by Intel and the IOC and by the NBA. In some of these applications, athletes, their parents or coaches can upload their own data for AI scouting tools to analyse. These new scouting techniques can lower scouting costs and widen participation opportunities for athletes, clubs and leagues, particularly in remote regions.

Nevertheless, because these algorithms rely on the collection of extensive sensitive data, athletes' right to privacy could be at risk if that data is misused. Without clear standards on the responsible use of AI in talent identification and appropriate safeguards, minors may be particularly vulnerable, given their limited ability to give informed consent to data collection.

Because AI models learn from the data with which they are provided, over time these tools may converge on very narrow definitions of 'talent' if safeguards are not in place. Athletes who do not conform to the normative profiles or performance patterns on which the tools have been trained could be systematically disadvantaged and face discrimination. Particularly at risk are athletes from groups underrepresented within the sport and those whose physical attributes deviate from the model's learned expectations of success. Bias and systemic discrimination within AI models can be hard to detect, as their complexity often obscures how conclusions are reached.

Additionally, extensive or excessive AI analysis of marginal factors, which may be inaccurate or beyond the control of the child athlete, could heighten anxiety and impact their general well-being and mental health.

Athlete optimisation and injury prevention 

AI is also used to provide personalised, data-driven insights that can optimise training programmes, refine strategies, analyse opponent behaviour and help detect or prevent injuries. This is increasingly common in professional sport, with projects led by clubs or leagues such as the NFL, PSG, or Liverpool FC.  

Some clubs are using AI-powered wearables and other AI technologies that leverage athletes' sensitive data to analyse flaws in players' performance, fine-tune technique or predict patterns in athletes' movement. These technologies often aim to prevent or reduce risks for athletes. Yet this aim can be undermined by unintended consequences, which in turn risk eroding trust between athletes and the organisations deploying these tools. Privacy is a central concern, with ongoing conversations around the extent to which players can access or control how their data is used. Organisations like FIFPro have been active in defining and supporting athletes' rights in relation to the ownership and control of their own data.

For younger athletes in particular, the risks may be heightened owing to the power differential between the young athlete and their current or potential future employer. The athlete or their parent may waive their privacy rights without fully understanding the implications or feeling empowered to object. Athletes may consent to data collection in the belief that it safeguards their health, without realising that the same information could later be used against them, for example in contract negotiations.

In addition to privacy, another critical risk relates to mental health. While there has been significant research on athlete mental health in general, research into how AI-powered performance-optimisation tools can affect it is limited. However, the stakes are extremely high in elite and professional competitive sport, and athletes are often under extreme pressure. In this context, it is reasonable to assume that, as discussed with respect to child athletes, excessive AI analysis could increase anxiety and reduce overall well-being. The impact will be heightened if the athlete perceives AI outputs as highly influential to their future success or career progression.

Physical security and curbing abuse

Historically, AI-driven facial and audio recognition tools have predominantly been the remit of state actors. However, increasingly private actors such as stadium operators, clubs, federations, and event organisers are deploying these tools to address security issues, and risks of physical, verbal or online abuse. 

Recent examples include the Dutch Football Federation, which supported a pilot of technology to identify discriminatory and harmful chants in stadiums, and the Social Media Protection Service led by FIFPRO and FIFA to prevent online abuse of athletes.

While reducing risk and curbing abuse is a net positive for sport, without robust safeguards the application of AI for audio and facial recognition can compromise privacy rights and lead to intrusive surveillance. Decisions based on flawed or biased data can have serious consequences for fans, including profiling and misidentification that lead to discrimination, exclusion or legal complaints. Athletes, staff, and stadium workers may also be subject to routine monitoring as facial recognition becomes embedded in entry systems and everyday operations, raising questions around privacy, consent and workers' rights.

Fan engagement

Uses of AI for fan engagement are wide-ranging and constantly evolving but largely aim to provide fans with experiences that are more personalised, insightful, eye-catching and immediate. Prevalent examples include engagement with AI chatbots, targeted offers, generation of media coverage, and live analysis of athlete and team performance, among many others. Emerging uses involve deeper engagement, like interactive experiences through augmented reality and use of emotion recognition to tailor stadium advertising or to create live experiences based on how spectators are feeling in real-time.  

Here again, there are common issues around potential bias, discrimination and privacy. Additionally, emotion recognition and personalised targeting could fail to respect privacy by collecting and using data without consent, or be used in manipulative ways, for example to target gambling products at vulnerable groups.

These risks can extend to athletes as well. Virtual and augmented reality experiences, and other features that aim to bring fans closer to athletes, raise concerns about the potential misuse and manipulation of an athlete's image and voice. This could pose risks to athletes' dignity, as has been highlighted in relation to deepfake technology.

Conclusion

Key human rights risks highlighted here include bias and discrimination, child rights and safeguarding, mental health, and privacy rights. Beyond these, other risks raised include questions about the future of work and how AI can affect workers' rights, as well as rights to intellectual property, including image rights and copyrighted material.

The application of AI to sport is somewhat unique. While we have considered at length the potential risks to users of the technology, as AI becomes more mainstream and ubiquitous across sport at all levels, we will also need to consider the impacts on athletes who do not have access to the technology, and the ways in which this may limit their potential or their ability to engage with and participate in sport.

To learn more and explore what steps your organisation can take, please reach out at hello@articleoneadvisors.com.