Shared Lessons for Promoting Human Rights in Generative AI
June 13, 2024
By Sarah Ryan
In the rapidly evolving landscape of artificial intelligence (AI), companies are increasingly recognizing the need to evolve their responsible AI strategies to address the unique challenges and opportunities presented by generative AI. This subset of AI, which includes technologies capable of creating content such as text, audio, and images, may pose distinct human rights and operational risks. These include discriminatory outcomes driven by biases in training data, the inadvertent exposure or misuse of private or sensitive data, and the creation of highly convincing fake content that can be used to spread misinformation.
As companies explore how generative AI can be used to enhance their businesses, many are also developing new frameworks, and adapting existing ones, to ensure these advancements align with human rights standards, regulatory requirements, and societal values. By prioritizing human rights and responsible AI considerations, companies can not only mitigate potential risks but also build trust with consumers and stakeholders.
This spring, representatives from companies participating in Article One’s Business Roundtable on AI and Human Rights discussed how to adapt and evolve approaches to responsible AI in response to the rise of generative AI. Companies are pursuing some common approaches, including in relation to governance, policy, and process.
How are human rights and responsible AI teams responding to this moment?
First, they’re not reinventing the wheel. We heard from many companies that they were focused on updating, not reinventing, their responsible AI programs to meet the new and exacerbated risks of generative AI. While generative AI does present novel risks, existing responsible AI programs are designed to address many of them. Generative AI is certainly new, but existing governance structures for managing technology risks are increasingly robust across many companies.
Balancing innovation and corporate responsibility. Roundtable members shared that responsible AI programs can support and enable effective innovation. Proactive risk assessment and management tailored to foreseeable risks allows product teams to integrate generative AI technologies efficiently while remaining confident that risks are effectively managed.
Some of the concrete ways in which companies are adapting to a new world of generative AI include:
- Adapting governance structures. A few companies have stood up generative AI committees to address both the risks and opportunities of generative AI, including strategizing how the company can best leverage the technology. New functions, such as human resources and diversity and inclusion teams, have entered responsible AI conversations, which both raises coordination challenges and enables more cross-functional collaboration.
- Maintaining existing responsible AI principles. Interviewees largely stated that their existing responsible AI principles were not changing due to generative AI. Indeed, they designed their AI principles to be evergreen, regardless of the specific technology or use case. However, the specific approaches, processes, and governance mechanisms that activate those principles, including how they are described in public AI principles documents, may need to be updated.
- Identifying legal risk. At the policy level, responsible AI programs are increasingly considering corporate legal risks, including those related to intellectual property, copyright, and privacy, alongside rightsholder-focused human rights impacts. While some companies’ responsible AI programs account for potential legal risks related to generative AI, others keep those risks separate from human rights risks related to generative AI.
Companies have also evolved their responsible AI processes to respond to novel and exacerbated risks of generative AI and the increased volume of generative AI assessments, including:
- Updating AI risk assessments. Many Roundtable member companies are updating their approaches to risk assessment to include questions about specific generative AI risks. These responsible AI assessments are increasingly integrated alongside other core product assessments, including legal and privacy reviews. To accommodate a steep increase in AI risk assessments, much of it attributed to new generative AI projects, many companies are updating their review processes to make them more efficient and scalable. Additionally, some companies have been able to use this increased volume to push for more staffing resources so their teams can review and evaluate these assessments effectively and efficiently.
- Gaps in responsible AI procurement. Companies suggested that responsible procurement processes, including vendor assessments and responsible AI requirements, have largely not caught up with the potential risks of generative AI from third parties. Some companies even described pushback from vendors when asking about their data sources and testing practices.
- Risks related to data enrichment workers. Companies are starting to consider human rights risks related to data enrichment workers, the people tasked with enhancing, refining, and augmenting raw data to make it more valuable for AI. Risks may include excessive working hours, low wages, unstable work, and mental health risks from reviewing graphic content.
As companies push to integrate generative AI, a commitment to human rights-respecting technology remains paramount. Companies that build a culture of responsible product development set themselves up for success. By integrating responsible AI guidelines, transparent practices, and proactive risk management from the outset, they not only mitigate potential pitfalls but also foster trust and loyalty among rightsholders and stakeholders. A forward-thinking approach to human rights and generative AI can ensure innovation is sustainable, socially responsible, and well-received in the market.
For more information on Article One’s work in this area, please get in touch with us at hello@articleoneadvisors.com.