The Role of Corporate Governance in Responsible AI
August 8, 2024
By Sarah Ryan
As artificial intelligence (AI) becomes increasingly integrated into business operations, the human rights implications of its development, customization, and deployment have come to the forefront. Companies must navigate the complex landscape of responsible AI to ensure that their technologies are fair and transparent and do not infringe on human rights. Corporate governance plays a critical role in overseeing these human rights considerations, providing the framework and accountability necessary to guide responsible AI development and deployment.
Responsible AI programs should seek to identify, prevent, and mitigate the range of adverse human rights impacts that AI systems can have. While a full human rights assessment should be completed for each AI product, companies often recognize the need to address the most salient issues across all product types. These include:
- Preventing Bias and Discrimination. AI systems can perpetuate biases present in their training data, or algorithmic bias can be inadvertently built into the system itself, leading to discriminatory outcomes in areas such as hiring, lending, promotions, image generation, online advertising, and predictive policing. Responsible, human rights-based oversight can help ensure that these biases are identified, prevented, and mitigated; a simple illustrative check appears in the sketch after this list.
- Protecting Privacy. With AI’s ability to process and analyze vast amounts of personal data, there is the potential for unauthorized access, misuse, and breaches of sensitive personal information. To prevent this, companies should extend their privacy programs to cover privacy risks related to AI. Responsible AI governance can help ensure that robust AI data protection measures are in place.
- Ensuring Accountability. Clear accountability mechanisms are necessary to address the consequences of AI decisions, particularly in critical sectors like healthcare and finance. Corporate governance structures can delineate responsibility and help ensure proper oversight.
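Identifying bias ultimately requires context-specific audits, but even a simple quantitative check can surface warning signs early. The sketch below is a minimal, hypothetical Python example that compares selection rates across two groups in a model’s outputs and reports the gap between them (a basic demographic parity check); the group labels, sample data, and any threshold for acting on the gap are illustrative assumptions rather than a prescribed methodology.

```python
# Minimal illustrative sketch (not a substitute for a full fairness audit):
# compare selection rates across groups in a model's recommendations to flag
# a possible disparate-impact signal worth deeper review.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / total[group] for group in total}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs, grouped by a protected attribute.
sample = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 35 + [("group_b", False)] * 65
)

print(selection_rates(sample))         # {'group_a': 0.6, 'group_b': 0.35}
print(demographic_parity_gap(sample))  # 0.25 -> flag for review and mitigation
```

A large gap does not by itself prove discrimination, but it is the kind of signal a responsible AI review process should catch and escalate for investigation.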
To address and prevent these risks, companies developing and deploying AI should put in place responsible AI governance structures. Each company’s governance structure will look different depending on its size, its relationship to the AI systems and tools in question, and the risks associated with the AI it develops and/or deploys. That said, through our work with leading AI developers and deployers, Article One has supported companies in establishing the following best practice approaches:
- Establishing Responsible AI Principles. Companies should develop comprehensive AI principles that outline the company’s approach to issues like fairness, transparency, and accountability. These guidelines serve as a foundation for all AI-related activities and help internal teams make informed decisions about the development and deployment of AI.
- Case Study: Google has established a set of AI principles to guide its development and use of AI technologies. These principles emphasize fairness, privacy, and accountability, and also outline the AI applications the company commits not to pursue.
- Creating an Internal Responsible AI Committee. An internal responsible AI review board or committee can oversee AI projects, ensuring they adhere to the company’s AI principles and guidelines. This committee should bring together diverse perspectives and include internal stakeholders who understand the range of salient risks related to the company’s use and/or deployment of AI, including product and engineering, human resources, legal, diversity and inclusion, human rights, and procurement.
- Case Study: IBM’s AI Ethics Board is a cross-disciplinary body established to “support a culture of ethical, responsible and trustworthy AI throughout the organization.” Its mission is to support a centralized governance, review, and decision-making process for IBM’s work with AI across all of its policies, products, research, and services.
- Case Study: Microsoft’s AETHER Committee brings together senior leaders and experts to discuss and address ethical challenges in AI. The committee reviews AI projects and advises on best practices for ethical AI development.
- Implementing AI Risk Assessments. AI risk assessments can help identify and address potential risks, including human rights risks, related to the development or deployment of an AI system or tool before they cause harm. These assessments can be integrated into existing risk assessment processes, such as legal and privacy reviews, or conducted as standalone responsible AI assessments. Either way, they can be framed around the company’s AI principles, focusing on the issue areas identified as most salient to the company.
- Case Study: Atlassian’s Responsible Technology Review Template translates its Responsible Technology Principles into standard practices for teams that develop or make decisions about technology. The template is organized around those principles and asks the person or team completing it to consider the potential unintended risks of their proposed product, their current planned mitigations, and where gaps may remain. The completed assessment is then reviewed according to the product’s risk level; a minimal illustrative sketch of this kind of template appears after this list.
- Promoting Transparency. Companies should be transparent about their AI practices, including how data is collected and used, the decision-making processes of AI systems, and the measures in place to protect users. Transparency builds trust with stakeholders and helps hold companies accountable.
- Case Study: Twilio offers AI Nutrition Fact Labels to operationalize its commitment to transparency and provide customers with the tools to make their own use of AI more transparent. The labels are meant to demystify for consumers how their data is being used and empower them to make informed decisions about which AI tools they want to adopt.
- Providing Employee Training. Employees in roles relevant to the use of AI should be trained on the company’s approach to responsible AI, including AI policies and processes. This ensures that everyone involved in AI development and deployment understands their responsibilities and the human rights implications of their work.
- Case Study: Google provides AI Principles training to help put its principles into practice. As of 2022, more than 32,000 employees had engaged in the training. Training options include a Tech Ethics self-study course, the Responsible Innovation Challenge – “a series of engaging online puzzles, quizzes and games to raise awareness of the AI Principles and measure employees’ retention of ethical concepts, such as avoiding unfair bias” – and the Moral Imagination Workshop.
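To make the risk assessment approach more concrete, the sketch below shows one way such a review template might be represented as structured data. It is a hypothetical illustration in Python: the field names, risk tiers, escalation rule, and example product are assumptions made for the sake of the example, not Atlassian’s actual template or any company’s required format.

```python
# Illustrative sketch only: a structured record loosely inspired by the kind of
# responsible AI review template described above. Field names, risk tiers, and
# the routing rule are hypothetical, not any company's actual process.
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class ResponsibleAIAssessment:
    product: str
    principle: str                       # which responsible AI principle is engaged
    potential_harms: list[str]           # unintended risks the team has identified
    planned_mitigations: list[str]       # mitigations already planned
    open_gaps: list[str] = field(default_factory=list)
    risk_level: RiskLevel = RiskLevel.LOW

    def needs_escalation(self) -> bool:
        # Hypothetical routing rule: high-risk or unmitigated items go to the
        # responsible AI committee for review before launch.
        return self.risk_level is RiskLevel.HIGH or bool(self.open_gaps)

assessment = ResponsibleAIAssessment(
    product="resume-screening assistant",
    principle="fairness",
    potential_harms=["disparate selection rates across demographic groups"],
    planned_mitigations=["bias testing on historical data", "human review of rejections"],
    open_gaps=["no post-launch monitoring plan yet"],
    risk_level=RiskLevel.MEDIUM,
)
print(assessment.needs_escalation())  # True: open gaps remain, so escalate
```

The value of a template like this lies less in the data structure than in the routine it creates: teams document risks and mitigations in a consistent form, and higher-risk or unresolved items are routed to the governance body described earlier.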
Corporate governance is essential in overseeing responsible AI, providing the structure and accountability needed to ensure rights-respecting AI practices. By establishing clear guidelines, conducting regular due diligence, promoting transparency, and fostering a culture of human rights awareness, companies can navigate the complex landscape of responsible AI and contribute to the development of technologies that benefit society while respecting human rights.
If you have questions or would like to learn more about how your organization can establish responsible AI development and deployment strategies, you can reach us at hello@articleoneadvisors.com.