
Can Regulation Catch up to AI?

February 23, 2023


By Peter Chapman

Hardly a day goes by without a front-page news story discussing how artificial intelligence is changing our world. Stories tell us that AI will transform business, real estate, law, health care, education, and the list goes on.

Increasingly, these stories are mirrored by reports on the urgent need for oversight and regulation. Soon after ChatGPT, the viral chatbot from OpenAI, was released in late 2022, Congressman Ted Lieu penned an op-ed suggesting that a dystopian future awaits without scrutiny of AI technologies. Google rushed to release its own chatbot, Bard, following ChatGPT; Google's valuation promptly fell by $100 billion after the demo made mistakes. Microsoft's Bing chatbot release did not go much better, with reports suggesting it "went off the rails." And that was before the chatbot told a New York Times reporter it loved him.

Within this rapidly evolving landscape, AI companies are raising calls for participation, oversight and, in some cases, regulation. TIME recently ran an interview with the chief technology officer of OpenAI in which she called on regulators, governments, and broader stakeholders to get more involved in shaping the future of AI. Microsoft has "long supported" a clearer regulatory landscape for AI.

More oversight and broader participation in decision making around AI are clear goals, but what do they mean in practice?

Expanding opportunities for collaboration between companies, rightsholders, and regulators is critical. In early February I joined a fascinating conference hosted by William & Mary Law School, "Problematic AI: Finding the Best Way Forward." The event brought together an important mix of companies, academics, regulators, and civil society representatives to discuss how companies and policymakers are currently managing risk, what gaps remain, and which strategies are best suited to address them.

The conversation revealed some fascinating lessons and tensions.  

First, participants called for more sophistication in how we understand and define "AI." Currently, many different technologies and approaches are lumped under the AI umbrella. With the publication of the National Institute of Standards and Technology (NIST) AI Risk Management Framework in February and the White House Blueprint for an AI Bill of Rights, some have pointed to inconsistencies in how AI is defined across the two frameworks. The proposed EU AI Act takes yet another approach. The definitions of related tools within these frameworks also differ, with one conference participant noting variation in how "audit" is defined. These definitions matter, both so that companies can align their processes with expectations and so that stakeholders can meaningfully participate in decision making.

Second, participants discussed how regulation and standards are coalescing around a risk-based approach to AI oversight. While not always explicitly linked to the UN Guiding Principles on Business and Human Rights, proposed standards largely align with the UNGP approach: companies are to assess risk based on both likelihood and severity, with severity understood in terms of scale, scope, and remediability. Emerging standards and proposals to regulate the development and use of AI advance this risk-based analysis. The NIST Framework, for example, defines risk as a "composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event." The proposed EU AI Act would introduce a system for classifying AI risk based on the severity and probability of harm, and key stakeholders are pushing for the Act to more firmly establish a commitment to human rights due diligence from design to end use. Companies, regulators, and civil society need to enable a more robust and deeper discussion of how to interpret risk across countries and within the communities where these technologies will be used.

Finally, participants from companies, academia, and civil society all pointed to the importance of inclusivity in design and implementation. A handful of individuals at a relatively small number of companies are making tremendously consequential decisions about the design and supervision of technologies that will affect many dimensions of our society. How can companies, regulators, and civil society ensure that diverse stakeholders and rightsholders are at the table? Several companies shared how they are designing their systems of governance. We at Article One shared our experience working with a range of companies to design and build systems of AI governance, from the development of Ethical AI principles to responsible use. Building sustained partnerships, at scale, will be vital for ensuring that the companies at the forefront of building these technologies can do so in inclusive and rights-advancing ways.

To learn more and explore steps your company can take to advance an inclusive and responsible approach to AI, you can reach us at hello@articleoneadvisors.com.