It’s been a long-held mantra at Article One that all companies are technology companies, and stakeholders and regulators around the world increasingly agree. Societal and regulatory attention is focusing on how technologies, including artificial intelligence (AI) and machine learning, are deployed across a broad range of industries.
A recent New York City law, the Automated Employment Decision Tool Law (“AEDT”), for example, requires NYC employers that use AI tools in their recruitment processes to conduct an annual bias audit. The European Union is working toward comprehensive AI regulation that would categorize and govern “high-risk” uses of AI, including in critical infrastructure, employment, law enforcement, and the administration of justice. And the White House recently released its Blueprint for an AI Bill of Rights, which seeks to establish baseline protections for the use of automated systems in the United States.
These political and regulatory efforts will impact companies across a wide range of industries and shine a light on the ways in which technology use, if not done responsibly, can harm users and society. Taking employment as an example, 75% of US employers rely on AI technology to help screen job applicants, and an estimated 27 million US workers are filtered out of hiring processes by AI before any human review. Those filtered out are often caregivers, veterans, immigrants, people with disabilities, and people who were formerly incarcerated. In other words, three in four companies rely on a technology that can pose specific harms to vulnerable groups seeking employment.
Taken together, the AEDT law, the EU AI Act, the White House Blueprint, and countless other regulatory proposals signal that we are entering a new phase of expectations for the use of technology. All companies, whether in the tech sector or deploying its products, should be prepared to understand and interrogate the risks associated with their use of technology and AI.
We at Article One have worked with a wide range of companies to move toward more ethical and accountable use of technology. We support companies that build AI to do so in principled and transparent ways. We help strengthen company due diligence to ensure that technologies are deployed in ways that advance rather than undermine rights. And we facilitate collective learning so that no company or individual must navigate these pressing challenges in isolation. Through those engagements, we have learned the following lessons about preparing for emerging regulation in the technology space:
Participate in policy discussions. As policies are developed, companies should engage proactively to help shape regulations in ways that support innovation and protect all those potentially impacted by the development and deployment of technology. This requires deepening cross-functional collaboration across government affairs, legal, product, responsible innovation, and human rights teams.
Future-proof early and often. As regulation emerges around the world, companies should be prepared to track trends, divergence, and convergence. Robust gap analyses against existing and emerging expectations can reveal where current practices align or fall short. Where gaps exist, the structure of internal processes will determine success: companies should work with internal champions and external experts to develop management practices that are scalable and fit for purpose.
Go beyond compliance. Emerging regulation offers the possibility of greater predictability and transparency. Regulation should be used as a tool to promote rights-compatible approaches to the development and use of AI technology, rather than treated as a check-the-box compliance exercise.
There is much to build on. Companies across industries are already developing approaches to ensure the ethical and responsible use of technology. Companies that use or rely on AI, whether for hiring, marketing, safety, or security, increasingly must ensure the technology has been developed and will be deployed responsibly. Implementing the steps outlined above will help lay a solid foundation.
As regulation increasingly demands that companies understand and assess how AI and new technologies are used across value chains, we are ready to help. Please reach out at email@example.com.