Why Human Rights?

THE HUMAN RIGHTS LENS FOR RESPONSIBLE AI

The human rights framework is a powerful tool to promote the development and deployment of responsible AI technologies. As a global normative framework, human rights offer the following benefits:

  1. Human Rights Defines the Universe of Harms: AI can negatively impact society in new ways. However, the harms caused by these new technologies are not themselves new: they affect the same rights outlined in the international human rights framework. For example, AI can infringe on privacy in novel ways, but the fundamental right to privacy is already enshrined in international law.
  2. Human Rights is Globally Accepted: The human rights framework already exists and is widely agreed upon. It’s “holistic, detailed and predictable,” accounting for all aspects of human well-being and providing guidance on the interdependence of the state duty to protect human rights and businesses’ responsibility to respect them.1 Unlike ethics, which is subjective and constantly being redefined, human rights is standardized and clearly articulated. This gives companies a shared language that facilitates understanding and establishes not only a road map for action, but also a moral compass.
  3. Human Rights Includes a State Duty: A geopolitical system that is rights-respecting provides a safer ecosystem in which AI can flourish.

On this last point, the former High Commissioner for Human Rights, Zeid Ra’ad Al Hussein, used the example of flight to illustrate the need for a rights-respecting world. When flight was first invented, it allowed people to travel to foreign and distant places, bringing the world closer together. When World War II broke out, however, flight became the means by which the atomic bomb was delivered. The same technology produced vastly different outcomes.

As the High Commissioner wrote: 

When the global order is improving — when there is peace and prosperity, liberal democracies are expanding, repression is withering away, and human rights are being honored — chances are technology will generally be put to good use. If the situation is the opposite — when liberal democracies are engulfed by war — technology will become a partner of bad intentions.

— Zeid Ra’ad Al Hussein, The Washington Post, 2019

The human rights framework places the human at the center of decision-making. As Professor Stuart Russell, a computer scientist at the University of California, Berkeley, has argued, we need to build AI that exists not to achieve the AI’s objectives, but to achieve our objectives as the human race. And when we decide what those objectives are, human dignity must remain front and center.

CONTRIBUTING TO OUR COMMUNITY

This website is a growing resource where practitioners can learn, through case studies and examples, how leading companies are currently addressing human rights impacts related to AI. To contribute your case study or work, please contact us at hello@articleoneadvisors.com.