Why Responsible Innovation Principles Are Important

March 15, 2023



By Sarah Ryan

Here at Article One, we help companies develop, launch, and manage their Responsible Innovation programs. In doing so, we recognize that responsible approaches to product development benefit both people and business. A key part of an effective responsible innovation program is the set of principles that guides its focus and strategy. These holistic principles help companies set a north star in their development of products that "do no harm" – or mitigate as much harm as possible.  

Responsible Innovation Principles (RI Principles) build on the efforts of many companies to set ethical principles for the design and use of artificial intelligence (AI). Ethical AI principles created by companies like Google, Microsoft, and IBM are helping to guide the industry toward a more responsible approach to artificial intelligence. Unlike Ethical AI Principles, however, RI Principles are designed to span all types of technologies, not just AI. These principles recognize that all technologies can have unintended consequences for users, bystanders, and society, and that it is the responsibility of the companies creating those technologies to identify and mitigate those risks. Salesforce, for example, has created a set of principles called "Our Guiding Principles" that applies to all the products it creates. The company used these principles to build a framework for answering tough questions about its technologies and their potential uses, and it is able to make more informed, strategic, and consistent decisions because of this central set of principles.  

While the practice is relatively new, more and more companies are launching their own RI Principles because they understand the benefits these principles can bring, including:  

  1. Providing benefits to users (and the company). Principles like accessibility, safety, and inclusivity can provide real benefits to users by expanding the range of users engineers explicitly design for. This can make products usable by people who might not otherwise have used them, regardless of their age, digital literacy, skin color, hair type, gender, etc. Meta’s Responsible Innovation Principles, for example, commit the company to designing hardware inclusively, rather than one-size-fits-all. This means that Meta designs its VR headsets to be accessible to people with different sight and movement abilities, allowing more people to buy and use its hardware.  
  2. Offering guidance to teams in cases of trade-offs. Most product teams have internal objectives like revenue generation or monthly active users. However, when products designed to meet those goals have the potential to cause harm to people, RI Principles help teams navigate the trade-off: either change the goals or redesign the product to reduce risk. Consider a new feature that would incentivize people to spend more time in-app but could create safety risks if they are encouraged to use the app while driving. A principle focused on safety would help teams prioritize redesigning the feature to discourage product use while driving, maximizing user (and bystander) safety.  
  3. Demonstrating the company’s commitment to responsible design to external stakeholders, including users, regulators, civil society, and the media. This can be especially effective when the responsible innovation principles are coupled with details on how they are operationalized internally, including how they are baked into the product development lifecycle. Google, for example, has published its AI Ethics review process along with articles on how it implements its AI Ethics Principles, underscoring that these principles are deeply embedded in the company’s operations and not just for show.  
  4. Providing a blueprint for regulation. Companies that take the lead on publishing external RI Principles have the ability to inform industry standards and future regulatory requirements. We saw this with the European Union AI Act, a proposed regulation to introduce a legal and regulatory framework for AI. The development of the AI Act was deeply informed by the standards many tech companies had already set for themselves on the development and use of AI. Many of the requirements for companies written into the act (like being transparent about how their AI is developed, making it explainable to those it impacts, respecting human rights in AI development, and ensuring their systems are overseen and reviewed by humans) echo some of the most common themes in companies’ own Ethical AI Principles. Indeed, future regulation on product development may look to RI Principles for guidance.  

While this work is new, it is also growing. We are very excited to see companies start to apply a human rights lens to their product development lifecycle to ensure that the products themselves are rights-respecting for all users, bystanders, and society.  

If your company is interested in developing Responsible Innovation or Ethical AI principles, or is interested in developing a responsible innovation program, please reach out at