Five Questions for Responsible Innovation Leaders: Google’s Unni Nair

February 15, 2023


 

By Sarah Ryan

The next installment in our series “Five Questions for Responsible Innovation Leaders” profiles Unni Nair, Senior Research Strategist on Google’s Responsible Innovation team. Unni has a deep background in the sustainability and supply chain fields, and he brings the experience and best practices from that work to helping Google develop artificial intelligence (AI) responsibly. In this interview, Unni discusses how Responsible Innovation principles can be used to create criteria for measuring the impact of responsible innovation work. He also offers advice to people hoping to start their own responsible innovation programs, including the use of external reporting frameworks to ensure transparency and accountability.

 

1. Welcome to Article One’s Interview Series! You have a sustainability background. How did you translate that into a role in responsible innovation? What is it about ethical or responsible innovation that has captured your imagination?                 

Sustainability runs deep within the DNA of my family’s history of careers and purpose, so there was never any question about the direction of my profession. As this newly formed field of Responsible Innovation, which combines my professional background in social and environmental responsibility and tech, started to take shape, it just seemed like the natural evolution of my career.   

Before I joined the Responsible Innovation team at Google, where I focus on supporting the responsible development of artificial intelligence (AI), I spent more than a decade working in Sustainability Advocacy, Policy, and Operations across multiple sectors. In my previous roles, I tackled challenging topics ranging from modern forms of slavery to the dystopian environmental impacts of over-industrialization in cities around the world. These were deeply moving experiences that were often addressed with reactive, tactical, and analog solutions in a race to stop the worst impacts. It could be discouraging at times and drove me to search for better approaches. Fortunately, I had the privilege of learning from great mentors who helped me understand that many root-cause issues are enabled by a web of technologies and complex systems that increasingly impact, often unintentionally, society’s ability to meet sustainable development needs.

Looking back, in many ways I was working on Responsible Innovation before it was a career. I saw the rapid advances in technology as an opportunity to bring new ways to solve entrenched problems in the Sustainability field. After learning about emerging spaces like the Internet of Things (IoT) and Artificial Intelligence (AI), I taught myself to code in Python. During my Supply Chain Sustainability career, my team and I built Machine Learning (ML)-based chatbots to create process efficiencies in managing environmental and human rights impacts. Unfortunately, back in 2017, the vision of using AI for social good at scale was ahead of its time, which led me to realize that this technology revolution was still closed off to most people outside of Silicon Valley.

In 2018, Google’s CEO was one of the few executives publicly evangelizing the importance of AI to society and the need to build this technology responsibly through the company’s AI Principles.  Google was already a leader in Sustainability, so seeing such a public, ethical commitment to advanced technology was a clear signal that joining the Responsible Innovation team would be a profound opportunity to scale my impact. 

My hope is that these technological advances help to eliminate the worst forms of human and environmental exploitation, but doing so will require applying many of the lessons learned from the Sustainability field.  Responsible Innovation has a major role to play in helping to reimagine our relationship to technology and consumption, which, in combination, is increasingly where end use impacts will occur.  We have a unique opportunity to influence how increasingly intelligent technology will drive society toward (or away from) a more sustainable future – and that is both humbling and very exciting.    

 

2. How can people inside a company make the business case for investing in responsible innovation?  

In the next few years there will be a lot of trial and error and reinvention of business models that will eventually have a positive financial impact for companies that thoughtfully integrate AI into their strategies. AI and related technology could lead to lower costs of goods and services through greater efficiency, accuracy, and productivity.

From startups to large multinational corporations, using increasingly sophisticated AI will require safe and responsible guardrails around technological exploration, or it will become a risk to business, society, and the environment. As was the case with Sustainability, the companies that proactively and responsibly manage their impacts will differentiate themselves in the market and earn public trust.

Everyone within a company – from the executive leadership to coders, marketers, and every role in between – can become a champion of AI ethics by taking accountability within their own job for becoming familiar with Responsible Innovation principles and making sure their interactions with technology reflect social and ethical values. Increasingly, we will see AI touch most areas of an organization.

 

3. How can we measure how ethical or responsible a technology can be?  

It is hard to forecast with granular certainty how ethical a technology is in the real world, but at Google there are categories of harm, informed by our AI Principles, to which we can ascribe certain levels of risk in a given use case. A lot of research and diverse expert thinking has gone into forming our Principles and conducting holistic reviews as a way to measure the responsibility of any one technology or solution, and this work is ever-evolving. It is imperative that companies start by establishing ethical principles and criteria for the technologies they build and adopt, especially as systems grow and become more complex – sometimes evolving in ways that humans have yet to understand.

There’s also an opportunity to leverage existing approaches such as enterprise risk management frameworks. Measuring how ethical or responsible a technology is can be tricky given the scale and variability of AI, but as the Responsible Innovation field matures and sets precedents, it will become easier to develop impact-based metrics rather than effort-based ones.

What makes Responsible Innovation unique and exciting is that we not only have to consider risks of harm, but also anticipate future social benefits if or when a use case is deployed in the real world. To do so, we also need to weigh the risks of doing nothing at all, and how a harmful status quo could hold back societal progress.

AI and Responsible Innovation – and technology as a whole – must play a much larger role in achieving measurable progress against internationally accepted frameworks such as the United Nations Sustainable Development Goals (SDGs).  Producing measures of success beyond profits is critical to sharing the benefits of AI.  

 

4. If you were giving advice to someone building a responsible or ethical innovation program today – especially at a non-tech company — what are the 2-3 things you would guide them to do first to ensure a successful program?             

First, this work is imperative, so I would applaud their efforts. There are strategic, regulatory, and ethical reasons to track and assess the potential real-world impacts of AI, covering everything from ML models to datasets. Rigorous documentation is critical to avoid “black box” situations in the future as organizations go from experimentation to launch. As AI becomes more sophisticated, documenting ethical review practices can enable safe and responsible exploration across any industry or company.

Second, product requirement documents should include ethics considerations which, where possible, should reflect feedback from a diverse set of multidisciplinary experts, with impacts tracked and measured over time. Conduct an ethics analysis of development and launch roadmaps to understand where current or future risks to users and society might occur. Proactive assessments early in the product lifecycle could help you avoid your highest risks and identify low-hanging opportunities to ethically align business and consumer needs with regulatory and societal expectations.

Lastly, larger companies should consider reporting externally on their Responsible Innovation practices, while ensuring claims can be backed up with data to avoid the “greenwashing”-like phenomenon we saw in the Sustainability field. “AI ethics washing” won’t be a luxury companies can afford, given how quickly real-world impacts and disruption can occur with this technology. Internal evaluation and reporting can be a good starting point for documenting progress if a company is not yet ready to report publicly. Reporting can serve as a good indicator across the organization, for technical and non-technical teams alike, and can also inform leadership in making critical decisions. Above all, due diligence, transparency, and authentic leadership will be key.

 

5. Where do you see the field of Responsible Innovation five or ten years down the line? 

Public understanding of AI today seems similar to awareness of Sustainability issues a decade ago, but that is rapidly changing. Corporate accountability has come a long way, due in large part to sustainability, but it will become a societal priority with the exponential impacts of AI. In an AI-driven world, the longer-term question will increasingly be how a business delivers holistic value to society. Companies will be held accountable for spreading AI’s benefits equitably, with broader sustainable development priorities in mind.

What is happening with technology and technology-enabled solutions and products is exciting, and the pace of development is only going to get faster. AI has suddenly been opened up for public discovery and experimentation in ways that didn’t seem possible even a few months ago. Responsible Innovation will not just be important to large tech companies; it must be integrated into the mindset of every individual experimenting or producing with AI. As barriers to usage continue to fall, hopefully a sense of agency and responsibility for the greater good becomes mainstream.

This is our opportunity to reimagine the role of business and technology in dealing with the same macro factors – resource scarcity, social inequities, and the existential threats to humanity from climate change – that make the Sustainability field so important. Even if one business is less reliant on advanced AI technology, chances are that many of the businesses it interacts with, e.g., suppliers, customers, and users, are leveraging AI. We may eventually see Responsible Innovation spread across value chains, similar to how we measure greenhouse gases and environmental impacts in scopes beyond a single company’s operations. Ultimately, every individual will be empowered through technology like never before, with the tools to transform society toward a more just, humane, and truly sustainable future.