Kathy Baxter

Five Questions for Responsible Innovation Leaders: Salesforce’s Kathy Baxter

January 11, 2023

By Chloe Poynton

Today, we’re excited to launch our new series: Five Questions for Responsible Innovation Leaders. Over the next few months, we’ll be profiling leaders from business, civil society, and government who are working to advance responsible innovation. The series is designed to spark discussion within this emerging field, showcase best practices, and share lessons learned.

Our first profile is of Kathy Baxter. As Salesforce’s Principal Architect for Ethical AI, Kathy has helped establish Salesforce as a leader in the responsible design and deployment of technology. Below, she tells us about the moment she realized the potential of applying psychological and physiological principles to the engineering and design of products and systems, and about the lessons she has learned building Salesforce’s Ethical AI program.

  1. Welcome to Article One’s Interview Series! You have a background in both psychology and engineering – a very unusual combination – and have spent your career at some of the world’s leading tech companies. What is it about ethical or responsible innovation that has captured your imagination?

I tried a number of engineering majors at Georgia Tech (e.g., mechanical engineering, electrical engineering, ceramic engineering) and was terrible at all of them! I loved my psychology classes, but I had no idea what kind of career I could make with a psychology degree. One day my Industrial Psychology professor asked who my advisor was and why he never saw me around the psychology department. I burst into tears about my inability to find the right major. He asked what I liked about engineering that kept me pursuing it, and I said it was the aspect of building or creating something: I wanted to make things for people. He then told me about Human Factors Engineering (HFE), which applies psychological and physiological principles to the engineering and design of products and systems. I immediately knew that THIS was what I wanted to do! Both my undergraduate program in Applied Psychology and my graduate studies in HFE placed a heavy emphasis on research ethics and on putting humans at the center of your work. That emphasis has guided the user research work I’ve done over the last 20+ years and now the responsible AI work I do today. At Salesforce, I am able to work collaboratively across the Office of Ethical and Humane Use, Product Management, Engineering, UX Research and Design, Legal, Privacy, and Marketing to think holistically about our customers’ experiences.

  2. What does Ethical AI mean to you?

“Ethical AI” is the framework or lens you apply to a problem or system to determine whether something is harmful or fair. When the topic of ethical AI comes up and someone asks, “Whose ethics?”, I say, “Exactly!” We have to be clear about the lens or framework we are using. It could be a philosophical or ethical lens like deontology or virtue ethics; someone might say they are a “strict Kantian.” At Salesforce, we reference the UN Guiding Principles on Business and Human Rights, since they are well established and largely agreed upon by governments around the world. So our first Trusted AI Principle is Responsible: We believe that AI should safeguard human rights and protect the data we are entrusted with.

“Ethical AI” is often used interchangeably with “Responsible AI.” We usually refer to “Responsible AI” as the process you use to build AI in line with the ethical framework or principles you have adopted. So it is the process of turning principles into practice: actions like conducting consequence scanning workshops, running bias assessments, doing ethical red teaming, and publishing model cards.

  3. How can we measure how ethical a technology can be?

“Can be” implies a hypothetical or a goal you are trying to achieve. No dataset or model can ever be 100% bias-free, just as no human can be 100% bias-free. The best you can do is to state the kinds of bias you looked for, how you measured them (e.g., equal opportunity, equal treatment) and for which groups, and how you attempted to mitigate the bias itself or the harms the bias creates.
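
To make the measurement piece concrete, here is a minimal sketch of one such check, an equal opportunity comparison of true positive rates between two groups. The function names and data below are hypothetical and purely illustrative; they are not Salesforce’s tooling or any particular library’s API.

```python
# Hypothetical sketch of an "equal opportunity" bias check: compare the
# true positive rate (TPR) across groups. All names and data are illustrative.

def true_positive_rate(y_true, y_pred):
    """Share of actual positives (label 1) that the model predicted as 1."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else float("nan")

def equal_opportunity_gap(y_true, y_pred, groups, group_a, group_b):
    """TPR(group_a) - TPR(group_b); 0 means parity on this one metric."""
    def tpr_for(group):
        pairs = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group]
        return true_positive_rate([t for t, _ in pairs], [p for _, p in pairs])
    return tpr_for(group_a) - tpr_for(group_b)

# Illustrative labels and predictions (1 = the favorable outcome).
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(f"Equal opportunity gap (a vs. b): {equal_opportunity_gap(y_true, y_pred, groups, 'a', 'b'):.2f}")
```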

  4. If you were giving advice to someone building a responsible or ethical innovation program today, what are the 2-3 things you would guide them to do first to ensure a successful program?

We have published our Ethical AI Maturity Model, which provides a roadmap for building an ethical AI practice. It is based on my own experience at Salesforce, but I have also validated it with peers at companies that have had responsible AI organizations for years.

Patrick Hudson is an internationally recognized safety expert who has published the aspects required of a mature safety culture, and I argue that these same aspects are required for a mature responsible AI culture. First and foremost, you need leaders who aren’t afraid to do the right thing, even when it’s difficult and no one else is doing it. They are critical in establishing the incentives and consequences for acting responsibly. Second, the individuals in your organization must be respected, as must the dangers they face (for example, the risk that drivers rely on AI that directs them to follow routes that are not physically possible). The culture needs to ensure that experts are listened to, even when they’re low in the hierarchy. Managers must know what is really going on, which means the workforce must be willing to report its own errors and near misses without fear. This is the importance of mindfulness: everyone should be wary and always ready for the unexpected. When you develop the mindset that this is someone else’s problem, or the hubris that you have all of the safeguards in place so nothing bad can happen, that is exactly when bad things happen.

  5. Where do you see the field of responsible innovation five or ten years down the line?

We are currently where the cybersecurity industry was in the 1980s. Until the internet came along, there was no such industry, but then the first malware appeared and suddenly companies had to figure out a new way of securing their online products, services, and data. There were no standards, best practices, or regulations. Today, we are developing best practices and rapidly moving toward standards and regulations. In five years, I expect that we will have robust standards and global regulations. Like cybersecurity, this will be a constantly evolving field, combating bad actors and mitigating unintended harms.

To learn more and explore ways your company can build a responsible innovation program, get in touch with us at hello@articleoneadvisors.com.