
Businesses Should Think Twice About Developing and Deploying Emotion Recognition

August 23, 2023


By Ilse Heine

Imagine sitting down for your next job interview and finding that your interviewer is a camera. This is becoming a reality for some people as companies race to develop and deploy emotion recognition technologies, a rapidly developing branch of artificial intelligence. These technologies purportedly detect "micro-expressions" in the face and map them to "true feelings." Depending on how the technology is used, however, it can lead to flawed and harmful outcomes. In this blog, we describe what emotion recognition is, the risks it poses to human rights, and how, or even whether, those risks can be mitigated.

What is emotion recognition?

According to the AI Now Institute, a research institute focused on AI, emotion or affect recognition "aims to interpret faces to automatically detect inner emotional states or even hidden intentions." The technology aims to analyze hundreds of thousands of images of faces, detect "micro-expressions," and map these expressions to "true feelings." The field dates to at least 1995, when MIT Media Lab professor Rosalind Picard published "Affective Computing."

In short, emotion recognition leverages machine learning, deep learning, computer vision, and other technologies to recognize emotions based on object and motion detection. Emotion recognition may analyze not only facial expressions but also speech, text, physiological signals, and behavioral patterns.

At a high level, a paper published by the European Data Protection Supervisor (EDPS) identifies the steps that facial emotion recognition goes through as follows: a) face detection, b) facial expression detection, and c) classification of the expression to an emotional state. Depending on the algorithm, facial expressions can be classified into basic emotions (e.g., anger, disgust, fear, joy, sadness) or compound emotions (e.g., happily surprised, sadly angry). In other cases, facial expressions can be linked to a physiological or mental state (e.g., tiredness or excitement).
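To make these three steps concrete, here is a minimal Python sketch of such a pipeline. It assumes OpenCV's bundled Haar-cascade face detector and a hypothetical pre-trained expression classifier; the file name "emotion_classifier.h5" and the label list are placeholders for illustration only, not a reference to any specific product or dataset.

```python
# Minimal sketch of the three-stage pipeline described above:
# (a) face detection, (b) facial expression detection, (c) classification
# of the expression into a basic emotion label.
# The classifier file and label set below are hypothetical placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

BASIC_EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

# (a) Face detection with OpenCV's bundled Haar cascade.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# (b)/(c) A pre-trained expression classifier (placeholder path).
emotion_model = load_model("emotion_classifier.h5")

def classify_emotions(image_path: str) -> list[tuple[tuple[int, int, int, int], str]]:
    """Return (bounding box, predicted emotion label) pairs for each detected face."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Crop the detected face and normalize it to the classifier's input size.
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        face = face.astype("float32")[np.newaxis, :, :, np.newaxis] / 255.0
        # Score each emotion label and keep the most likely one.
        scores = emotion_model.predict(face, verbose=0)[0]
        results.append(((x, y, w, h), BASIC_EMOTIONS[int(np.argmax(scores))]))
    return results
```

Note that the final step simply maps pixel patterns to a fixed list of labels; nothing in the pipeline establishes whether those labels reflect what the person actually feels, which is precisely the gap critics highlight below.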

Current use and proposed regulation

The European Parliament recently advanced a piece of legislation, the Artificial Intelligence (AI) Act, which will have major consequences for the legal landscape around artificial intelligence. The Act, which is expected to be finalized by the end of this year, classifies higher-risk use cases and establishes risk-management mechanisms for particular uses of AI that threaten the "fundamental rights" recognized by the Union. One use of AI that the European Parliament's draft text recommends banning completely is emotion recognition. However, its prohibition is far from certain: the texts from the Council of the European Union and the European Commission (the two other inputs to the 'trilogue') do not prohibit emotion recognition.

The science behind emotion recognition systems is controversial: its validity has been heavily scrutinized, and the technology raises several ethical concerns. Most recently, 25 rights groups sent a joint letter to Zoom CEO Eric Yuan urging the company to halt further research into emotion-based AI. In response to public criticism, some companies are pulling back. For instance, Microsoft no longer offers general access to cloud-based AI software for inferring people's emotions, though it retains the capability in an app used by people with vision loss. Additionally, HireVue, a video interview and assessment vendor, removed the facial analysis component from its applicant screening assessment.

Nevertheless, the market for this technology continues to grow, and, according to one analysis, emotion recognition "is increasingly becoming part of the core infrastructure of many platforms." Today, the market for emotion-detection technology is worth roughly $21.6 billion, and its value is predicted to more than double by 2024. A range of industries are already using this technology, including advertising, marketing, retail, education, policing, employment, and insurance.

Risks of emotion recognition

One of the biggest risks of emotion recognition is bias and discrimination. For instance, one study found that emotion analysis technology assigns more negative emotions to people of certain ethnicities than to others. This could have significant ramifications if, for example, an algorithm identifies an individual as exhibiting negative emotions, with consequences for their career or for their access to essential services or support, such as healthcare or insurance. An insurance company might, for instance, use emotion recognition technology to detect signs of neurological disorders and deny coverage for such pre-existing conditions.

The technology can also harm people who are neurodivergent, people who are hearing impaired, and other groups whose emotional expression has been stereotyped.

Aside from inherent bias, the technology also affects people's right to privacy and other human rights, such as the right to personal liberty and security, depending on how it is used. For instance, emotion recognition-enabled cameras have been installed in Xinjiang, China, where an estimated one million mostly Uyghur Muslims are being held in detention camps. According to Financial Times reporting, the technology is being deployed at customs in Xinjiang to "rapidly identify criminal suspects by analyzing their mental state," and BBC reporting claims that such systems have been installed in police stations. In the United Arab Emirates, cameras are being used to detect people's facial expressions and gauge the general mood of the population, a project initiated by the country's Ministry of Happiness. Beyond government and law enforcement, private companies are also using the technology, for instance to detect the emotions of prospective buyers or to screen candidates for jobs.

Risk mitigation

While the risks of some innovations and technologies can be mitigated, critics of emotion recognition argue that the technology itself rests on unscientific assumptions and is therefore neither reliable nor trustworthy. Specifically, while it may be able to decode facial expressions, those expressions do not necessarily reveal what a person is thinking or what they plan to do next. The way people express themselves is highly nuanced, contextual, and varies across cultures: a smile can mean one thing in one culture or country and something else in another. A two-year review of 1,000 papers on emotion detection, published by the Association for Psychological Science, concluded that it is very difficult to accurately tell how someone is feeling from facial expressions alone.

Europe is leading the way in attempting to ensure that emotion recognition technologies cannot cause harm in key areas. However, experience shows that demand for this technology will persist. It is therefore critical for businesses to assess whether developing and/or using this technology causes more harm than good. Indeed, a company's AI or RI principles are a key framework on which to base this decision.

If you have questions or would like to learn more about how your organization can build out AI and RI principles, such as through participation in our Business Roundtable on Human Rights and AI, you can reach us at hello@articleoneadvisors.com.