The trouble with emotion-reading AI

“If you can’t measure it, you can’t fix it.” 

That’s a common saying in business, and it tends to be true. But what if the thing you want to fix is your employees’ attitudes? 

The AI revolution promises to make it possible to measure emotions and mental states. So why not use it widely and fix what’s broken? 

That’s the idea behind emotion AI, which is also called “affective computing,” “sentiment analysis,” or “algorithmic affect management.” The idea is to use sensors and AI to detect, interpret, classify, and act upon human emotions in the workplace. 

Thanks to breakthroughs in a wide range of technologies (including computer vision, natural language processing, speech and voice analysis, biometrics, machine learning and deep learning, and edge computing hardware), emotion AI is now possible. 

Many companies now offer ready-to-use emotion AI solutions, including Cogito, Affectiva, Hume AI, Entropik, and HireVue.

The idea is simple: collect data from employees, process it through AI, and get a result that shows how an employee feels. Depending on the solution, the data comes from the following sources (a sketch of the text-based approach follows the list): 

  • Vocal features — pitch, tone, cadence, micro-pauses, vocal stress
  • Facial expression — computer vision analysis of video calls and desktop-camera feeds
  • Text — mass sentiment analysis on emails, Slack/Teams messages, survey responses, and performance reviews
  • Physiological biosignals — heart rate variability, galvanic skin response (via wearables)
  • Behavioral telemetry — keystroke cadence, mouse dynamics, app-switching patterns
  • Posture and gaze — computer vision analysis from cameras installed in workplaces
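
To make that pipeline concrete, here’s a minimal sketch of the text-based variant, using an off-the-shelf sentiment classifier from the Hugging Face transformers library as a stand-in for a commercial product. The sample messages and the flagging threshold are invented for illustration, not drawn from any vendor’s system.

```python
# Minimal sketch of text-based emotion AI: score workplace messages
# with an off-the-shelf sentiment classifier. Illustrative only; the
# messages and the flagging threshold are invented for this example.
from transformers import pipeline

# Downloads a default pretrained sentiment model on first run.
classifier = pipeline("sentiment-analysis")

messages = [
    "Happy to pick this ticket up, should be done by Friday.",
    "I can't keep covering two roles, this workload is unsustainable.",
]

for msg, result in zip(messages, classifier(messages)):
    # Each result is a dict like {"label": "NEGATIVE", "score": 0.99}.
    flagged = result["label"] == "NEGATIVE" and result["score"] > 0.9
    print(f"{result['label']:>8} ({result['score']:.2f}) "
          f"{('FLAG' if flagged else 'ok'):>4}  {msg}")
```

Even this toy version shows the core problem critics point to: the model confidently assigns a label to every message, whether or not that label reflects what the writer actually felt.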

Despite the progress and variety of solutions, this whole area is problematic for businesses. 

Why companies want to use emotion AI

The range of business goals driving emotion AI is vast. The most defensible reason is safety. Workers in risky jobs, such as factory workers and truck drivers, could be protected with AI tools that help avoid injury and death. A common example is technology that detects when a truck driver is dozing off and either sounds an alarm or switches to autopilot to take control of the truck and pull over. 
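
For a sense of how the drowsiness-detection use case works under the hood, here’s a minimal sketch of one widely cited signal, the eye aspect ratio (EAR) computed from facial landmarks. The threshold and frame-count values are illustrative assumptions, not figures from any shipping product, and a real system would first need a landmark detector to produce the eye coordinates.

```python
# Minimal sketch of a common drowsiness signal: the eye aspect ratio
# (EAR) computed from six eye landmarks per frame. The threshold and
# frame count below are illustrative assumptions, not production values.
import math

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """eye = [p1..p6], the six landmarks around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward 0
    as the eyelid closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.21   # assumed: below this, the eye counts as closed
CLOSED_FRAMES = 48     # assumed: ~2 seconds at 24 fps triggers an alarm

def monitor(frames_of_landmarks) -> bool:
    """Return True (sound the alarm) if the eye stays closed too long."""
    closed = 0
    for eye in frames_of_landmarks:
        closed = closed + 1 if eye_aspect_ratio(eye) < EAR_THRESHOLD else 0
        if closed >= CLOSED_FRAMES:
            return True
    return False
```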

Another goal is better customer service. Companies like MetLife use software that monitors call center agents’ voice, tone, and pitch to make sure they don’t get snippy or express frustration with customers. 

HR departments could use AI to understand the workplace mood by analyzing company communications and employee surveys. Companies can also check for employee burnout and use the technology for hiring. By applying emotion AI to a video job interview, companies might make better hires. 

Emotion AI in the workplace can offer other benefits such as lowering employee turnover, healthcare expenses, and safety risks while boosting customer satisfaction, worker productivity, and insight into team or managerial dysfunction.

What’s wrong with emotion AI

While measuring, then acting upon, the emotions and mental states of employees sounds like a powerful idea, it’s often based on bad science. 

Emotion AI systems that lean on facial expressions, for example, are based on a theory by Paul Ekman, an American psychologist at the University of California, San Francisco. He theorized back in the late 1960s that a small set of basic human emotions produces universal, reliably readable facial expressions across cultures.

But Ekman’s theory was shown to be problematic by a 2019 meta-analysis led by Lisa Feldman Barrett, in an article published in Psychological Science in the Public Interest. She looked at more than 1,000 studies and concluded that you can’t always reliably infer people’s emotional states from facial movements alone. 

Most emotion AI solutions are based on the assumption that everyone’s emotions can be interpreted the same way, and that’s almost certainly wrong, given how different people can be in appearance, voice, personality and physiology. 

As in many areas of business and leadership in recent years, AI is often seen as a solution to the challenges of managing large numbers of employees. 

Emotion AI holds out the promise that leaders can bypass the need to inspire, motivate and educate employees so that their actions are aligned with company goals, and instead try to achieve this alignment through hyper-surveillance. 

But that’s unfair, say some emotion AI supporters: many organizations deploy these systems with the stated goal of helping employees. Research suggests, though, that even well-intentioned deployments can backfire. 

A 2024 Finnish case study found that workplace emotion-tracking technology tends to undermine wellbeing more than support it, for several reasons. First, the technology often simply doesn’t work: it claims to identify mental states like “stressed” or “engaged,” but those labels turn out not to faithfully reflect people’s actual internal states. 

Second, the quality of emotion AI output often varies by race. The study found that Black people’s faces were wrongly labeled as “angry” or “contemptuous” more often, even when they showed the same facial expressions as white participants. That’s just one example of the bias that can result from treating employees differently based on an AI’s flawed reading of human emotional expression. 

Third, claims of “anonymous aggregation” turn out to be false in practice for smaller teams, where the data can unintentionally reveal identities and lead to privacy violations. 

Fourth, emotion AI can have the practical effect of imposing “emotional labor,” meaning mustering up and conveying the right emotions as part of the job, on an ever-growing range of professions. 

And finally, emotion AI is prone to mission creep. Companies often deploy it for one narrow purpose, then drift toward broader worker surveillance. 

Emotion AI may have no future

While emotion AI is growing in some sectors of the economy, regulatory action is steadily shrinking the space where it can legally operate. The European Union last year banned emotion AI in the workplace and in educational settings, with narrow exceptions for medical or safety reasons, and multinational corporations are gravitating toward the European standard. 

There has even been limited legal or regulatory action against the technology in a few states, including California, New York, and Illinois.

Some companies have voluntarily rejected emotion AI. Microsoft, for example, announced in June 2022 that it would retire the Azure Face API’s emotion-recognition capabilities (along with inference of gender, age, smile, facial hair, hair, and makeup) as part of an overhaul of its Responsible AI Standard. 

The company’s Chief Responsible AI Officer, Natasha Crampton, explained the change by citing “the lack of scientific consensus on the definition of ’emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.” Microsoft also worried that such technology “can subject people to stereotyping, discrimination, or unfair denial of services.”

So while there are real and helpful uses for emotion AI in some cases, the science behind it is weak, the results are often misleading, employees generally dislike it and find it stressful, bias is likely built in, privacy violations are likely, and it might not even be legal, either internationally or across all American states. 

Tempting as it is, emotion AI is too problematic to deploy. 

AI disclosures: I don’t use AI for writing. The words you see here are mine. I used a few AI tools via Kagi Assistant (disclosure: my son works at Kagi) as well as both Kagi Search and Google Search as one part of my fact-checking for this column. I used a word processing product called Lex, which has AI tools, and after writing the column, I used Lex’s grammar checking tools to hunt for typos and errors and suggest word changes.
