Artificial Intelligence

Emotion AI: The Next Frontier or a Flawed Experiment?

By Business Outstanders | Published: September 3, 23:28 | Updated: September 4, 0:24

As businesses continue to embed AI in every aspect of their operations, an unexpected trend has emerged: companies are increasingly turning to AI to help their bots better understand human emotions. This burgeoning field, known as “emotion AI,” is gaining traction as a way to make AI more effective in the workplace.

The logic behind emotion AI is straightforward: If companies are deploying AI assistants to support executives, placing AI chatbots on the front lines of sales and customer service, and using AI-driven tools to streamline everyday tasks, these bots need to understand the difference between an irritated “What do you mean by that?” and a perplexed “What do you mean by that?”

Emotion AI is intended as a more sophisticated evolution of sentiment analysis, an earlier technology that tried to extract human emotions from text, often on social media platforms. Emotion AI goes further by being “multimodal”: it combines visual, audio, and other inputs with machine learning and psychological principles to detect human emotions during interactions.
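As a rough illustration of what “multimodal” means here, the sketch below shows a simple late-fusion scheme in Python: each modality produces its own probability distribution over a small set of emotion labels, and the distributions are combined with weights. The labels, scores, and weights are all hypothetical, and this is not any vendor’s actual pipeline.

```python
# Illustrative late-fusion sketch with hypothetical numbers.
import numpy as np

EMOTIONS = ["angry", "confused", "happy", "neutral"]

def fuse(per_modality_probs, weights):
    """Weighted average of per-modality emotion distributions."""
    total = np.zeros(len(EMOTIONS))
    for modality, probs in per_modality_probs.items():
        total += weights.get(modality, 0.0) * np.asarray(probs)
    total /= total.sum()  # renormalize so the fused scores sum to 1
    return {label: float(p) for label, p in zip(EMOTIONS, total.round(3))}

# Hypothetical per-modality outputs for the same utterance,
# "What do you mean by that?" -- the words alone are ambiguous,
# but prosody and facial cues shift the estimate.
probs = {
    "text":  [0.25, 0.40, 0.05, 0.30],
    "audio": [0.70, 0.15, 0.05, 0.10],  # sharp, clipped delivery
    "video": [0.55, 0.25, 0.05, 0.15],  # furrowed brow
}
print(fuse(probs, weights={"text": 0.2, "audio": 0.4, "video": 0.4}))
# The fused estimate leans toward "angry" rather than "confused".
```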

Many major cloud providers already offer emotion AI capabilities. For example, Microsoft’s Azure Cognitive Services has offered an Emotion API, while Amazon Web Services’ Rekognition service can estimate apparent emotions from facial images, a capability that has drawn privacy and ethics criticism in the past.
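For a sense of what calling one of these services looks like, here is a minimal sketch using the boto3 SDK’s Rekognition detect_faces call; the bucket and file names are placeholders, and the snippet assumes AWS credentials with permission to call Rekognition.

```python
# Minimal sketch: ask Rekognition for face attributes, including its
# per-face emotion estimates. Bucket/object names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "meeting-frame.jpg"}},
    Attributes=["ALL"],  # "ALL" includes the Emotions list in each FaceDetail
)

for face in response["FaceDetails"]:
    # Each detected face carries confidence scores for labels like HAPPY,
    # ANGRY, CONFUSED; report the highest-confidence label per face.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top["Type"], round(top["Confidence"], 1))
```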

While emotion AI has existed for some time, the recent surge of AI bots in business environments has renewed interest in its potential applications. With the growing use of AI assistants and fully automated human-machine interactions, emotion AI could enable more human-like communication, enhancing the effectiveness of these digital helpers.

Cameras and microphones are central to how emotion AI operates, whether built into laptops and smartphones or installed separately in physical spaces. In addition, wearables like smartwatches may soon play a role in emotion AI, gathering data from users to help bots interpret human feelings more accurately.

The rise of emotion AI has spurred the creation of several startups. Companies including Uniphore (which has raised $610 million in total funding), MorphCast, Voicesense, Superceed, Siena AI, audEERING, and Opsis are all betting on this technology to make AI more empathetic.

However, there's a fair amount of skepticism surrounding emotion AI. The concept of using technology to fix a problem created by other technologies feels like a very Silicon Valley approach. The last time emotion AI gained serious attention, around 2019, researchers poured cold water on the idea: a meta-analysis published that year concluded that human emotion cannot reliably be determined from facial expressions alone. In other words, the assumption that AI can detect human emotions by mimicking human methods — reading faces, body language, and vocal tones — may be fundamentally flawed.

Regulatory challenges also loom over the future of emotion AI. For example, the European Union's AI Act bans the use of computer vision for emotion detection in specific contexts, such as workplaces and schools. Meanwhile, U.S. state laws such as Illinois' Biometric Information Privacy Act (BIPA) prohibit the collection of biometric data without explicit consent.

All of this paints a complex picture of a future where AI is omnipresent in our work lives. These AI bots might attempt to interpret emotions to perform tasks like customer service, sales, and HR, but they might not excel in roles that genuinely require emotional intelligence. We could end up with bots that operate at the level of Siri circa 2023 — or worse, a mandatory AI presence trying to guess everyone’s feelings during meetings in real time. It's hard to say which scenario might be more troubling.