Microsoft Becomes the First in Big Tech To Retire This Massive AI Technology. The Science Just Doesn’t Hold Up

Emotional awareness is intuitive to us. Because our survival is dependent on it, we are wired with the ability to sense when others feel angry, sad or disgusted.

Our ancestors had to watch reactions to disgust in order to determine which foods they should avoid. Children observed reactions of anger from their elders to know which group norms should not be broken.

In other words, the decoding of the contextual nuances of these emotional expressions has served us since time immemorial.

Enter: AI.

Presumably, artificial intelligence exists to serve us. For an AI to serve us well, the reasoning goes, it should be able to recognize and understand human emotions.

This was part of the reasoning behind Microsoft’s and Apple’s push into AI-powered emotion recognition.

Turns out, it’s not that simple.

Inside Out

Microsoft and Apple’s mistake was two-pronged. The first prong was the assumption that emotions come in neat categories: happy, sad, angry, and so forth. The second was that these defined emotions show up reliably on our faces.

This way of thinking isn’t uncommon in psychology: psychologist Paul Ekman famously championed the idea of ‘universal basic emotions’. But the field has moved on since then.

In the words of psychologist Lisa Feldman Barrett, detecting a scowl is not the same as detecting anger. Her approach to emotion falls under psychological constructivism, which basically means that emotions are simply culturally specific ‘flavors’ that we give to physiological experiences.

Depending on the context, the face you make when you’re joyful could be the face I make when I’m sad, and my neutral expression could be how you express sadness.

So, knowing that facial expressions are not universal, it’s easy to see why emotion-recognition AI was doomed to fail.

It’s Complicated…

Much of the debate around emotion-recognition AI revolves around basic emotions. Sad. Surprised. Disgusted. Fair enough.

But what about the more nuanced ones… the all-too-human, self-conscious emotions like guilt, shame, pride, embarrassment, jealousy? These are among the most meaningful emotional experiences we have. Yet they can be so subtle, and so private, that they produce no consistent facial signature.

What’s more, studies on emotion-recognition AI tend to feed very exaggerated, posed faces into machine-learning algorithms as training examples. This is done to “fingerprint” each emotion as strongly as possible for later detection.

But while it’s possible to find an exaggeratedly disgusted face, what does an exaggeratedly jealous face look like?
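To make that concrete, here is a minimal sketch of the kind of pipeline these studies use: posed, labeled faces go in, and a classifier learns to map facial features to a small, fixed menu of emotion labels. Everything in it, from the synthetic “landmark” features to the label list and the model, is an illustrative assumption rather than any company’s actual system.

```python
# Minimal sketch of how emotion-recognition models are typically trained:
# labeled face data go in, a classifier learns to map facial features to a
# small fixed set of emotion labels. The data below are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["happy", "sad", "angry", "disgusted"]  # the "basic emotion" assumption

rng = np.random.default_rng(0)
n_samples, n_features = 400, 64                    # e.g. flattened facial landmarks
X = rng.normal(size=(n_samples, n_features))       # placeholder for posed face data
y = rng.integers(len(EMOTIONS), size=n_samples)    # placeholder for exaggerated labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The classifier can only ever answer with one of the labels it was shown;
# subtle, self-conscious emotions like jealousy never appear as outputs.
print("accuracy on held-out faces:", clf.score(X_test, y_test))
```

The design choice to bake in a fixed label list is exactly the problem: an emotion that never appears as a label, like jealousy, simply cannot come out the other end.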

An Architectural Problem

If tech companies want to figure out emotion-recognition, the current way AI is set up probably won’t cut it.

Put simply, AI works by finding patterns in large sets of data, which means it’s only as good as the data we feed it. And our data are only as good and accurate as we are. We’re not always that great, that accurate, that smart… or that emotionally expressive.
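To illustrate that garbage-in, garbage-out point, here is a toy experiment that trains the same kind of classifier on progressively mislabeled data, a stand-in for inconsistent human emotion annotations. The dataset, noise levels, and model are assumptions chosen purely for demonstration.

```python
# Toy illustration of "only as good as the data": the same model trained on
# increasingly mislabeled examples. All data here are synthetic assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.5):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise           # corrupt a fraction of labels,
    y_noisy[flip] = rng.integers(4, size=flip.sum())  # like inconsistent annotations
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"label noise {noise:.0%}: test accuracy {acc:.2f}")
```

In this toy setup, heavier label noise tends to pull test accuracy down; with emotion labels, where the “ground truth” is itself contested, the problem is worse.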
