Emotion AI: Can Machines Feel Emotions? No, But They Can Recognize Ours

Beni Gradwohl, co-founder and CEO of Cognovi Labs, joins host Dara Tarkowski to discuss emotional artificial intelligence (AI), also known as “affective computing.”  

  • Emotion AI (also known as affective computing or artificial emotional intelligence) is a branch of artificial intelligence that measures and learns to understand humans’ emotions, then simulates and reacts to them.
  • Cognovi Labs CEO Beni Gradwohl is developing a psychology-driven artificial intelligence (AI) platform that helps clients in the commercial, health and public sectors gain insights into their customers’ or audiences’ emotions in order to predict their decisions. This understanding also helps clients better communicate with their constituents.
  • Beni joins me to discuss his unconventional career journey, Cognovi’s tech and why, in the wake of a global pandemic, Emotion AI is more relevant than ever. 

We humans are social animals. We’re born with neurons that help us recognize facial expressions, voice inflections and body language, as well as the ability to change our interactions with others accordingly. Most of us refine those skills and add new ones as we grow. 

We’re literally wired to read emotions.

But in our era of rapid change, how can we do that at scale and in real time?  

Ben-Ami (“Beni”) Gradwohl, co-founder and CEO of Dayton, Ohio-based startup Cognovi Labs, is working to train machines to measure and understand humans’ emotional responses. Launched in 2016, Cognovi is at the forefront of innovation in artificial emotional intelligence. The company’s psychology-driven AI platform helps clients in the commercial, health and public sectors gain insights into how their customers or audiences feel, predict their decisions and communicate in ways that complement those emotions.

“At least 50 years of research in psychology, neurology and behavioral sciences have shown that we are not as rational as we think we are,” says Beni. “In fact, the vast majority of decisions we make are made by the subconscious mind, based on emotions.”

Although Emotion AI is in its infancy, it’s more relevant than ever — and if AI can help us understand human emotional responses, can it be used to influence people for the greater good?

On an episode of Tech on Reg, I spoke to Beni about his career path, Cognovi’s tech and why emotional intelligence (EQ) is the future of AI. 

From academia to AI 

When Beni was growing up, AI was purely science fiction. In fact, his original career path was closer to “Cosmos” than “Battlestar Galactica.” A trained astrophysicist, he spent a few years in academia before pivoting to finance for two decades, first at Morgan Stanley and then at Citi.

In the late ‘90s, he took a course at Harvard in behavioral economics and behavioral finance, which were still relatively new concepts in the business world. That was the beginning of a journey that ultimately led him to launch Cognovi Labs. 

“I came from this quantitative work where everything had to do with data, but this class was an eye-opener,” Beni recalls. “I said, my gosh — the world doesn’t revolve around hard data. It’s actually around how people make decisions.”

But by the time he joined Citi during the 2008 financial crisis — as part of a senior management team tasked with stabilizing the bank’s mortgage portfolio — he recognized the urgent need for business “to systematically understand how we make decisions, so we can help society in a better way.”

The new EQ 

The company’s name is a portmanteau of cognitive and novus (the Latin word for “new”). The field of artificial emotional intelligence itself dates back to 1997, when MIT Media Lab professor Rosalind Picard published “Affective Computing” and kicked off an entirely new branch of computer science.

In an article about Emotion AI on the MIT Sloan School of Management website, writer Meredith Somers asks:

What did you think of the last commercial you watched? Was it funny? Confusing? Would you buy the product? You might not remember or know for certain how you felt, but increasingly, machines do. New artificial intelligence technologies are learning and recognizing human emotions, and using that knowledge to improve everything from marketing campaigns to health care.

Beni points out that Emotion AI “uses machine learning to replicate what we do as human beings day in and day out, which is to understand people’s emotions.” 

Paradoxically, most people feel uncomfortable talking about or sharing their feelings, he notes. “Some people can’t even admit their feelings to themselves.”

But mental health “came into such sharp focus during the pandemic, because so many people were struggling so much for so many different reasons … feeling isolated, scared, sick. Everything was in flux,” he adds.  

Understanding emotions to analyze motivations

More than ever, we know that emotional wellness is part of overall health, and that (on a personal level) we should strive to understand and manage our emotions. At work, Beni says, we need both IQ (to analyze and solve problems) and EQ (emotional intelligence, to understand the social and emotional cues of others). And because, by some estimates, 90% of decisions are made by the subconscious mind based on emotions, understanding emotions is vital.

“If it’s important, let’s measure it,” says Beni. “And let’s just measure it in a way that also [allows us] to create value.”

Not all of us have a high EQ. Some people are incapable of recognizing emotions — or simply less perceptive of them — due to neurodivergence. Even highly emotionally intelligent people may not fully understand the breadth of human emotion, or they may misread the emotional motivation of another person. And although most of us can tell people are angry when they yell, or sad when they cry, it’s a lot more difficult to read a writer’s tone or mood from an article, and harder still to get others to agree on it.

“You can extract emotions with visuals … [and] audio, like if somebody shouts or slows down or pauses. And you can do it through sensors [that measure] heart rates and whether people are sweating,” says Beni.

Text is a bit more complicated. Social media posts, discussion forums, emails, transcriptions of meetings or phone calls — they’re all data that Cognovi’s proprietary technology segments and analyzes to extract and characterize the emotions of the people writing or talking.

Inside the learning machine

When analyzing a given text, Cognovi’s AI first identifies the topic at hand: Is the conversation about “buying Nike sneakers, or about politics, or about the war in Ukraine?” Beni asks. 

Next, the AI extracts the emotional undertone of the text and sorts it into one of 10 emotions: joy, anger, disgust, fear, sadness, surprise, amusement, trust, contempt and control.

Then, it quantifies how emotions drive the tendency or impulse to act in certain ways, if people act at all (“if they’re not [feeling] emotions, they’re not going to do anything,” says Beni). The output depends entirely on the data the client provides. Some clients provide text from social media posts, discussion forums, blogs and other publicly available information. Others use surveys they create themselves (or ask Cognovi to help create), which offer “rich information” that helps clients understand why their audience members behave the way they do.
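To make those three steps concrete, here is a minimal, purely illustrative sketch in Python. Cognovi’s actual models are proprietary; the keyword lexicon, the analyze() function and the action_impulse score below are hypothetical stand-ins for trained classifiers, not the company’s method. They only show how a topic, a 10-way emotion score and an action signal might fit together.

```python
from dataclasses import dataclass

# The ten emotion categories Beni describes.
EMOTIONS = ["joy", "anger", "disgust", "fear", "sadness",
            "surprise", "amusement", "trust", "contempt", "control"]

# Toy keyword lexicon standing in for a trained emotion classifier (an assumption
# for illustration only; a real system would learn these associations from data).
LEXICON = {
    "love": "joy", "great": "joy", "furious": "anger", "hate": "anger",
    "gross": "disgust", "worried": "fear", "scared": "fear",
    "miss": "sadness", "wow": "surprise", "hilarious": "amusement",
    "reliable": "trust", "pathetic": "contempt", "plan": "control",
}

@dataclass
class EmotionReading:
    topic: str              # step 1: what the conversation is about
    scores: dict            # step 2: emotion -> share of detected emotional signal
    action_impulse: float   # step 3: crude proxy for the impulse to act (0 to 1)

def analyze(text: str, topic: str) -> EmotionReading:
    """Score a piece of text against the ten emotions and derive an action signal."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = {emotion: 0 for emotion in EMOTIONS}
    for word in words:
        emotion = LEXICON.get(word)
        if emotion:
            counts[emotion] += 1
    total_hits = sum(counts.values())
    scores = {e: (c / total_hits if total_hits else 0.0) for e, c in counts.items()}
    # No detected emotion means no impulse to act, echoing Beni's point above.
    action_impulse = min(1.0, 5.0 * total_hits / max(len(words), 1))
    return EmotionReading(topic=topic, scores=scores, action_impulse=action_impulse)

if __name__ == "__main__":
    reading = analyze("I love these sneakers but I'm worried about the price!",
                      topic="buying Nike sneakers")
    print(reading.topic, round(reading.action_impulse, 2))
    print({e: s for e, s in reading.scores.items() if s > 0})
```

In practice, the topic would be inferred rather than supplied, the emotion scores would come from models trained on labeled text, and the action signal would be calibrated against observed behavior such as purchases or prescriptions.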

Unblocking the blockers

One such client was a pharmaceutical company looking for ways to better market a highly effective but under-prescribed drug to doctors. Even though the company analyzed its own data to segment doctors into groups, it still couldn’t figure out why some doctors in a certain state didn’t prescribe the drug to their patients.

“Similarly to lawyers, we always think that doctors are completely rational,” Beni explains. “There is research showing that even in clinical decisions, doctors are highly emotional.” 

The company needed “to figure out the emotional blockers and the emotional drivers,” he adds. “Because there were clearly no rational reasons not to give patients that medication. It was not related to cost or reimbursement or to side effects. There was something else happening.”

So the Cognovi team (which includes a medical doctor) created a 10-question survey it called the “diagnostic interview,” designed to broach issues related to the condition the drug treats in a way that generated strong emotional responses from prescribers.

The resulting data revealed a particular emotional inhibitor that the client immediately recognized, telling Beni they had known for 10 years that this particular “blocker” could be an issue. Once they knew for sure, they could face it head-on and talk frankly about it to doctors. 

Future interest

Blame Hollywood: Thanks to movies and TV about robots gone horribly wrong, many people tend to think of AI as worrisome at best and menacing at worst. As a longtime educator, Beni has noticed that his students have become more interested in the philosophical, moral and ethical issues around AI than the technical ones.

But Emotion AI aims to “augment something we should be doing much better than we are,” says Beni. “If we are more emotionally intelligent, the world I think [will experience] less crime, I think there will be less war. … Any technology, any capability [we have], we should do it.”

However, he feels strongly that we can’t continue to innovate without any governance. Because AI represents an entirely new set of challenges, we have to rethink regulations and oversight — as well as our approaches to privacy and security. 

Now, he thinks many organizations try to “understand their people better; to do right by their customers and their employees,” because everyone struggles sometimes. 

“Maybe what is happening at Cognovi can help organizations to make a difference.”

Beni knows one thing for sure: “How we use AI, how we regulate AI, and how we do it for the better will change how our kids are going to grow up. So get involved. That’s my suggestion to everyone: whether you’re a tech person, or a philosopher, a lawyer or a social scientist, there’s a role to be played — for you to shape the future.”

This is based on an episode of Tech on Reg, a podcast that explores all things at the intersection of law, technology and highly regulated industries. Be sure to subscribe for future episodes.