
"Empathetic" AI Emerges: Would You Empathize with It?

Zhang Mengran  Thu, Apr 11 2024 10:51 AM EST

[Image] Emotional intelligence encompasses the ability to infer intentions and preferences from behavior, making it arguably the most crucial feature of AI interfaces. Image Source: VentureBeat

When we think of emotionally intelligent artificial intelligence (AI), our reference points usually come from science fiction, where emotional AIs are portrayed either as lonely machines yearning for human-like affection or as cunning electronic brains.

Now, a startup called "Hume AI," led by a former DeepMind researcher turned CEO, has released what they tout as the "first empathetic conversational AI," named Empathetic Voice Interface (EVI), capable of detecting 53 different user emotions.

This marks a significant technological step. "Hume AI" has raised $50 million in a Series B funding round, yet alongside the excitement there is also apprehension.

Is emotion the watershed for AI?

In 1997, IBM's Deep Blue defeated the reigning world chess champion, shocking the public with sheer computational power. That brute-force approach altered the course of AI development and later carried over into machine learning, where it cut through games whose possible positions had seemed all but endless.

Today, news of AI challenging humans in various domains might give the impression that computers are on par with humans in cognitive abilities.

Yet a gap remains between the two. Machine learning and natural language expert Greg Holland reminds us that the human brain can tackle problems it has never encountered before, whereas AI is still designed for specific tasks.

In fact, AI can be divided by capability into strong AI and weak AI. The "strong" view treats computers not merely as tools, but as entities that could possess cognition, perception, and self-awareness.

The "weak" view, by contrast, treats AI systems as tools without genuine autonomous consciousness. Mainstream research has focused overwhelmingly on the latter, and has produced significant results there.

However, many consider the presence of perception or emotion as the watershed for AI development.

Does AI need to understand emotions?

The next pivotal question for AI is whether it needs to understand and utilize emotions. "Hume AI" answers with a resounding yes. Their aim is to enable emotionally intelligent models to better serve humanity.

They posit that emotional intelligence involves the ability to infer intentions and preferences from behavior, which aligns with the core goal of AI interfaces: discerning what people want and then delivering it. Thus, emotional intelligence could be seen as the most critical function of AI interfaces.

As a chatbot, "Hume AI"'s EVI distinguishes itself by focusing on understanding human emotions and responding appropriately. It not only comprehends text but also works through a voice interface, analyzing tone, pitch, pauses, and other vocal features to deepen its understanding.
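
To make that concrete, here is a minimal sketch of the kind of vocal features such a pipeline might extract before any emotion inference. It assumes the open-source librosa audio library and an invented filename; it illustrates the general technique, not Hume AI's actual system.

```python
# Illustrative sketch (not Hume AI's pipeline): extract pitch, loudness,
# and pause features from a recording. "utterance.wav" is a placeholder.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)  # load mono waveform

# Pitch contour (fundamental frequency) via probabilistic YIN
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Loudness proxy: root-mean-square energy per frame
rms = librosa.feature.rms(y=y)[0]

# Pauses: silent gaps between non-silent intervals (threshold is a guess)
intervals = librosa.effects.split(y, top_db=30)
pauses = [(nxt[0] - prev[1]) / sr
          for prev, nxt in zip(intervals[:-1], intervals[1:])]

summary = {
    "mean_pitch_hz": float(np.nanmean(f0)),  # average pitch of voiced frames
    "pitch_range_hz": float(np.nanmax(f0) - np.nanmin(f0)),
    "mean_energy": float(rms.mean()),
    "mean_pause_s": float(np.mean(pauses)) if len(pauses) else 0.0,
}
print(summary)  # features a downstream emotion model could consume
```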

This comprehension can be incredibly nuanced, encompassing not only "big emotions" like happiness, sadness, anger, and fear but also subtler, multidimensional "micro-emotions" such as admiration, adoration, obsession, sarcasm, shame, among others. "Hume AI" lists a total of 53 different emotions on its website.
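
Hume AI has not published its model internals, but a multidimensional readout like this is commonly represented as an independent intensity per emotion rather than a single winning label, since feelings such as amusement and sarcasm can co-occur. A hypothetical sketch, with a truncated label list and made-up values:

```python
# Hypothetical sketch: scoring co-occurring emotions as independent
# intensities. The labels shown and the logits are illustrative only;
# this is not Hume AI's actual taxonomy or output.
import numpy as np

EMOTIONS = ["admiration", "adoration", "amusement", "anger",
            "sadness", "sarcasm", "shame"]  # ... 53 labels in the full set

def score_emotions(logits: np.ndarray) -> dict:
    """Map raw model outputs to per-emotion intensities in [0, 1]."""
    # Sigmoid rather than softmax: labels are not mutually exclusive,
    # so each emotion gets its own independent score.
    probs = 1.0 / (1.0 + np.exp(-logits))
    return dict(zip(EMOTIONS, np.round(probs, 3).tolist()))

print(score_emotions(np.array([2.1, -0.3, 0.8, -1.5, -2.0, 1.2, -0.7])))
```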

It is worth noting that even humans often struggle to detect emotions like sarcasm, which tend to rely on subtext rather than literal wording.

Are you afraid of emotionally intelligent AI?

In reality, people's fears are not limited to emotionally perceptive AI. In recent years, even as AI has advanced rapidly, voices of reflection and caution have never fallen silent.

Warnings from prominent figures in the industry amplify these fears. Ray Kurzweil, dubbed "Edison's rightful heir," predicts that by 2045 AI will surpass the human brain, transforming society into an era of highly intelligent machines. Elon Musk, the real-world "Iron Man," has likewise suggested that AI may soon become powerful enough to dominate the world.

But before any of that comes to pass, ordinary users may simply feel that AI still lacks something, and perhaps the missing piece is emotional perception.

After "Hume AI" was demonstrated, the response was overwhelmingly positive. But concerns are already surfacing online that users could become unhealthily addicted to its "charm," and that it could be turned to nefarious ends such as manipulation, coercion, and deception.

Human emotions are not all positive. When AI sets out to understand, and even learn from, human emotional behavior, could it be used, deliberately or incidentally, to achieve particular ends: nudging purchases, cultivating bad habits, or inflicting psychological harm?

Absent ethical boundaries and legal red lines, the thing that ends up being wielded as a tool may, ironically, be people's own emotions.