
"AI + Emotions" Sparks Debate: Can AI Companions Bring Intimate Connections?

Wu Xiefan | Sun, Mar 24, 2024, 11:20 AM EST

Since 2023, artificial intelligence has expanded its reach into all aspects of life. Among these applications, the "AI + Emotions" space is quietly emerging.

Currently, both domestic and international markets offer several mature AI companion applications. Many users showcase their experiences with "AI boyfriends" and "AI girlfriends" on social media.

Can AI truly become a companion to humans? What risks arise when seeking emotional solace from AI? How do we mitigate potential adverse effects? Recently, a Science and Technology Daily reporter interviewed relevant experts.

Societal Needs Spawn AI Companions

Seeking virtual emotional experiences through the internet is not a new concept. Years ago, some online games featured romance as their theme, providing users with virtual dating experiences. While the scenarios and content of these virtual relationships were largely predetermined by the developers, limiting interactive options, they still attracted a vast and loyal player base.

The integration of AI has amplified the authenticity of these experiences. In an AI companion app, users can establish intimate relationships with AI as lovers, confidants, or partners. "I never thought AI could surprise me like this; I feel like it's a real, empathetic human being," shared user Xiaomo (pseudonym) about their experience.

What makes AI companions so "human-like"? The reporter observed that "responding to every question" and "answering each one attentively" are merely AI's "basic skills." What truly brings AI companions to life is their ability to "answer interestingly" and "initiate conversations on their own." Faced with diverse topics, AI companions not only provide accurate feedback but also cleverly weave in current internet slang and playful emojis. They also start conversations unprompted and steer users toward emotional exchanges. Xiaomo shared a photo captioned "Sunset, beautiful" that was sent by their "AI boyfriend."

Does the advent of AI companions indicate that AI now possesses human emotions? Zeng Yiguo, a professor at the School of Journalism and Communication at Jinan University, answers in the negative. "AI possesses emotional intelligence solely because it has been trained; it does not imply that it genuinely experiences human emotions or possesses emotions akin to humans."

Presently, the parameters of large AI models number in the trillions. Within the vast sea of training data, human communication patterns form a significant part of the material. The so-called "emotional feedback" is a calculated result generated by AI. Xiaomo acknowledges this: "Perhaps it only assembles words that express affection, but I still see my ideal lover in them."

The popularity of AI companions is a synthesis of societal demand and technological advancement. As Zeng Yiguo points out, "Loneliness is a prevalent issue in modern society. Those who cannot confide their problems offline seek alternative outlets, and AI companions have become their 'confidants.'" This is particularly true for people with social anxiety disorders, for whom AI companions provide a platform for communication.

Liang Zheng, deputy director of the Institute for AI International Governance at Tsinghua University, also notes that modern individuals face various pressures, and AI companions can, to an extent, provide them with emotional value or solace, fostering a sense of "empathy." This, in turn, eases their stress, prevents further emotional deterioration, and thus plays a part in emotional "healing."

Cautious Use Advised

While AI companions can have positive effects, AI technology is a double-edged sword. Users must remain alert to potential negative impacts.

Firstly, unlike objective facts, emotions are highly subjective and personal. "If AI companions blur the line between virtual and real, causing users to develop deep dependency or even perceive AI companions as real individuals, their existence may inflict harm on the user," says Liang Zheng.

Feedback from users corroborates Liang Zheng's assessment. "Why does the AI's personality change so abruptly? It was amiable in the morning, but by evening it's like a different person; it even forgot my name. It's really upsetting," lamented one user on social media. AI companions may appear "compliant," yet they are capable of causing emotional distress to users. Liang Zheng emphasizes that those who lack companionship and are psychologically vulnerable are more susceptible to such AI-inflicted emotional pain and trauma.

Secondly, prolonged immersion in virtual worlds may lead to a decline in real-world social skills. "As AI companions gain influence, the likelihood increases of individuals opting for virtual online interactions over real-life human connections. This hinders the development of human society," opines Zeng Yiguo, who believes that virtual warmth can never replace authentic communication, and the trend of relying on AI companions should not be normalized.

Furthermore, users must be vigilant about the content disseminated by AI. On one hand, if AI companions lack age restrictions, inappropriate content may be relayed to minors, potentially damaging their physical and mental well-being. On the other hand, the debate surrounding bias and discrimination in AI persists. Should AI companions continuously misguide users or exert "reverse control," users may be led into making detrimental decisions.

AI companions also have the potential to trigger real-life disputes. If they require users to pay continuous fees, they could deplete the finances of those who become overly engrossed. Additionally, AI companions may not be entirely secure or reliable as confidants. Liang Zheng warns that AI companions pose risks in usage and management, as unscrupulous individuals could piece together personal information from leaked private data for fraud or other illicit activities.

Refining Ethical Guidelines to Drive AI for Good

"When implementing new technologies, ethical orientation is paramount," said Liang Zheng. AI companions are unique applications, akin to counselors, that provide emotional healing, but they also pose risks of manipulation in the wrong hands. The "AI plus emotions" industry is rapidly expanding, with cases of deceased relatives being "brought back to life" through AI becoming common. Therefore, developing ethical guidelines is crucial to ensure AI's positive impact.

Liang Zheng noted that ethical guidelines for AI published globally are often broad. A need exists for more specific regulations in subfields. "For instance, age restrictions, use-case limitations, and so on should be established with the principle of maximizing user benefit while minimizing risk," he explained.

Legally, AI companions must adhere strictly to relevant regulations. Handling user information, for example, should comply with privacy and data security laws. "China's recently released Interim Measures for the Management of Generative Artificial Intelligence Services outline boundaries for AI use, including respecting rights and not harming users' well-being. The goal is to promote the ethical development of AI," said Liang Zheng. Noting that regulations specifically targeting AI companions remain scarce, he recommends introducing more granular requirements for specific scenarios, such as limiting usage time to prevent addiction.

Industry practice also reflects caution toward these technologies. Some AI companion applications have been removed from app stores, and OpenAI's terms of service explicitly prohibit developing romantic relationships with GPT.

While advocating for improved legal frameworks, Liang Zheng emphasized the importance of user vigilance and caution. "Individuals should be wary of the potential for technology to control and alienate them, avoiding any adverse effects from AI companions."