
Latest Study in Nature Human Behaviour: Artificial Intelligence's Ability to Track Others' Mental States Is Comparable to Humans'

Sun Zifa Mon, May 27 2024 11:27 AM EST

Beijing, May 25 (Xinhua) - Can artificial intelligence track others' mental states, and how well? A research paper recently published in Nature Human Behaviour, an academic journal under Springer Nature, reports that on tasks testing the ability to track others' mental states (known as Theory of Mind), two families of large language models (LLMs) performed similarly to, and under certain conditions even better than, humans.

The paper explains that Theory of Mind is central to human social interaction, communication, and empathy. Previous studies have shown that artificial intelligence systems such as large language models can tackle complex cognitive tasks, including multiple-choice decision-making. However, it has remained unclear whether these models can match human performance on Theory of Mind tasks, a capacity long considered uniquely human.

In this study, first and co-corresponding author James W. A. Strachan of the University Medical Center Hamburg-Eppendorf in Germany, together with his collaborators, selected tasks assessing different aspects of Theory of Mind, including detecting false beliefs, understanding indirect speech, and recognizing rudeness. They then compared the performance of 1,907 human participants with that of two popular families of large language models (GPT and LLaMA2) on these tasks. The researchers found that the GPT models sometimes exceeded the average human level at recognizing indirect requests, false beliefs, and deception, whereas LLaMA2 performed below human levels on these tasks. On recognizing rudeness, LLaMA2 outperformed humans, while GPT performed poorly.

The authors note that LLaMA2's success reflected a lower bias in its responses rather than genuine sensitivity to rudeness, while GPT's apparent failure stemmed from an overly conservative reluctance to commit to conclusions rather than from errors of reasoning.

The authors caution that the models' human-comparable performance on Theory of Mind tasks does not imply that they possess human-like abilities, nor that they have mastered Theory of Mind.

In conclusion, they state that these findings lay an important foundation for future work, and they call for further research into how large language models' performance on psychological inference may influence human cognition during human-computer interaction. (End)

(Original Title: How Does Artificial Intelligence Track Others' Mental States? Latest International Study Suggests Performance Comparable to Humans)