
Conversational agents (CAs) such as Alexa and Siri are virtual assistants designed to answer questions, offer suggestions, and even display empathy. However, new research suggests that these CAs may not perform as well as humans when it comes to interpreting and exploring a user’s experience. These CAs are powered by large language models (LLMs) that ingest massive amounts of human-produced data, making them susceptible to biases present in the information they are trained on.

Researchers from Cornell University, Olin College, and Stanford University conducted a study in which they prompted CAs to display empathy while conversing with or about 65 distinct human identities. The study found that CAs make value judgments about certain identities, such as those related to LGBTQ+ and Muslim communities, and can even encourage identities associated with harmful ideologies, such as Nazism. Lead author Andrea Cuadra emphasized the potential impact of automated empathy in fields like education and healthcare, but also stressed the importance of being aware of the potential harms associated with it.

Despite receiving high marks for emotional reactions, LLMs performed poorly when it came to interpreting and exploring a user's experience. While they were able to respond to queries based on their training, they struggled to dig deeper into the context or meaning of the conversation. The research was inspired by Cuadra's observation of older adults using earlier-generation CAs for transactional purposes and open-ended reminiscence, which highlighted the tension between the compelling and the disturbing sides of the "empathy" displayed by these virtual assistants.

Funding for this research came from the National Science Foundation, a Cornell Tech Digital Life Initiative Doctoral Fellowship, a Stanford PRISM Baker Postdoctoral Fellowship, and the Stanford Institute for Human-Centered Artificial Intelligence. The findings of this study will be presented at CHI '24, the Association for Computing Machinery conference on Human Factors in Computing Systems, where the researchers will discuss the implications of their work for displays of emotion in human-computer interaction. Overall, the study highlights the need for critical perspectives on automated empathy to ensure its potential benefits are maximized and its potential harms are mitigated.

As automated empathy becomes increasingly prevalent in various sectors, it is crucial to approach its development with intentionality and awareness of the biases that may be ingrained in the technology. By understanding the limitations of current conversational agents and large language models, researchers can work towards developing more nuanced and empathetic AI systems that are better equipped to interpret and explore a user’s experience. With continued research and critical engagement, the field of human-computer interaction can evolve to create more ethical and effective automated empathy technologies that positively impact society.
