
AI simulations of dead people could cause ‘digital hauntings,’ Cambridge ethicists warn

PUBLISHED ON July 27, 2024

Cambridge University study highlights psychological risks of “deadbots,” urging strict protocols to safeguard survivors from potential harm

Researchers at Cambridge University have cautioned that artificial intelligence simulations of deceased individuals, often called “deadbots,” could lead to unintended and emotionally harmful “digital hauntings” for surviving relatives and friends. The study, conducted at the Leverhulme Centre for the Future of Intelligence, emphasizes that companies providing these services must implement protocols that consider the emotional well-being of those left behind.

Companies now offer chatbots that replicate the personalities and mannerisms of deceased loved ones. By analyzing text messages, emails, and social media posts, these chatbots emulate the unique linguistic traits and behaviours of the departed. While some users seek solace in these “digital afterlife” services, Cambridge researchers warn that psychological risks may outweigh the benefits. 
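The study does not disclose how any particular vendor builds these systems. As a rough, hypothetical illustration of the general technique, style mimicry can be approximated by conditioning a general-purpose chat model on a person’s writing samples. Everything in the Python sketch below (the model name, prompt wording, and sample messages) is invented for illustration only:

```python
# Hypothetical sketch of a "deadbot"-style persona prompt.
# Not any vendor's actual implementation; samples and model name are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented stand-ins for the texts/emails a service might analyze.
writing_samples = [
    "hey!! running late again, save me a seat ok",
    "Honestly, the garden is the only meeting I never skip.",
]

# Condition the model on the person's vocabulary, punctuation, and tone.
persona_prompt = (
    "Imitate the writing style of the author of these samples, including "
    "their vocabulary, punctuation, and tone:\n\n"
    + "\n".join(f"- {s}" for s in writing_samples)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": persona_prompt},
        {"role": "user", "content": "How was your day?"},
    ],
)
print(response.choices[0].message.content)
```

Even this toy version shows why the researchers are concerned: the barrier to producing an unsettlingly familiar imitation is a handful of messages and an off-the-shelf model, not specialized technology.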


Dr. Tomasz Hollanek, a co-author of the study, stressed the need for digital afterlife services to prioritize the rights and consent of those engaging with these simulated personalities. He stated, “These services risk causing immense distress if people face unwanted digital hauntings from unsettlingly accurate AI recreations of their lost loved ones. This psychological toll, especially during the grieving period, can be devastating.”

The researchers detailed the potential ethical concerns in the journal *Philosophy & Technology* under the title “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry.” They highlighted how companies could exploit avatars of the deceased to send advertising spam to grieving family and friends, using the dead person’s likeness in a distressing scenario akin to being “stalked by the dead.”

Dr. Katarzyna Nowaczyk-Basinska, another co-author, echoed these concerns. “Nearly anyone with internet access and basic skills can revive a deceased loved one through generative AI,” she said. “However, it’s critical to protect the dignity of the deceased from financial exploitation by digital afterlife services. We must also safeguard those who interact with these simulations.”

The study’s recommendations include establishing protocols for safely terminating “deadbots” to prevent digital overreach and ensuring transparency in how data is collected and used. The researchers also stressed that proper safeguards must strike a delicate balance between respecting the wishes of the deceased and protecting the psychological well-being of those left behind.

Their findings resonate with the themes explored in the *Black Mirror* episode “Be Right Back,” where a woman uses AI to communicate with her deceased partner. In reality, AI chatbot users already seek comfort in these simulations. A man in Canada attempted to recreate conversations with his deceased fiancée using an AI tool called Project December, claiming it closely mimicked her personality. “Intellectually, I know it’s not really Jessica,” said Joshua Barbeau, “but your emotions are not an intellectual thing.”

Analysis:

The rise of “deadbots” introduces complex ethical and psychological dilemmas that society must address. From multiple perspectives, these issues highlight both technological possibilities and the potential pitfalls.

Sociological Perspective:

This trend reveals society’s growing willingness to engage with AI in deeply personal matters like grief. It shows how technology can reshape traditional grieving processes, where individuals may now seek solace in a digital reconstruction of their loved ones. However, this phenomenon can prolong grief for some individuals, making it harder to accept loss.

Psychological Perspective:

From a mental health standpoint, interacting with AI-based recreations could disrupt the natural grieving process. By holding onto digital remnants of their loved ones, individuals may struggle to fully accept the loss, delaying closure. There’s also the risk of becoming overly reliant on simulated conversations, further complicating emotional healing.

Ethical Perspective:

Ethicists worry that AI-based “deadbots” blur the lines between appropriate mourning and exploitation. These services require intimate personal data to simulate personalities, raising concerns about consent and dignity. The study suggests that companies could use this data to target grieving families with ads or unsolicited messages, violating privacy and trust.

Economic Perspective:

Digital afterlife services also create commercial incentives that can conflict with users’ emotional needs. The commercialization of grief through subscription models or data collection for targeted marketing raises questions about how profit motives can overshadow empathy.

Legal Perspective:

This trend could push for new legislation to regulate digital afterlife services, emphasizing informed consent for data use. Lawmakers may consider mandating protocols that outline the right to delete or modify digital remains, preventing misuse or unwanted exploitation.

While “deadbots” offer emotional solace for some, the potential for psychological distress and ethical conflicts demands cautious implementation and strong regulations. The Cambridge study underscores that companies must carefully consider these implications to protect individuals navigating grief and loss.
