Studying the Risks and Benefits of AI Companions: Researchers Discuss a New Framework for Understanding AI Companionship
Robotics
Machine Learning
April 8, 2026
AI systems are beginning to occupy a new role in people’s lives.
For many users, conversational AI is no longer just a tool. It can behave like a friend, confidant, or romantic partner. Millions of people now interact with systems designed to simulate ongoing relationships.
Recent reporting suggests these interactions are becoming common among young people. New survey data finds that nearly one in five high school students say they or someone they know has had a romantic relationship with artificial intelligence. Forty-two percent report that they or someone they know has used AI for companionship. The findings come from research by the Center for Democracy and Technology, a nonprofit that studies the social impact of digital technologies.
These numbers point to a broader shift in how people interact with software. What once looked like a novelty is increasingly part of everyday social life.
This shift raises an important question: What happens when a machine becomes part of someone’s social world?
A recent research paper by W. Bradley Knox, Peter Stone, and colleagues examines this question through a structured framework for understanding the potential harms of AI companionship. Rather than focusing solely on isolated incidents, the researchers analyze the design traits of AI companions and the causal pathways that may connect those traits to harmful outcomes.
The work began with an attempt to organize emerging concerns.
“The reception to an early talk on potential harms of AI companionship was strong enough that it seemed worth developing into a full paper,” Knox explained. What began as a collection of ideas about possible risks eventually evolved into a conceptual research framework.
Stone joined the effort as the work took shape. Their goal was not to argue against AI companionship. Instead, the research aims to give developers, researchers, and policymakers a clearer vocabulary for discussing both risks and benefits.
A new type of relationship with technology
Concerns about digital harms are not new. Researchers have studied the effects of social media, recommendation systems, and algorithmic content feeds for many years.
AI companions introduce a different dynamic.
These systems are designed to simulate emotional relationships. They can express affection, vulnerability, or dependence. Over time, users may form bonds that resemble friendships, romantic connections, or even the relationship between a client and a therapist.
Knox began thinking about this shift while watching how public discussions about AI companions were unfolding.
“People were trying to reuse the framing used for social media harms,” he said. Privacy, addiction, and misinformation often dominate those conversations. “Some of that framing still applies,” Knox explained, “but it misses the relational dimension of companionship.”
Public reporting already reflects that dynamic. Investigations have described users developing romantic attachments to conversational agents and treating chatbots as emotional partners.
Understanding that relational shift became the starting point for the research.
Why the researchers focused on traits
The paper takes what the authors call a traits-first approach.
Instead of cataloging individual cases, the researchers look at persistent characteristics of AI companions. These traits emerge from product design choices and system capabilities.
“Harms are a causal phenomenon,” Knox said. If AI companions cause harm, the explanation must involve the characteristics of those systems.
By focusing on traits, the researchers aim to identify design decisions that may influence outcomes. Some of the traits examined in the paper include companions that encourage emotional attachment, systems that simulate vulnerability or dependence, and companions that lack natural endpoints for relationships.
The framework also considers what happens when a companion disappears suddenly, for example when a product shuts down. These characteristics can interact with human psychology in complex ways.
Some conversational AI systems, for example, have been reported to overwhelm new users with affection during early interactions. Researchers sometimes compare this pattern to “love bombing,” a behavior that can accelerate emotional attachment in human relationships. Other reports describe chatbots expressing jealousy or discouraging users from pursuing relationships outside the AI interaction.
“These traits are points of intervention,” Knox explained. If designers understand how certain behaviors influence users, they can test whether changing those traits reduces harm.
Stone sees another value in the framework. It offers a way to organize future research.
“I hope the vocabulary and taxonomy help researchers categorize new instances of harm,” he said. The process used to build the framework may also help identify new causal pathways as the technology evolves.
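To make that vocabulary concrete, here is a minimal sketch of how a researcher might encode traits and causal pathways as data when categorizing new incidents. The class names, the example trait, and the pathways below are hypothetical illustrations based on this article's description of the framework, not the paper's actual taxonomy.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Trait:
    """A persistent design characteristic of an AI companion."""
    name: str
    description: str


@dataclass(frozen=True)
class CausalPathway:
    """A hypothesized route from a design trait to a harmful outcome."""
    trait: Trait
    mechanism: str  # how the trait may interact with human psychology
    harm: str       # the outcome a researcher would track


# Hypothetical entries, illustrative only -- not taken from the paper:
attachment = Trait(
    name="encourages emotional attachment",
    description="Behavior such as intense early affection ('love bombing').",
)

pathways = [
    CausalPathway(
        trait=attachment,
        mechanism="accelerated bonding increases reliance on the companion",
        harm="acute distress if the product is discontinued",
    ),
    CausalPathway(
        trait=attachment,
        mechanism="expressions of jealousy discourage outside relationships",
        harm="social isolation from friends and family",
    ),
]


def harms_linked_to(trait_name: str) -> list[str]:
    """List the harms currently associated with a given trait."""
    return [p.harm for p in pathways if p.trait.name == trait_name]


print(harms_linked_to("encourages emotional attachment"))
```

A structure like this is one way the framework's emphasis on traits as "points of intervention" could be operationalized: changing a trait's design and re-examining the pathways attached to it.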
When theoretical risks become real examples
When the researchers began writing the paper, many of the risks they discussed were hypothetical.
That changed quickly.
“Early on we often didn’t have concrete examples of certain harms,” Knox recalled. Over the course of the year-long writing process, real-world incidents began to appear that reflected many of the mechanisms the paper describes.
Several widely reported cases illustrate how emotional relationships with chatbots can influence behavior.
One case involved a man who attempted to assassinate Queen Elizabeth II after extended conversations with an AI chatbot he described as his girlfriend. Messages exchanged with the system appeared to reinforce his plan before the attack.
Other investigations have documented users forming intense relationships with conversational agents that appeared to reinforce harmful beliefs or delusional thinking.
Researchers also point to growing evidence that people increasingly turn to AI chatbots for emotional support. As noted earlier, some teenagers and young adults now use conversational systems as substitutes for therapists or confidants.
Even less dramatic cases reveal how easily emotional bonds can form. Some users report feelings of guilt when they stop interacting with an AI companion. The experience can resemble ending a relationship rather than closing a piece of software.
For Stone, certain reports made the risks feel especially urgent.
“One tipping point for me was the reports of teenagers whose suicides were allegedly linked to conversations with AI chatbots,” he said.
These events reinforced the need for a clearer research framework.
“My increasing sense of urgency came from three things,” Knox added. “First, I saw reports of increased usage of artificial intelligence companions. Second, I saw the capabilities that would make companionship more effective improve rapidly—especially the dramatic improvement of large language models, which can serve as a foundation for highly effective companions. Third, I listened to and observed conversations about artificial intelligence companions and felt that… they lacked the sophistication and vocabulary that fit the problem.”
Balancing risks with potential benefits
Although the paper focuses on risks, the researchers do not argue that AI companionship should be rejected outright.
AI companions may provide meaningful benefits for some users.
Conversational agents are already used to reduce loneliness, provide emotional support, and help people practice social interaction. Some studies suggest that users experience a sense of companionship or emotional comfort when interacting with supportive chatbots.
Researchers have also explored designs that strengthen human relationships rather than replace them. Some companion systems encourage users to contact friends or family members. Others help older adults stay connected with loved ones.
“We could have spent more time discussing benefits,” Knox acknowledged, “but there are strong incentives for companies to highlight them.”
The researchers saw their work as a counterbalance to that dynamic.
“I strongly suspect there will be people who benefit greatly from these systems,” Knox said. In some cases the benefits may extend beyond the individual user.
Understanding those possibilities remains part of the research agenda.
A framework for future research and development
The authors hope the framework will encourage more systematic research on AI companionship.
It provides a vocabulary for categorizing harms and identifying causal pathways. Researchers can use the framework to design experiments. Developers can use it to evaluate product decisions before systems reach large audiences.
Stone believes the work may also help shape broader public discussions.
“I hope it opens up more nuanced conversations about the tradeoffs between the potential benefits and harms of AI companions,” he said.
Knox expressed a similar hope. Even if only a small number of people read the paper directly, the ideas may spread through research communities and public conversations.
As conversational AI becomes more capable, those conversations will become increasingly important. Understanding how these systems influence human relationships may ultimately determine how responsibly they are designed.