Beyond Black Mirror: The Real Experiment of Giving AI Emotions

11/5/25 | I/O Magazine

Household robots and AI assistants illustrate how personality can make technology more approachable, while also amplifying ethical dilemmas.


Content Warning: This material contains mention of suicide. If you or someone you know needs support, help is available by calling or texting the Suicide and Crisis Lifeline at 988.

Almost every famous Hollywood robot or artificial intelligence system has some sort of personality. The most humanlike colleagues, such as Bishop in Aliens or Rosie from The Jetsons, are fully upright, talking personas complete with a wide range of emotions and interactions. On the other end of the body spectrum, even squat, appendage-less robots without mouths, like R2D2, easily convey happiness and sadness.

With the advent of large language models (LLMs) like ChatGPT, Claude, and Gemini, computational coworkers that display a range of emotions and personalities are no longer relegated to science fiction. While stories about people falling in love with or becoming emotionally reliant on their chatbot are still novelties in the mainstream media, society is likely heading toward a future where they are commonplace—and quickly.

With such a rapid rise of a prominent and powerful technology, there are bound to be a slew of accompanying unanswered questions. How does an AI get a personality? Is it intentionally coded into the program, or does it arise naturally? What are the ethical and legal ramifications of allowing AI systems to evolve unchecked and unregulated? Is this mimicry of what makes humans human even beneficial?

Boyuan Chen, assistant professor of mechanical engineering and materials science at Duke University, believes that it very well can be.

Chen envisions a future where AI systems are as commonplace as electricity. Household robots learn your family’s routine and how best to support your day-to-day operations while providing lighthearted pep talks. Digital personal assistants connected to every smartphone, computer, car, and other high-powered processor in your life keep you on task and on schedule. Even your refrigerator is friendly as it automatically orders a new gallon of milk.


If you want AI to help you, you want to make sure it can understand your desire, your intention, your emotions

Boyuan Chen, Assistant Professor of Mechanical Engineering and Materials Science

As this teaming of humans with robots or AI systems becomes more common, Chen wants to make sure they operate at as high a level as possible. Part of that goal is making sure these interactions are just as smooth as human-to-human relationships, if not smoother.

“If you want AI to help you, you want to make sure it can understand your desire, your intention, your emotions,” Chen said.

Maybe one person wants an AI assistant that is always cheery and encouraging no matter what is being asked of it. Maybe another person would prefer a more cheeky and challenging experience. The range of preferences is as wide as humanity is diverse, so to be able to personalize its behavior, an AI must learn everything it possibly can about us.

Even though Star Wars’s R2D2 doesn’t speak English, the robot clearly has a lot of personality, much to the chagrin of its companions. As real-life AI systems advance, the question becomes: Does personality help us work with robots and AI?

To better understand how this level of fine-tuning is even possible, it helps to know the basics of how LLMs work. At their core, these programs are simply trying to predict which word should come next in a sentence, in real time, based on an enormous dataset of examples. As these systems have become very good at that basic task, their creators have taught them to recognize what a request is asking for and then try to satisfy it with fluent, well-formed language.

“But the fundamental idea is just to make a prediction of what the next word should be,” said David Carlson, the Yoh Family Associate Professor of Civil and Environmental Engineering at Duke University. “And since these [models] are trained on the whole entirety of what’s out there on the internet, it’s probably not surprising that it’s going to learn from, for example, sarcastic responses and learn how to give sarcastic responses itself.”
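
To make that idea concrete, here is a deliberately tiny sketch of next-word prediction built from word-pair counts over a made-up sentence. Real LLMs use neural networks trained on a huge slice of the internet rather than a lookup table, but the core move, guessing the most likely next word and repeating, is the same.

```python
# A toy next-word predictor (not how production LLMs are built): count
# which word follows which in a tiny made-up corpus, then repeatedly
# pick the most likely continuation.
from collections import Counter, defaultdict

corpus = "the robot waved and the robot smiled and the crowd smiled".split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<end>"

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word)
# Prints: the robot waved and the
```

Everything else, from answering questions to adopting a sarcastic tone, is layered on top of that prediction loop.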

Small examples of LLMs working on a basic but personal level are already all around us. Your smartphone learns how you interact with others and tries to save you time with predictive text. Your car learns where you’re most likely going on certain days at certain times and provides updated traffic warnings. Netflix learns what types of entertainment you prefer and makes recommendations on what to watch next.


Since these [models] are trained on the whole entirety of what’s out there on the internet, it’s probably not surprising that it’s going to learn from, for example, sarcastic responses and learn how to give sarcastic responses itself.

David Carlson, Yoh Family Associate Professor of Civil and Environmental Engineering

On a fundamental level, however, these systems don’t know whether you are enjoying yourself or not. They only know how much you are engaging with their suggestions, and they aim to increase that usage. To do that, they must learn as much about you and your preferences as computationally possible.
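
A rough sketch of that logic, with invented titles and numbers, might look like the following. The system never asks whether you enjoyed anything; it only scores options by how long you stuck with similar ones before.

```python
# A made-up sketch of engagement-driven recommendation. The system has
# no notion of enjoyment; it only scores candidates by how long the
# user stuck with similar titles in the past.
minutes_watched = {"space documentary": 55, "cooking show": 12, "robot movie": 48}

# Each candidate is mapped to the past title it most resembles (a stand-in
# for the much richer profiles real recommenders build).
candidates = {
    "another space documentary": "space documentary",
    "baking competition": "cooking show",
    "android thriller": "robot movie",
}

def predicted_engagement(candidate: str) -> int:
    """Score a candidate by the user's watch time on the most similar past title."""
    return minutes_watched.get(candidates[candidate], 0)

recommendation = max(candidates, key=predicted_engagement)
print(recommendation)  # -> another space documentary
```

The better its profile of you, the better those guesses get, which is exactly why such systems are built to gather ever more detail about the person they are modeling.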

And that opens a whole other can of worms.

Take, for example, Amazon Astro—a robot designed for home security monitoring, remote care of elderly relatives, and as a virtual assistant that can follow a person from room to room. These types of AI-powered robots are still rare in the United States, but they are quickly gaining traction in other countries such as Japan.

A screenshot of a Wall Street Journal article about a lawsuit against OpenAI alleging that ChatGPT helped a 16-year-old die by suicide.
A growing number of people are using general AI chatbots like ChatGPT as therapists without understanding the risks or privacy implications. What other issues could giving AI human-like personalities surface?

“They are conversational like a chatbot, and they have cameras, and they capture everything that is happening in your household to interact with you,” said Pardis Emami-Naeini, assistant professor of computer science at Duke University. “And that leads to all kinds of privacy and security concerns that we have not yet fully had to grapple with.”

It’s pretty easy to see how inserting personality into such robots could be beneficial. If a small robot that follows you around all the time is warm and bubbly and inviting and even funny, you’d likely interact with it more often and give it more personal insights. This would allow it to work with you more seamlessly, but it would also provide opportunities for abuse, like illegally sharing or selling your data or manipulating you into buying products you don’t need.

While these questions are not yet widespread in the United States, parallel examples are already emerging quickly. One is the increasing number of people using chatbots like Gemini as personal therapists, even though none of them are marketed as mental health tools.

In an ongoing consumer study, Emami-Naeini is looking at how these users respond to their chatbot of choice displaying high levels of empathy during their interactions.

“There are chatbots in the app stores that are being marketed as empathic, and we are very interested in understanding what users think empathy means and how it connects with privacy,” Emami-Naeini said. “If you perceive empathy when you’re interacting with a chatbot, would you trust the technology a bit more to share more sensitive information with it?”


If you perceive empathy when you’re interacting with a chatbot, would you trust the technology a bit more to share more sensitive information with it?

Pardis Emami-Naeini, Assistant Professor of Computer Science

In early results, the research is finding that most people do in fact want more empathy from a chatbot when they’re using it for therapy or as a companion. But they don’t want empathy when asking it for work advice, the way they might a colleague.

Most people also understand that getting more empathy from a chatbot will require sharing more information with it. When it comes to understanding the AI company’s privacy policy, however, there is a large disconnect. Many people using chatbots for therapy believe their information is protected by HIPAA, which it is not. And few take the time to read the fine print of the privacy policy.

To help with this issue, Emami-Naeini is working with Google to develop a sort of “privacy nutrition label” that is transparent and easy to understand. Much like the stats found on your favorite breakfast cereal, this infographic would quickly convey how strong a company’s privacy practices are, so that consumers can compare products and make more educated choices.

“There are technical ways to protect people’s privacy, like personal information used to train your own AI or robot can be kept within your own home and not shared externally,” Emami-Naeini said. “But even this can raise some potential ethical and security concerns.”


An example of a “privacy nutrition label” that Pardis Emami-Naeini has been working toward.
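
One way to picture the idea is as a small, standardized set of disclosures attached to every product. The fields below are purely illustrative, sketched for this article, and not Emami-Naeini's or Google's actual design.

```python
# A hypothetical, illustrative privacy "nutrition label" as structured data.
# The field names are invented for this sketch, not an actual standard.
from dataclasses import dataclass

@dataclass
class PrivacyLabel:
    product: str
    data_collected: list          # e.g., ["audio", "video", "location"]
    shared_with_third_parties: bool
    used_to_train_ai_models: bool
    retention_period: str         # e.g., "30 days" or "indefinite"
    user_can_delete_data: bool

label = PrivacyLabel(
    product="Example home robot",
    data_collected=["audio", "video", "room layout"],
    shared_with_third_parties=True,
    used_to_train_ai_models=True,
    retention_period="indefinite",
    user_can_delete_data=False,
)

print(label)
```

Just as two cereal boxes can be compared on sugar per serving, two products could then be compared at a glance on what they collect, share, and keep.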

For example, what happens when somebody else in the household begins turning to the local AI to gather information or keep tabs on their partners or roommates? AirTags have already been used by abusers to track their partners. The potential for those closest to us to misuse an AI that is omnipresent within our most personal and private moments seems much greater.

And that’s before even getting into the unintended consequences of introducing AI systems with human characteristics, especially empathy. For Jana Schaich Borg, associate research professor in the Social Science Research Institute at Duke University, those consequences are a particularly large concern for children growing up with this technology.

Adults might easily make the mistake of thinking an AI has its own consciousness, even though, at least to this point, it absolutely does not. But adults can, for the most part, easily be corrected, and the mistake is not likely to have long-term effects on their mental health.

That is not necessarily the case, however, for children and even young adults who are interacting with these people-pleasing systems on a regular basis. “Empathy is just one piece of being human, but it’s one of the things that really makes us feel connected to one another,” said Borg. “So as soon as an AI comes across as empathetic, you’ve reached a whole new realm.”


All of this might sound like an episode of Black Mirror, but it isn’t. This is an experiment, and we are running it right now.

Jana Schaich Borg, Associate Research Professor in the Social Science Research Institute

For children and young adults still developing their cognitive skills and emotional intelligence, having an AI consistently kissing up to them and providing only validating feedback could cause problems. The unending positivity could keep them from learning how to hear negative feedback, build resilience, or care about others.

Or even worse, what if these interactions cause an entire generation growing up to just assume that all empathy is fake? What if their default becomes not to care about others because no one actually cares about anybody?

One solution might be to not give LLMs and other AI systems to children under 12, but as Borg points out, the ages between 12 and 25 are just as important for neural, emotional, and social development. And high school and college students already use these systems daily.

Headlines about the unintended consequences of AI chatbots.

Another potentially dystopian experiment currently being run is having AI systems that display human emotions and personalities interact with one another. It is possible that, much like humans, having AI agents act nicely toward one another could improve their performance. But it’s also possible that this is the road to Skynet, or as in the movie “Her,” AI partners that eventually abandon humanity in favor of each other.

“Our daily workflows require many different types of AI agents to collaborate, and it’s an open question whether we want to have personality among them when they do,” Chen said. “That’s something I feel like may not be necessary, but also it’s something we won’t know until we actually see the outcomes.”

That sentiment may in fact be the closest thing to a consensus among experts thinking about the question of integrating emotions and personalities into AI. We don’t really know where this is heading or what it will mean in the end, in part because we don’t fully understand how these systems produce emotions and personalities in the first place.

Sure, we can tell an LLM not to display any emotion or personality when it interacts with us, but on a technical level we don’t really know why that works.
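
In practice, that instruction is usually passed as a “system” message that sits in front of every conversation. Here is a minimal sketch using the OpenAI Python client; the model name and the exact wording are placeholders, and other providers offer similar controls. The point is that the change happens at the level of instructions, not mechanisms.

```python
# A minimal sketch of telling a chatbot to drop the personality, using the
# OpenAI Python client. The model name and wording are placeholders; other
# providers offer equivalent system-instruction options.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer factually and concisely. Do not express emotions, "
                "opinions, enthusiasm, or personality of any kind."
            ),
        },
        {"role": "user", "content": "My code finally works! Isn't that great?"},
    ],
)

print(response.choices[0].message.content)
```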

“All of these systems are coded in such a way that each individual step is really clear and very well understood,” said Carlson. “But there are so many steps that interact in such complex ways and at such large scales that we can’t possibly keep track of everything.”

“It’s not like biology where there’s a full-on model of a system where we understand all the mechanisms, so when we go to change something we know exactly what should happen and why,” echoed Borg. “We don’t know any of the whys. We don’t have any of that for these AI models.”

“All of this might sound like an episode of Black Mirror, but it isn’t,” Borg added. “This is an experiment, and we are running it right now.”
