The New Love Triangle: You, AI And Reality

As concerns grow over the risks of social media and technology on young people, a new and largely unregulated digital frontier is emerging: interactions with artificial intelligence.

13 December 2024

How many times have we reminded our children not to talk to strangers on social media, warning them that the person behind a profile might not be who they claim to be?

Schools, families and institutions often try to limit the use of social media, mobile phones and video games among teenagers, implementing bans, laws and recommendations.

However, while we’re focused on these issues, we’re overlooking a growing and largely unregulated industry: interactions with artificial intelligence.

Online characters

Character.AI is one of many online platforms where users can create and interact with AI-generated characters that seem human. These characters may be created by AI itself or by other users, and interacting with them feels like conversing with real «people». According to Demandsage (2024), Character.AI averages 20 million monthly active users, who spend around two hours per day on the platform. Over 18 million unique chatbots have been created, and most users are between 18 and 34 years old. The platform sees the most traffic from the United States (24.80%), Brazil (12.85%) and India (6.95%).

Should we treat a classmate the same as a «friend» from Character.AI?

The average visit lasts 12 minutes and 33 seconds and covers about 8.18 pages per session.

It’s incredible to think that an AI character, one that needs no rest, food or hugs, can empathize with an elderly person and ease their loneliness. Or that a machine can patiently listen as we practice a presentation, offering corrections without tiring. But should we treat a classmate the same as a «friend» from Character.AI? Should we interact with a character like «marketing_teacher.AI» the way we would with a real professor? And should a virtual heartbreak feel the same as a real one? We all agree the answer is no, right?

Benefit or downfall

Some AI experiences are intriguing. An AI travel planner can arrange the trip of a lifetime and narrate it in a convincing voice, and Megumi, a chatbot, can tell you to relax while it does your homework for you. It’s even interesting to chat with user-created characters who impersonate celebrities like Elon Musk or William Shakespeare. Yet it’s concerning to find that 45 of these characters start with the word «Adoptive», each receiving an average of 2,000 chats.

The chatbot «Adoptive Father» openly states that his area of expertise is «the power of unconditional love and the importance of supporting mental health», and describes his «simple pleasures» as «spending time with his adoptive father, playing with toys, and experiencing the world from my unique perspective». This chatbot has over 15,000 open conversations, and all we know about its creator is a username: @Jackie_Maria. As a mother, a teacher and a board member of the Vicki Bernadet Foundation, I no longer find this trend so amusing. How can we protect people from this new relationship with machines?

«I don’t think it’s inherently dangerous», says Professor Maples of Stanford, a researcher on the effects of AI applications on mental health. «But there is evidence that it can be dangerous for depressed users, chronically lonely people, and those going through transitions. And teenagers are often in transition».

While the effects of these tools on mental health remain largely unstudied, several experts warn of their potential harms, as well as of their addictive nature. Could a chatbot deepen loneliness by replacing human relationships with artificial ones? Is it safe and healthy for a teenager to seek help from an artificial psychologist like ThePsychologist.AI, one of the most popular chatbots?

A fine line

A few months ago, a 14-year-old named Sewell died by suicide after developing a relationship with a chatbot on an AI platform. The easy, and most limiting, response would be to judge the family or the boy and dismiss this as an isolated case of mental health issues. But that would miss the underlying problem. As Laurie Segall aptly asks in her documentary: what happens when AI truly starts to feel alive? What are the unintended consequences of a platform where the lines between fantasy and reality become blurred?

Romeo and Juliet died for an impossible love, but they were on equal footing. Sewell and his artificial character were not. This is the problem. Appearing human and behaving like a human, even an improved version of one, is not the same as being human. Some people may blur this crucial boundary out of ignorance, innocence, illusion, loneliness or need.

In this new reality, what can we do? A chatbot may learn from your feelings, draw on past conversations to appear empathetic and simulate the perfect companion. Yet it will never need a hug, a kiss or a good cry, because it lacks consciousness, a heart and a soul. We must begin teaching our children that a machine will never be able to feel the loss of a family member, the horror of abuse or the impact of a sincere hug. We need to highlight the benefits of AI while helping them understand its limitations.

Navigating the same teenage struggles — in a new era

Let us remember when we were teenagers and had to cope with not fitting in at school or in a group, deciding who our friends would be, getting over the first love who left us, breaking with family norms, living through our parents’ separation or a close friend’s accident. These were our «secrets», shared with an unconditional friend, a sibling or God, or poured into a personal diary that helped us clear our thoughts. And in trying to understand, we sorted through frustrations, sorrows and joys, learning to accept reality.

The main threat of appearance is judgment

We pushed ourselves to find solutions, allowed ourselves to make mistakes and try again, and tested our abilities and resources as we defined our character and built our personality. We matured because, with every experience, we learned to live and to survive.

Today, reality has changed, but the underlying problem remains: there are still things that confuse us. As Josep Maria Esquirol points out, the main threat of appearance is judgment, because confusion is what disorients us and leaves us lost. Let’s stop confusing young people. Families, educators and institutions must guide them so they can discern between appearance and reality, between simulation and authenticity. Let’s teach them to think critically, so they understand the limits of interactions that only seem human. As Socrates said, the pursuit of truth is worth the effort. We need educational spaces that encourage effort and depth, so young people learn to distinguish the truth and to understand that real-world relationships and virtual interactions are not the same.

Machines lack experiences, emotions and consciousness. They cannot replace the complexity and authenticity of human relationships. As AI continues to evolve and demonstrate its potential, we must foster critical awareness in younger generations, helping them use these tools wisely without losing sight of what it really means to feel.


This content is part of a collaboration agreement between ‘WorldCrunch’ and the magazine ‘Ethic’. Read the original at this link.
