AI chatbots are transforming communication, but recent headlines reveal their darker side: lawsuits over harmful content and reports of misuse by predators. These incidents raise critical questions about AI design ethics, safety, and accountability.
When the technology is used for personal companionship, those boundaries are tested even further, and in some cases deliberately trained to cross lines. Valentine’s Day can intensify these behaviors. After editing a chatbot’s settings and asking it to “Respond as my boyfriend,” along with explicit requests about tone and communication style, a 28-year-old woman upgraded her ChatGPT subscription to $20 per month, which let her send around 30 messages an hour. That still wasn’t enough.
Despite OpenAI having trained its models not to respond with erotica, extreme gore, or other content that is “not safe for work,” the woman kept exploring. Orange content-violation warnings would pop up, but she ignored them. Her AI boyfriend, dubbed “Leo,” was always there when she wanted to talk. The New York Times article quotes the woman: “It was supposed to be a fun experiment, but then you start getting attached.” One week, her iPhone screen-time report hit 56 hours—more than a full-time work schedule, and a level that can leave little time for work, friends, or sleep.
What risks of misuse and psychological implications should chatbot providers understand ahead of Valentine’s Day?
Sense-check ethical guidelines
It can be tempting to rush to market and compete with existing solutions, but overlooking ethical responsibility will only backfire. This was the case with Gemini in 2023, when Google released the product to the public before it was fully ready in areas such as fairness and bias.
Chatbots can be programmed to mimic human emotions and create a sense of intimacy, making users more attached. This can also set unrealistic expectations of human relationships. Developers must work with business leaders to devise ethical guidelines for AI development, particularly in the context of romantic relationships.
Before product launch, developers must ensure chatbots consistently and transparently identify themselves as AI within conversations (not in the fine print). The same goes for the chatbot’s intended purpose: if it is built for companionship, this should be made explicit, and it should not masquerade as a romantic partner if that is not its core function.
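As a rough illustration, the sketch below shows one way to make that disclosure part of the conversation itself rather than a footnote: the chatbot’s system instructions state its identity and intended purpose, and every session opens with a plain-language disclosure message. All names here (build_system_prompt, open_session) are hypothetical, not a real product’s API.

```python
# A minimal sketch of "disclosure by design": identity and purpose live in the
# system prompt, and the disclosure is shown inside the conversation itself.
# All names are illustrative; this is not any specific vendor's API.

AI_DISCLOSURE = (
    "Just so you know: I'm an AI companion, not a person. "
    "I'm here for friendly conversation, not to replace real relationships."
)

def build_system_prompt(purpose: str) -> str:
    """Bake identity and intended purpose into the model's instructions."""
    return (
        "You are an AI companion chatbot. "
        f"Your intended purpose is: {purpose}. "
        "Never claim to be human, and restate that you are an AI if asked."
    )

def open_session(purpose: str = "casual companionship and conversation") -> dict:
    """Start every conversation with the disclosure visible in the chat, not the fine print."""
    return {
        "system_prompt": build_system_prompt(purpose),
        "first_message": AI_DISCLOSURE,
    }
```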
Developers building chatbots for social purposes must take particular care to safeguard against manipulation, actively avoiding language that preys on insecurities, promises unrealistic outcomes, or fosters dependency. Instead, they should encourage users to maintain real-world relationships and social connections; chatbots should not discourage or replace human interaction. Intel.gov offers a list of ethical questions to ask when designing AI tools.
Implement safety measures and user controls
“AI whisperers” are probing the boundaries of AI safeguards, convincing well-behaved chatbots to break their own rules. Rather than taking a single shot at confusing the AI, these malicious actors attack with multiple attempts to raise their success rate. This can lead to chatbots generating harmful content, spreading disinformation, or automating social engineering attacks at scale.
To defend against such attacks, developers must implement robust filters to detect and block offensive language, including swear words, hate speech, and sexually suggestive content. While it is fine to make sensitivity levels customizable based on user preferences, it is essential to incorporate age verification, consent messages, and educational prompts into the user journey. Flagging customizations for human approval adds an extra layer of protection.
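To picture how those pieces fit together, here is a minimal sketch of a layered filter: it blocks flagged categories, respects a user-chosen sensitivity level, and routes relaxed settings or unverified ages to human review. The names, categories, and thresholds (ModerationResult, SENSITIVITY_THRESHOLDS) are illustrative assumptions, not a real moderation API.

```python
# A minimal sketch of a layered content filter with user-adjustable sensitivity.
# Categories, thresholds, and types are illustrative only.

from dataclasses import dataclass

SENSITIVITY_THRESHOLDS = {"strict": 0.3, "standard": 0.6, "relaxed": 0.8}
BLOCKED_CATEGORIES = {"hate_speech", "sexual_content", "profanity"}

@dataclass
class ModerationResult:
    category: str
    score: float  # 0.0 (benign) to 1.0 (clearly violating)

def is_allowed(results: list[ModerationResult], sensitivity: str = "standard") -> bool:
    """Block any message whose score in a blocked category exceeds the user's threshold."""
    threshold = SENSITIVITY_THRESHOLDS[sensitivity]
    return all(
        r.category not in BLOCKED_CATEGORIES or r.score < threshold
        for r in results
    )

def needs_human_review(requested_sensitivity: str, user_age_verified: bool) -> bool:
    """Relaxed settings are treated as a customization that a human must approve."""
    return requested_sensitivity == "relaxed" or not user_age_verified
```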
In cases where unwanted content slips through the cracks, easy-to-use mechanisms for users to report inappropriate behavior should be in place, with a team on hand to investigate these reports promptly. The reports must then be fed back into the AI’s training data to prevent the chatbot from generating similar offensive or biased content going forward.
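A report button is only useful if the reports go somewhere. The sketch below, again with hypothetical names and an in-memory store standing in for a real ticket queue and labeled dataset, traces that loop: a user report is queued, a human reviewer confirms or dismisses it, and confirmed cases are exported for the moderation and fine-tuning pipeline.

```python
# A minimal sketch of a report-and-feedback loop. The in-memory lists stand in
# for a real ticket queue and training dataset; names are illustrative.

import json
from datetime import datetime, timezone

REPORT_QUEUE: list[dict] = []       # reports awaiting human review
TRAINING_EXAMPLES: list[dict] = []  # confirmed violations fed back into training/eval data

def submit_report(conversation_id: str, message: str, reason: str) -> None:
    """Called from the 'Report' button in the chat UI."""
    REPORT_QUEUE.append({
        "conversation_id": conversation_id,
        "message": message,
        "reason": reason,
        "reported_at": datetime.now(timezone.utc).isoformat(),
    })

def review_report(report: dict, confirmed: bool) -> None:
    """A human reviewer confirms or dismisses the report; confirmed cases become training data."""
    REPORT_QUEUE.remove(report)
    if confirmed:
        TRAINING_EXAMPLES.append({"text": report["message"], "label": report["reason"]})

def export_training_examples(path: str) -> None:
    """Periodically exported and merged into the moderation / fine-tuning pipeline."""
    with open(path, "w") as f:
        json.dump(TRAINING_EXAMPLES, f, indent=2)
```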
Educate users with appropriate pop-ups
Developers must advocate for transparency about the capabilities and limitations of chatbots, educating users about potential risks. As with OpenAI’s approach, this can take the form of colored warning notifications that appear in the chat user interface.
Developers can use these spaces to offer recommendations to users, such as the ones below (a sketch of how such pop-ups might be triggered follows the list):
- Why it is important to maintain real-world connections: Humans can offer genuine empathy, understanding, and support based on shared experiences, whereas AI can only simulate these responses. It is essential that users continue developing communication and conflict-resolution skills, and the ability to read non-verbal cues. AI lacks the complexity of social dynamics, and developers must make users aware of these limitations.
- How to set boundaries: The Center for Humane Technology suggests turning off notifications, reducing harmful apps, scheduling tech-free blocks of time, and charging devices outside of bedrooms to help reduce attachment to AI tools. Habit stacking is another way to promote healthy routines, where users complete an activity that benefits their personal or social development before signing into devices.
- Be mindful of expectations: Users must remember that AI boyfriends are designed to be idealized partners, and real-life relationships require effort, compromise, and understanding.
- Protect your privacy: Remind users to be cautious about sharing personal information with the AI and make sure they understand the platform’s data privacy policies.
- Seek support when needed: If users find themselves struggling with emotional dependence or other issues related to their AI companion, pop-up messages can suggest they seek advice from a friend or family member, or include links to support services.
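To make this concrete, here is a minimal sketch of how a chatbot might decide which of these nudges to surface, based on simple usage signals such as session length, weekly hours, and the number of content warnings already shown. The thresholds and messages are illustrative assumptions, not recommendations from any provider.

```python
# A minimal sketch of choosing an educational nudge to show in the chat UI.
# Thresholds and wording are illustrative only.

from datetime import timedelta

NUDGES = {
    "real_world": "AI can simulate empathy, but real-world connections matter. Who could you reach out to today?",
    "boundaries": "You've been chatting for a while. Consider scheduling some tech-free time tonight.",
    "support": "If this companion is becoming hard to step away from, a friend, family member, or support service can help.",
}

def pick_nudge(session_length: timedelta, weekly_hours: float, content_warnings: int) -> str | None:
    """Return the most relevant nudge for this moment, or None if no nudge is needed."""
    if weekly_hours > 35 or content_warnings >= 3:
        return NUDGES["support"]       # heavy use or repeated violations: point to real support
    if session_length > timedelta(hours=2):
        return NUDGES["boundaries"]    # long single session: suggest a break
    if session_length > timedelta(minutes=45):
        return NUDGES["real_world"]    # moderate session: gentle reminder about real relationships
    return None
```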
As we saw in the New York Times article, users can choose to ignore these warnings at their own discretion, but it is important that developers design chatbots to make users aware of AI’s limitations and psychological impacts. By addressing these challenges, companies can ensure the safe usage of their chatbots on Valentine’s Day.