Snapchat adds new safeguards around its AI chatbot


Snapchat is launching new tools including an age-appropriate filter and insights for parents to make its AI chatbot experience safer.

Days after Snapchat launched its GPT-powered chatbot for Snapchat+ subscribers, a Washington Post report highlighted that the bot was responding in an unsafe and inappropriate manner.

The social giant said that after the launch, it learned people were trying to “trick the chatbot into providing responses that do not conform to our guidelines.” In response, Snap is launching a few tools to keep the AI’s responses in check.

Snap has incorporated a new age filter that lets the AI know users’ birthdates so it can supply age-appropriate responses. The company said the chatbot will “consistently take their age into consideration” when conversing with users.
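For illustration, here is a minimal sketch of how an age filter like this might work, assuming the app stores a birthdate and injects an age-aware instruction into the model’s system prompt; the function names and prompt wording are hypothetical, not Snap’s actual implementation.

```python
# Hypothetical sketch of an age filter: compute the user's age from a stored
# birthdate and steer the model with an age-aware system prompt.
from datetime import date

def age_from_birthdate(birthdate: date, today: date | None = None) -> int:
    today = today or date.today()
    # Subtract one year if the birthday hasn't happened yet this year.
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def system_prompt_for(birthdate: date) -> str:
    age = age_from_birthdate(birthdate)
    if age < 18:
        return (f"The user is {age} years old. Keep every response "
                "age-appropriate and decline mature topics.")
    return f"The user is {age} years old."

print(system_prompt_for(date(2008, 6, 15)))
```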

In the coming weeks, Snap also plans to give parents and guardians more insight into their children’s interactions with the bot through Family Center, which launched last August. The new feature will show whether teens are communicating with the AI and how often. Both the guardian and the teen need to opt in to Family Center to use these parental controls.

In a blog post, Snap emphasized that the My AI chatbot is not a “real friend” and that it relies on conversation history to improve its responses. Users are also notified about data retention when they start a chat with the bot.

The company said that only 0.01% of the chatbot’s responses were in “non-conforming” language. Snap counts as “non-conforming” any response that includes references to violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or the marginalization of underrepresented groups.

The social network said that in most cases, these inappropriate responses were the result of the bot parroting whatever users said. It also noted that it will temporarily block access to the bot for users who misuse the service.

“We will continue to use these learnings to improve My AI. This data will also help us deploy a new system to limit misuse of My AI. We are adding OpenAI’s moderation technology to our existing toolset, which will allow us to assess the severity of potentially harmful content and temporarily restrict Snapchatters’ access to My AI if they misuse the service,” Snap said.
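OpenAI’s publicly documented moderation endpoint scores text across categories such as hate, violence, and sexual content. The sketch below shows one way severity-based gating along these lines could look, using the openai Python SDK (v1+); the threshold value and the restrict_access helper are hypothetical illustrations, not Snap’s production system.

```python
# Hypothetical sketch: score a message with OpenAI's moderation endpoint and
# temporarily restrict chatbot access when the severity crosses a cutoff.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def max_severity(text: str) -> float:
    """Return the highest score the moderation model assigns across categories."""
    result = client.moderations.create(input=text).results[0]
    scores = result.category_scores.model_dump()  # {category: score in [0, 1]}
    return max(scores.values())

def restrict_access(user_id: str) -> None:
    # Hypothetical stand-in for temporarily suspending a user's My AI access.
    print(f"Temporarily restricting My AI access for {user_id}")

def handle_message(user_id: str, text: str) -> None:
    if max_severity(text) > 0.9:  # hypothetical misuse cutoff
        restrict_access(user_id)
```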

Given the rapid proliferation of AI-powered tools, many people are concerned about their safety and privacy. Last week, an ethics group called the Center for Artificial Intelligence and Digital Policy wrote to the FTC, urging the agency to halt the rollout of OpenAI’s GPT-4, accusing the startup’s technology of being “biased, deceptive, and a risk to privacy and public safety.”

Last month, Senator Michael Bennet also wrote a letter to OpenAI, Meta, Google, Microsoft, and Snap expressing concerns about the safety of generative AI tools used by teens.

It’s apparent by now that these new chatbot models are susceptible to harmful input and, in turn, can produce inappropriate output. While tech companies may want to roll these tools out quickly, they will need to ensure there are enough guardrails to keep the chatbots from going rogue.


