Character AI, the popular platform that allows users to chat with customizable AI-powered characters, is introducing new parental supervision tools aimed at improving teen safety. This update follows growing legal pressure and criticism over the platform’s handling of minors’ online safety.
The new feature offers parents or guardians weekly email summaries, providing key insights into their teen’s activity on the app. These reports include the daily average time spent on both the app and web versions, detailed time spent interacting with each character, and a list of the top characters their teens engaged with throughout the week.
Character AI emphasized that this tool is designed to give parents a better understanding of their teen’s engagement habits. However, the company clarified that parents won’t have direct access to read any of the private conversations their teens have with AI characters, keeping user privacy intact.
Taking a subtle jab at competitors, Character AI highlighted in its press release that it managed to roll out this feature far quicker than other platforms grappling with similar safety concerns.
This move builds on several safety features the startup launched last year. These include a dedicated model specifically designed for users under 18, usage time notifications, and clear disclaimers reminding users that they are interacting with AI-generated characters. Additionally, the platform deployed content classifiers that block sensitive or harmful content both in what teens type and in what the AI generates.
Earlier this year, Character AI faced a lawsuit claiming the platform contributed to a teen's suicide. In response, the company filed a motion to dismiss the case, citing First Amendment protections.
By introducing these parental insight tools, Character AI hopes to strike a balance between user safety and privacy while addressing growing concerns about the risks teens face when interacting with AI-driven characters online.