Key Points
- Character.AI emphasizes its ongoing commitment to user safety, particularly for minors, with new policies and tools.
- Character.AI has significantly invested in expanding its Trust & Safety team, including hiring leadership roles to bolster safety.
- A pop-up resource directs users to the National Suicide Prevention Lifeline in response to inputs related to self-harm or suicide.
- Enhanced detection and moderation systems, backed by industry-standard and custom blocklists, remove characters that violate platform policies.
Character.AI has announced enhanced safety measures to improve user experience and security, especially for users under 18. In a recent update, the company highlighted initiatives from the past six months and previewed upcoming changes intended to keep the platform safe and engaging for all users.
Over the past months, the company has invested significantly in building its trust and safety team. Character.AI has appointed a Head of Trust and Safety, a Head of Content Policy, and expanded its engineering support for safety functions. These additions underscore the company’s proactive approach to enhancing platform safety through strengthened internal processes and a dedicated team.
Character.AI has also integrated a pop-up resource that is triggered by user inputs related to self-harm or suicide. The pop-up directs users to the National Suicide Prevention Lifeline, providing accessible support in sensitive situations.
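The update does not say how such inputs are detected; production systems typically rely on trained classifiers rather than keyword matching. As a minimal illustration only, the Python sketch below uses simple pattern matching, and every name in it (`SELF_HARM_PATTERNS`, `LIFELINE_MESSAGE`, `check_user_input`) is hypothetical.

```python
import re

# Hypothetical patterns; a production system would use a trained
# classifier, not a keyword list.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
]

LIFELINE_MESSAGE = (
    "If you or someone you know is struggling, the National Suicide "
    "Prevention Lifeline is available 24/7. Call or text 988."
)

def check_user_input(text: str) -> str | None:
    """Return a crisis-resource message if the input matches any pattern."""
    lowered = text.lower()
    for pattern in SELF_HARM_PATTERNS:
        if re.search(pattern, lowered):
            return LIFELINE_MESSAGE
    return None
```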
Looking ahead, Character.AI is preparing to roll out several new safety and product features to further strengthen the platform. For minors, the platform will implement changes to its large language model (LLM) designed to limit exposure to sensitive or suggestive content, tailoring the experience for younger users. The company is also strengthening its systems for detecting, responding to, and intervening on user inputs that breach its Terms of Service or Community Guidelines.
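The update does not detail how this age-tailored behavior will work; one plausible pattern is routing accounts flagged as under 18 to stricter generation and moderation settings. The sketch below illustrates that idea; `SafetyConfig`, its fields, and the threshold values are all assumptions, not Character.AI's implementation.

```python
from dataclasses import dataclass

@dataclass
class SafetyConfig:
    """Hypothetical per-request safety settings."""
    moderation_threshold: float  # lower = stricter filtering
    allow_suggestive: bool

ADULT_CONFIG = SafetyConfig(moderation_threshold=0.8, allow_suggestive=True)
MINOR_CONFIG = SafetyConfig(moderation_threshold=0.5, allow_suggestive=False)

def select_safety_config(user_age: int) -> SafetyConfig:
    """Route users under 18 to the stricter configuration."""
    return MINOR_CONFIG if user_age < 18 else ADULT_CONFIG
```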
A new disclaimer will appear in each chat to remind users that the AI is not a real person, setting clear expectations about the nature of interactions on the platform. Users will also be notified when a chat session exceeds an hour, though they remain free to continue if they wish.
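As a rough illustration of the session-length reminder, the sketch below checks elapsed time against a one-hour threshold and returns a notice without interrupting the chat; the function name, message text, and `session_started_at` parameter are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

SESSION_NOTICE_AFTER = timedelta(hours=1)

def session_notice(session_started_at: datetime) -> str | None:
    """Return a reminder once a chat session passes the one-hour mark.

    The user is only notified, never cut off; they can keep chatting.
    """
    elapsed = datetime.now(timezone.utc) - session_started_at
    if elapsed >= SESSION_NOTICE_AFTER:
        return "You've been chatting for over an hour. Just checking in!"
    return None
```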
Character.AI has also enhanced its moderation of user-generated characters to proactively detect and block violative content. The platform filters content that may breach its policies using both industry-standard and custom blocklists. It likewise complies with the Digital Millennium Copyright Act (DMCA), removing reported characters that infringe copyright or violate other policies.
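The announcement does not specify how the blocklists are applied; a common approach is to screen a character's name and description against the combined lists before the character is published. The sketch below illustrates that approach with placeholder blocklist entries and an assumed `violates_blocklist` helper.

```python
# Placeholder blocklists; real systems combine industry-standard lists
# (e.g., from safety vendors) with custom, platform-specific entries.
INDUSTRY_BLOCKLIST = {"blocked term a", "blocked term b"}
CUSTOM_BLOCKLIST = {"removed character name"}

def violates_blocklist(name: str, description: str) -> bool:
    """Screen a user-generated character against the combined blocklists."""
    text = f"{name} {description}".lower()
    return any(term in text for term in INDUSTRY_BLOCKLIST | CUSTOM_BLOCKLIST)
```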
Character.AI has recently removed a number of flagged characters from the platform and added them to its custom blocklists. Users may notice that chat histories with these characters are no longer accessible, a measure that reflects the platform’s commitment to maintaining a safe and compliant environment.