Meta Platforms announced on Tuesday that it is significantly broadening its “Teen Account” safety protocols, extending protections to all 27 European Union member states and introducing the framework to Facebook in the United States. This move follows intensifying pressure from global regulators and lawmakers to address concerns regarding teen mental health, online abuse, and the proliferation of AI-generated harmful content.
The expansion comes amid a wave of legal and regulatory challenges, including a recent lawsuit from the state of New Mexico seeking billions in fines and a total overhaul of Meta’s safety features. In response, Meta is leaning heavily on proprietary technology launched last year that proactively identifies younger users who may have provided false adult birthdates.
The company detailed that it will utilize advanced artificial intelligence to scrutinize profiles for contextual clues that suggest a user is underage. Beyond simple age verification, these AI tools are designed to detect and block circumvention attempts by suspected minors trying to create new accounts. Following the U.S. launch on Facebook, Meta plans to roll out these enhanced AI detection tools in the United Kingdom and the European Union by June.