New AI-Powered Age Assurance Measures to Place Teens in Age-Appropriate Experiences
We want young people to have safe, positive experiences online. That’s why we automatically place teens in default, age-appropriate experiences, like Teen Accounts. Today, we’re sharing updates on the age assurance technology we use to help ensure teens are in the right experiences for their age. This includes a deeper look at our ongoing work to strengthen underage enforcement, including the addition of AI visual analysis and other advancements; expanded protections for teens we suspect of misrepresenting their age, on Instagram in the EU and Brazil and on Facebook in the US; and our ongoing efforts to help parents talk to their teens about providing the correct age online.
For over a decade, we’ve built tools, features, and resources to help teens have safe, age-appropriate experiences on our apps. This includes launching Teen Accounts on Instagram, Facebook, and Messenger with built-in protections that limit who can contact teens and the content they see. We’ve also revamped our content policies to automatically place teens under 18 into a 13+ content setting.
To make sure teens on our apps are placed in these default experiences, we need to know their age. However, knowing someone’s age online is a complex, industry-wide challenge. That’s why we continue to invest heavily in age assurance, including using sophisticated technology to find people we believe are teens, even if they list an adult birthday.
Continuing to Strengthen Underage Enforcement
We require everyone to be at least 13 to use Instagram or Facebook. For years, we’ve worked to find and remove accounts that belong to those we believe are underage. Today, we’re providing more detail on our ongoing efforts to develop advanced AI that detects underage accounts, including the use of visual analysis to look beyond simple admissions of age.
This includes using AI technology to analyze entire profiles for contextual clues — such as birthday celebrations or mentions of school grades — to determine if an account likely belongs to someone underage. We look for these signals across various formats, like posts, comments, bios, and captions, and we’re continuing to expand this technology across additional parts of our apps like Instagram Reels, Instagram Live, and Facebook Groups. If we determine an account may be underage, it will be deactivated and the account holder will need to provide proof of age through our age verification process to prevent their account from being deleted.
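Meta has not published implementation details for this system, but the general idea of scanning profile text for contextual age clues can be sketched as follows. Everything here is hypothetical: the function names, the specific patterns, and the signal labels are illustrative only, not Meta's actual detection logic.

```python
import re

# Hypothetical patterns for contextual age clues, as an illustration of
# scanning posts, comments, bios, and captions. Not Meta's actual system.
UNDERAGE_PATTERNS = [
    (re.compile(r"\bhappy (\d{1,2})(?:st|nd|rd|th) birthday\b", re.I),
     "birthday_mention"),
    (re.compile(r"\b(6th|7th|8th) grade\b", re.I),
     "school_grade_mention"),
    (re.compile(r"\bmiddle school\b", re.I),
     "school_level_mention"),
]

def scan_profile_text(texts):
    """Collect contextual age signals found across an account's text."""
    signals = []
    for text in texts:
        for pattern, label in UNDERAGE_PATTERNS:
            if pattern.search(text):
                signals.append(label)
    return signals
```

In a real system, signals like these would feed a downstream model alongside many other features rather than triggering enforcement on their own.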
We’re also adding visual analysis as a new technique to aid our detection efforts. This technology allows our AI to scan photos and videos for visual clues about a person’s age that text might miss. We want to be clear: this is not facial recognition. Our AI looks at general themes and visual cues, such as height or bone structure, to estimate someone’s age range; it does not identify the specific person in the image. By combining these visual insights with our analysis of text and interactions, we can significantly increase the number of underage accounts we identify and remove.
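The distinction drawn above — estimating an age range rather than identifying a person — can be made concrete with a small sketch. This is an assumption-laden illustration, not Meta's implementation: the type, threshold, and routing labels are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    # An estimated age range only; deliberately no identity fields,
    # mirroring the difference between age estimation and facial recognition.
    low: int
    high: int

def classify_by_estimate(estimate: AgeEstimate, threshold: int = 18) -> str:
    """Route an account based on an estimated age range (hypothetical logic)."""
    if estimate.high < threshold:
        return "likely_underage"   # candidate for enforcement / verification
    if estimate.low >= threshold:
        return "likely_adult"
    return "uncertain"             # fall back to text and interaction signals
```

The "uncertain" branch reflects the post's point about combining signals: when visual analysis alone is inconclusive, other evidence has to decide.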
In addition, we’re making it easier for our community to report underage accounts by simplifying our reporting flows. This includes making it easier to submit a report both in our app and on our Help Center. To handle these reports more effectively, we’re supplementing our human review teams with AI models that apply consistent evaluation criteria to every report. In our testing, this AI-driven review delivers higher accuracy and faster resolutions than human review alone, ensuring that these accounts are addressed with more speed and reliability.
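The hybrid review described above — AI applying consistent criteria to every report, with humans still in the loop — is a common triage pattern. A minimal sketch, with a purely hypothetical confidence threshold:

```python
def triage_report(ai_confidence: float, ai_verdict: str,
                  review_threshold: float = 0.9):
    """Auto-resolve high-confidence AI verdicts; escalate the rest.

    Hypothetical illustration of AI-assisted report triage, not Meta's
    actual pipeline. Returns (route, verdict).
    """
    if ai_confidence >= review_threshold:
        return ("auto", ai_verdict)        # consistent, fast resolution
    return ("human_review", None)          # reviewer makes the final call
```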
Finally, we are working to strengthen our circumvention measures to prevent new accounts from users we suspect are underage.
While many of these AI improvements are available worldwide, certain advanced features — like visual analysis — are currently available in select countries as we work toward a broader rollout.
Expanding Technology to Place Teens in Teen Account Protections
Since 2024, we’ve enrolled hundreds of millions of teens on Instagram, Facebook, and Messenger into Teen Accounts. To ensure as many teens as possible are enrolled in these built-in protections, last year we announced technology designed to proactively find accounts we suspect belong to teens, even if they list an adult birthday, and place them in Teen Account protections. We successfully launched this technology on Instagram in the US, Australia, Canada, and the UK, placing millions of accounts into these age-appropriate protections.
Now we are expanding our use of this technology to the 27 EU member states and Brazil. This will ensure more people we suspect are teens are proactively placed into age-appropriate protections on Instagram. We’re also expanding to Facebook in the US for the first time, followed by the UK and the EU in June. We aim to expand this use of the technology on Instagram globally throughout the year.
Continuing to Empower Parents
Parents are key partners in keeping teens safe online, and we appreciate their support in helping us determine the age of their teens. This month, we will begin sending notifications to parents in the US on Facebook and Instagram with information about how to check and confirm their teens’ ages on our apps. The notifications will also include tips on how to have constructive conversations with their teens about the importance of providing the correct age online. Parents globally can access these tools and resources to support their family’s digital experiences through our Family Center.
These updates add to our existing age assurance measures, which include estimating age based on someone’s activity and reviewing user reports. If we suspect someone is misrepresenting their age to avoid our protections, for example by attempting to change their birthday from under 18 to over 18, we require them to verify their age using an ID or Yoti’s facial age estimation tools to complete the change.
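The rule described above — a birthday edit that would move an account from under 18 to 18 or over triggers verification — is simple enough to sketch directly. The function name and signature are hypothetical; only the under-18-to-over-18 condition comes from the post.

```python
from datetime import date

def requires_age_verification(old_birthday: date, new_birthday: date,
                              today: date) -> bool:
    """True when a birthday change would move an account from under 18
    to 18+, in which case ID or facial age estimation is required
    (sketch of the rule described in the post; not Meta's actual code)."""
    def age(birthday: date) -> int:
        # Subtract one year if this year's birthday hasn't happened yet.
        return (today.year - birthday.year
                - ((today.month, today.day) < (birthday.month, birthday.day)))
    return age(old_birthday) < 18 <= age(new_birthday)
```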
A Simpler Policy Approach to Age Assurance
While we’re investing heavily in our own age assurance technology, we know that no single company can solve this challenge alone. We believe legislation should require app stores to verify age and provide apps and developers with this information so that they can provide age-appropriate experiences, like Teen Accounts. Importantly, this approach is supported by 88% of US parents.
The fact is, all youth safety laws require knowing people’s ages. Requiring parental approval and age verification at the App Store/OS level provides a centralized, consistent, and privacy-preserving place for age assurance, rather than requiring every individual app to comply with different rules. It also helps ensure that the many apps teens use offer the same standard of protection.