In 2021, former Meta employee Frances Haugen exposed internal documents showing that the company knew its platforms were having adverse effects on young users, particularly their mental health. Since then, parents and policymakers around the world have been galvanized by instances in which children were harmed or engaged in self-harm, at least in part because of their experience online.
In the United States, parents have mobilized and called for changes that would hold companies accountable and make online spaces safer for young people. Proposed solutions to online safety are varied, ranging from technical to educational. But some of the most prominent proposals, while well-intentioned, are advancing measures that imperil the rights, privacy, and security of all users—both children and adults.
Given the global reach of social media companies, the absence of global governance institutions, and the lack of international consensus on how to regulate them, individual nation states are taking different approaches to keeping youth safe online. Their methods range from defining the legal obligations of private entities to mandating particular designs or parental consent features.
For countries like Mexico, which operate with limited regulatory frameworks for holding social media companies accountable for the content children can access online, the technical challenges and policy lessons from the United States and other countries could help guide a more informed national dialogue. Mexican policymakers should favor focused solutions to specific problems, rather than broad approaches to “kids’ safety” in general; explore socio-technical interventions and not just simple-sounding tech solutions; and avoid overbroad approaches to age verification in favor of the narrow application of privacy-and-security protective methods to content that is already legally age-restricted.
Countries around the world are experimenting with various approaches to address children’s safety online—but “children’s safety” is a broad category to contend with, and solutions are often just as complex as the issues. Sweeping legislative and policy efforts that promise to “fix” children’s safety online often start with a broad set of problems they are trying to address.
These problems range from the sharing of non-consensual intimate images online and grooming by pedophiles to excessive use of social media and even struggles with loneliness. Many of these online safety issues require a nuanced approach that acknowledges and addresses the complex socioeconomic, community, and tech-based contributing factors. Addressing youths’ well-being also requires recognizing the vast breadth of social, developmental, and online contexts of the millions of children, teens, young adults, and families across the globe. The failure to narrowly identify specific problems and proportionate solutions is also accompanied by another trend—the search for largely technical solutions.

Some leading examples come from the United States and the United Kingdom, which have sought to impose a “duty of care” on companies, making them legally responsible for failures to prevent minors from accessing harmful content. Alongside the Online Safety Act, the UK has also put forth an age-appropriate design code outlining standards to better protect children online. The code has inspired similar initiatives in the United States, the European Union, Canada, and Indonesia, and it targets the techniques used to persuade young people to spend more time online, shape the content they are encouraged to engage with, and tailor the advertisements they see. This design code is further supported by the European Union’s sweeping Digital Services Act, which established transparency and reporting mandates, content moderation policies, and increased parental controls. As a last resort, some countries are going even further to outright ban youth access. In 2024, Australia banned social media for all users under 16 years old, while countries like Norway and US states like Florida and Utah are also raising age requirements for social media use.
Yet many of these solutions, both tech- and policy-based, often fail to address the root causes of the problems they target, and some may result in more harm than good. Online age verification is a prime example. All over the world, online age verification has become a central component of technical solutions because, in the absence of answers to complex challenges, it seems most straightforward to restrict access by age.
New America’s Open Technology Institute works to ensure that digital technologies benefit all people and serve core internet principles like openness, privacy, and security. Over the last two years, we have taken a close look at age verification, starting with a comprehensive 2024 report on the tradeoffs inherent in various methods of age verification. What we have learned is that while age verification might seem like a simple thing to implement in a store, it is far more technically difficult online.
Online age verification also opens a series of questions about how we define age-appropriate content, the scope of the privacy and speech rights we afford children (and at what age), and where in the digital ecosystem age is best verified. These are not simple questions to address in any context, but when we layer in regional differences in governance, rights, and culture, we are left with a worldwide hodgepodge of solutions that puts global internet freedom at risk and still leaves young people vulnerable.
Currently, websites, apps, and platforms use a variety of methods to try to determine a person’s age. These methods are collectively referred to as "age assurance" and fall within four general categories. "Age gating" simply entails asking users to self-declare their date of birth or age to access content. "Age estimation" infers a user’s age by analyzing factors such as profile activity and history, or even a selfie. "Third-party verification" relies on another entity to confirm a user’s age, such as by referencing linked accounts, having parents or other users vouch for a minor’s age, or inspecting identifiers such as a credit card. Finally, "age verification," which offers the highest level of certainty, usually relies on government identification or biometrics.
These age assurance methods offer varying levels of accuracy—but greater accuracy usually comes at the cost of accessibility, privacy, and data security. To determine a user’s age with certainty, one needs more personal information from that individual. Unlike at a restaurant or grocery store, a person online doesn’t get to put their ID back in their pocket. Multiple entities might handle digital copies of their ID card, face scan, or other identifying information. This creates significant privacy and data security vulnerabilities and raises concerns that governments could attempt to acquire and misuse this information.
At the same time, age verification requirements can have larger implications for people’s right to speech and access to content. Those without accepted forms of ID or those unable to obtain accepted ID may be blocked from accessing online spaces that they otherwise could. This type of infringement on user rights has fueled litigation challenging age-verification mandates in the United States, France, and Germany.
Privacy- and security-protective age verification, or “checking the age of an internet user, without necessarily needing to know their identity,” is possible. Many national governments are experimenting with privacy-preserving methods of verifying age. Following the passage of the Digital Services Act, the European Commission established a task force on age verification that outlined ten requirements for age assurance solutions. These principles put forth a privacy-preserving and secure method for age verification, which is currently being developed as part of the Commission’s European-wide EU Digital Identity framework. Across Europe, several countries (including France and Spain) are moving forward with privacy-preserving age verification requirements. Meanwhile, Australia is conducting an age assurance technology trial to examine the effectiveness and maturity of different options for age verification.

Other countries are trying to implement age assurance laws with currently available, less privacy-protective solutions. Different cultural and legal landscapes will inform how jurisdictions advance age verification requirements. Each jurisdiction’s approach is also shaped by its existing digital rights and data protection frameworks. In the EU, the GDPR serves as the foundation for future initiatives. Similar general data protection laws, such as those in Brazil, India, Nigeria, and South Korea, outline how users’ personal data can and cannot be used. In jurisdictions without an overarching data protection scheme, other legislation may be applicable. For example, while the US does not have a federal data protection law, the Children’s Online Privacy Protection Act creates special data protections for users under the age of thirteen.
In addition, age verification requirements are shaped by jurisdictional concerns such as minors’ access to social media, sexual content, online gambling, and gaming. These concerns, alongside the recognized rights of children online and the scope of freedom of speech, will shape a jurisdiction’s approach to youth online safety. The UN Convention on the Rights of the Child is often referenced in youth online safety initiatives from signatory countries such as Australia, the United Kingdom, Sweden, France, and Brazil. In addition, countries like Mexico have their own legislation outlining the rights of the child in digital spaces to consider. Further, age verification requirements may face more scrutiny in places with a more expansive approach to freedom of speech, such as the United States.
Finally, national ID initiatives will shape how a country pursues age verification mandates. Countries with national ID or digital ID schemes may find it easier to implement age verification mandates using hard identifiers than countries where formal ID is not widely accessible.
In Western countries, where the bulk of youth safety online initiatives are gaining traction, measures are largely focused on mental health impacts, access to age-inappropriate content, and child pornography and nonconsensual images. However, in Mexico, the youth online safety conversation also revolves around the “digitization” of harms, or how digital spaces can amplify physical harms to young people. While it is widely reported that in Mexico, structural factors such as poverty, family violence, and socioeconomic inequality already drive the exploitation of children in myriad ways, the data shows that digitization is exacerbating these harms.
According to the Internet Seguro para Tod@s report by the Asociación de Internet MX (AIMX), approximately 80% of children and adolescents in Mexico use the internet daily. This statistic, derived from the 2020 National Survey on Availability and Use of Information Technologies in Households (ENDUTIH), underscores the significant digital engagement among minors and may well be an underestimate. Data from UNAM’s School of Social Work shows a fivefold increase in child slavery and exploitation, from pre-pandemic estimates of 30,000 minors being recruited into forced activities to 150,000 reported incidents after 2021. This data highlights the urgent need to strengthen digital protections for children in ways that lessen the risk of abuse, exploitation, or slavery. Several prominent studies and reports have assessed Mexico’s complex set of federal and state laws, along with guidelines from regulatory bodies aimed at protecting children online.
While the National Institute of Transparency, Access to Information and Protection of Personal Data (INAI) has played a key role in ensuring government transparency and personal data protection, the current regulatory landscape stands to change dramatically, in great part due to the constitutional reform of November 2024 known as "organic simplification."
That reform dissolved seven autonomous constitutional bodies, including the INAI, transferring responsibilities regarding access to information, transparency, and personal data protection to a body within the federal public administration, which will assume responsibility for protecting personal data held by private and public entities. Subsequently, in February 2025, President Sheinbaum proposed a federal Law on the Protection of Personal Data Held by Private Parties (LFPDPPP), which follows the same principles, rights, procedures, and sanctions of a similar 2010 law, except for certain specifications. Namely, the new law states that the Ministry of Anticorruption and Good Governance would become the sole competent authority to enforce the LFPDPPP and, therefore, to protect personal data held by private entities. This is a body under the executive branch, which lacks budgetary and functional autonomy. This change should be considered a setback since the INAI had a constitutionally granted autonomous status.
Various studies on this matter have focused recommendations around three core areas: 1) standardization of data collection; 2) greater legislative harmonization and a regulatory framework with explicit language defining criminal offenses or cybercrimes committed against children; and 3) public policies for the prevention of, response to, and attention to cyber victimization. As Mexico considers measures to address these challenges, its policymakers and civil society should heed specific lessons informed by the experience of the United States and other countries.
First, Mexican policymakers should avoid trying to legislate broad solutions to kids’ safety challenges and instead narrowly define the problems they wish to address. Second, they should consider social and socio-technical solutions, not just technical “quick fixes” that often put data security, privacy, and human rights at risk. Many “tech” problems require a mix of solutions, including non-technological ones. For example, federal and local law enforcement in the United States lack adequate resources to investigate tips about child sexual abuse material online. Investing in these agencies’ ability to improve the report-to-prosecution pipeline is not a tech solution, but it would urgently help to address a societal problem exacerbated by technology.
Third, as countries around the world experiment with age verification, Mexican federal and state governments should remember that blanket online age verification laws are a recipe for unchecked surveillance and cybersecurity vulnerabilities. Instead, Mexico should take a narrowly tailored approach to age verification in which strict age verification with government identifiers or biometrics is only applied to already legally age-restricted sales and activities. Examples include online gambling, buying alcohol, or accessing pornography. In these cases, requiring the use of privacy-protective techniques like encryption and zero-knowledge proofs can limit the risks of data breaches, identity theft, and the potential misuse of sensitive data by governments.
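The privacy-protective pattern described above can be sketched in code: a trusted issuer (for example, a government ID service) attests only to a yes/no age claim, and the relying site verifies that attestation without ever seeing a name or birthdate. This is a minimal conceptual illustration, not a production design; the function names are hypothetical, and the HMAC shared secret stands in for the asymmetric signatures or zero-knowledge proofs a real deployment would use.

```python
import hashlib
import hmac
import json
import secrets

# Stand-in for the issuer's signing key. A real system would use an
# asymmetric keypair (or a zero-knowledge proof), so relying sites
# could verify claims without being able to forge them.
ISSUER_KEY = secrets.token_bytes(32)

def issue_attestation(over_18: bool) -> dict:
    """Issuer side: sign a claim that contains ONLY the boolean age
    assertion and a random nonce -- no identity data at all."""
    claim = json.dumps({"over_18": over_18, "nonce": secrets.token_hex(8)})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(token: dict) -> bool:
    """Relying-site side: check the attestation's integrity, then read
    the age claim. The site learns 'over 18 or not' and nothing else."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # tampered or forged token
    return json.loads(token["claim"])["over_18"]

token = issue_attestation(True)
print(verify_attestation(token))  # True, with no identity disclosed
```

The key design point is data minimization: because the token carries only a boolean, a breach of the relying site cannot leak IDs, birthdates, or face scans, which is precisely the risk that blanket age verification mandates create.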
Safety online is not an easy endeavor; nor is it an individual sport. More civil society tables should be set to bring together these varied countries and approaches to learn about the problems and solutions that we can all take to navigate the web safely and securely.
Sarah Forland is a policy analyst with the Open Technology Institute (OTI) at New America.
Prem Trivedi is the Policy Director of New America’s Open Technology Institute.
Lilian Coral is the Vice President, Technology & Democracy Programs, Head of the Open Technology Institute.