“Siri, what do you look like?” I overheard my intrepid four-year-old ask. Siri promptly told him that she did not have a body, to which my son asked: “Are you invisible? Are you a ghost?” At this point, I decided to enter the conversation. I explained to my children, ages 4 and 6, that Siri is not a person; she’s a machine. They both exclaimed, surprised and wild-eyed: “What!”
Like other parents navigating the digital age, I had decided to hold off on introducing devices like Siri, Alexa, or any other home assistant to my children. I follow the general wisdom, and my own belief, that digital engagement should be limited at this age: I’m not interested in giving them 24/7 access, and I don’t really know how to introduce these tools to my children.
It was an oversight while transitioning to a new phone that led to my kids’ first encounter with Siri, and it was powerful to watch them go from fascination at hearing the voice on the phone respond to their requests to learning about rabies and what an orca skeleton looks like. I found myself feeling excited for them, happy that they had access to all this information at their fingertips. It never occurred to me that my young children would soon be wrestling with existential questions about humans versus machines. That’s what surprised me about their natural curiosity about Siri: who she is and what she looks like.
We are undoubtedly in a new frontier. This will be the first generation that has to make sense, at a very early age, of what it means to engage with adults and machines alike. Jonathan Haidt’s book The Anxious Generation (which urgently needs to be translated into Spanish) has captured the public’s anxiety about this dilemma. Building on mounting reports of children being cyberbullied, exposed to harmful content, and even driven to self-harm, Haidt has tried to show parents and caretakers alike the connection between digital devices and platforms and declining child well-being writ large.

Parents have rallied around this message and pushed for stronger laws to protect children online. Critics, meanwhile, have questioned whether the data really supports Haidt’s claim that it is the devices themselves that cause the harm, rather than a wide array of sociopolitical transformations happening simultaneously. The debate is charged and highly nuanced, and it should push us all to wonder: What if we are asking the wrong question? Yes, today’s kids are anxious, and those who spend more than three hours a day on social media are at heightened risk. But as critics have warned, perhaps it’s not just the screens driving these numbers; this is not the first time we’ve adopted transformative technology, and it’s certainly not the first time humans have faced existential risks. Perhaps it’s the lack of a roadmap for growing up (or parenting) in an age shaped by algorithms, automation, and artificial identity.
Haidt, who admits he is not a clinical psychologist, paints a vivid picture in his book of a childhood transformed, even rewired. He points to the combination of the decline of play-based childhood (exacerbated, he argues, by overprotective parents) and increasing smartphone use as the cause of the childhood mental health epidemic. This assertion has been widely criticized for lacking the foundational evidence to show that these trends are not merely correlated but causally linked.
In her review, Candace L. Odgers, associate dean for research and a professor of psychological science and informatics at the University of California, Irvine, argues that while these assertions about social media can be independently true, there is no evidence that using these platforms is rewiring children’s brains or driving an epidemic of mental illness. I’ve spent the greater part of my career thinking about how governments and the public can leverage technology to build thriving communities. One of my key observations from that work is that we have failed to truly help communities transition into the digital age, with little to no investment in digital literacy or upskilling that could empower communities to harness technology for their own interests.
In this era of uncertainty about the future of our democracies, and the rapid evolution of these technologies, I wonder: Are we lacking roadmaps or frameworks to understand both how to use these technologies and how to retain our sense of self and flourish in the digital age?
The critique of Haidt’s book can serve as an important starting point for answering these questions. Haidt is intentionally using a public health frame to galvanize the public around this problem, but he does not lay out a concrete framework to solve it. He calls for delaying access, producing stronger policy guardrails and norms, and reintroducing play-based philosophies to address the crisis. Like others, I would argue that we need better data. But in this age of artificial intelligence (AI), which layers additional complexities onto traditional social media platforms, we also need better frameworks and mental models for assessing health and well-being. This is where established approaches like the social determinants of health (SDoH) can help us all better define, track, and advocate for the type of engagement with technology that we want for our kids and ourselves.
Evidence-based frameworks can help us move beyond limiting screens to considering, through a digital lens, the socioeconomic factors that significantly influence an individual’s health. As Odgers noted in her critique of Haidt, researchers have found that access to guns, exposure to violence, structural discrimination and racism, sexism and sexual abuse, the opioid epidemic, economic hardship, and social isolation are leading contributors to the sudden rise in suicide mortality in the United States. Many of these factors fall squarely within the SDoH framework. Tools like these not only give us more precise ways of measuring the impact of such factors on youth well-being; they also lend themselves to thinking about the various elements and systems we need to thrive in the digital era. Haidt has helped us recognize that something is wrong. But recognition isn’t enough. We need to build a roadmap for well-being that can be measured, tracked, and designed toward.
If we are serious about supporting the well-being of youth in the digital age, we need to move beyond moral panic and begin designing systems that allow us to thrive and flourish. The SDoH framework is grounded in the idea that wellness doesn’t arise from individual choices alone. The conditions in which we live, play, and work significantly shape our health: economic stability, education access and quality, health care access and quality, neighborhood and built environment, and social and community context are as strongly associated with health outcomes as genetics.
When we examine the digital ecosystem that youth are growing up in, we can look concretely at whether these environments reinforce those pillars of well-being or quietly erode them. The social media platforms that dominate our digital landscape have been shown to thrive on conflict, disconnection, and attention extraction. Per the SDoH framework, we know that in real life, individual well-being rests on stable relationships, safe environments, access to health care and opportunity, and a sense of belonging. So, what if instead of taking kids’ phones away, we built a digital environment that promotes connection, creativity, autonomy, and safety, and helped them navigate it?
The debate over children’s safety online has reached a global turning point. New America’s Open Technology Institute (OTI) has reported that in 2023 and 2024, nearly 100 bills were introduced across the United States requiring greater parental consent, age restrictions, or safety-by-design measures. Many of these laws target youth access to online adult content and sales that are age-gated in real life.
Across the world, countries are also developing legislation and efforts like the UK’s age-appropriate design code, which aims to set design standards that better protect children online. Few, if any, of the dominant models aim to empower young people and their caretakers to navigate the internet safely, or with an eye toward connection, agency, and creativity. I would argue that the solutions to complex challenges are never that simple; in fact, OTI’s research has found that most technical solutions have gaps and huge trade-offs on rights, privacy, and security. Design codes are a great start, but they focus wholly on platforms’ responsibility to prevent something bad from happening, not on users’ ability to thrive and flourish. As generative AI becomes an increasingly important part of this picture, it is more critical than ever to get this right.
Tools powered by generative AI did not enter a neutral environment. By the time ChatGPT was released in November 2022, we already knew, thanks to Facebook whistleblower Frances Haugen, that leading companies like Facebook were aware their platforms were having adverse effects on young users, particularly on their mental health. Since the release of ChatGPT, the world has seen the power of AI to amplify cyberbullying, spread nonconsensual images, and even cause the dissonance and harm that AI “friends” can inflict. Algorithms that prioritize engagement will continue to feed youth ever more extreme and harmful content. AI-powered surveillance tools in schools and communities will continue to fall hardest on the marginalized. Generative AI stands only to blur reality, distort identity, and make it harder for youth to discern what, and who, is real. “Siri, are you a ghost?!”
Regulatory efforts are necessary, but they’re not sufficient. Age-appropriate design codes, transparency requirements, and stronger privacy protections are all part of the answer. But if we want to truly build a digital world in which all children are safe and can flourish, we need a strategy that moves beyond regulation and defines, measures, and tracks the values we build into these technologies from the start.
This means leaning on public health approaches that look at the digital ecosystem cohesively, treat youth not just as users to be protected but as co-designers of the tools that shape their lives, and empower parents and caretakers to support their kids’ navigation of these digital tools. We need to move beyond limiting screen-time hours and ensure that screen time increases kids’ sense of belonging, supports their creativity, advances exploration and opportunity, and builds their resilience. We need a roadmap, or better yet a new north star, for how to navigate the digital era.
We’ve entered an era where the forces shaping childhood—from algorithms to AI tutors—are increasingly invisible and personalized. If we want the next generation to flourish, we can’t settle for whack-a-mole legislation or nostalgic calls to unplug. We need a deeper framework—one that sees youth well-being not just as the absence of harm, but the presence of opportunity, connection, and cognitive freedom.
In a recent piece, I argued that our governance of AI should look less like arms control and more like the Human Genome Project: collective, coordinated, and human-centered. The same principle applies here. We need to define, together, what it means to grow up well in a digital world, and then design toward that goal, across sectors and systems. The kids are not just anxious; they’re signaling something deeper: we haven’t yet built the digital public infrastructure—or the roadmap—that deserves their trust.
Lilian Coral is Vice President of Technology & Democracy Programs and Head of the Open Technology Institute at New America.