What social media should have taught us about generative AI

Australia just banned social media for anyone under 16, and Denmark is moving in the same direction for children under 15. After years of mounting research and growing concern, governments are finally starting to treat social media the way the evidence suggests we should: as something that can genuinely harm developing brains.
We learned something from all those years of watching the impact, from the research connecting social media use to depression and self-harm, from understanding how algorithms are deliberately designed to keep users scrolling. It took a long time, and the lesson came at a real cost to many kids. But we learned.
Now we need to apply what we learned to generative AI, and I’m not seeing it happen.
Extreme cases of harm are already unfolding. From suicides to emotional dependencies, we've seen some scary results. AI companions are designed to maximize engagement, not support healthy development. These are the same patterns we saw with social media, but the companies building these tools aren't implementing the kinds of age-appropriate restrictions and safety guardrails we've finally started demanding from social platforms.
Somehow, our culture has normalized social media access for children far too young. Parents allowed their children to have accounts on platforms built for adults and then acted surprised when the consequences started showing up in pediatric mental health data. We should be sounding louder alarms about early access to full-featured LLMs, but instead, it feels like we're falling into the same pattern all over again.
And here’s what I keep coming back to, because this is really the heart of it for me: the reason the patterns are the same is that the underlying need is the same. Kids who aren’t seeking attention and validation through social media are far less likely to seek it through generative AI. A student who feels seen and valued by the people around them isn’t going to form an unhealthy attachment to a chatbot. A kid who has real relationships and a sense of belonging isn’t going to spiral into dependency on an AI companion.
The tool changes. The need doesn’t.
Which means the skills students need to navigate AI safely are the same skills they’ve always needed for social media:
- Recognizing when something is designed to manipulate them.
- Understanding that validation from a screen will never feel the same as a connection with a real person.
- Knowing when to close the app and be present with the people who are actually in the room.
As educators, we spend a lot of time debating which tools belong in our classrooms. Red light, green light. Acceptable use policies. The latest app that’s either going to revolutionize learning or waste funding. I’ve been in those meetings. I’ve helped write those policies. And they do matter.
But the harder conversation is about the human needs that drive kids toward screens in the first place. The need to be seen. The need to belong. The need for connection that actually feels real.
Those needs don’t get addressed by blocking the right websites or approving the right platforms. They get addressed by people! Teachers who notice when something seems off. Parents who are genuinely present and not just physically in the room. Community members who show up and create spaces where young people feel like they actually matter.
Yes, we absolutely need to hold tech companies accountable. Yes, we need policies that protect children from tools they aren’t developmentally ready to handle. But I think we also need to ask ourselves some uncomfortable questions about the connections we’re not making, the presence we’re not offering, and the kind of humanity that exists beyond any screen.
The tool isn’t always the problem. The tool reveals what’s missing.