
There’s been a shift in the relationship between people and the technology companies that power our lives. For years, we’ve been the ones giving something up. Now we’re the ones making demands. And increasingly, what we’re demanding is trust.
First, it was data. “If the product is free, you’re the product” became our rallying cry. Companies wanted our information: our shopping habits, our location, our connections. Data was valuable. We taught students to read privacy policies and think twice before downloading a free app.
Then it was attention. Having our data wasn’t enough; platforms needed our eyeballs, our hours, our endless scrolling. Algorithms got smarter. The scroll got more infinite. We started teaching students about screen time and persuasive design.
Now I’m watching another shift happen. And this one feels different.
Trust is becoming the new currency.
People aren’t just annoyed anymore. They feel deceived. Deceived by AI images flooding their feeds, betrayed by terms of service that now claim the right to train models on their work, and cheated by features that quietly require signing away their likeness or intellectual property. They’re tired of discovering what they gave up after they’ve already given it. The emotional intensity has changed.
When we worried about data, it was about privacy. When we worried about attention, it was about manipulation. But this? This is about something more fundamental: can I trust what I’m using? The outrage isn’t about losing privacy or time. It’s about feeling like the rules changed without anyone asking.
I think this matters enormously for how we teach digital citizenship moving forward.
What this means for students
For years, digital citizenship has focused on two big ideas: protect your data and manage your attention. Those still matter. But we need to add a third: understand trust. Students need to learn how to evaluate it, protect it, and demand it.
This means teaching students to ask new questions. “Is this website secure?” becomes “How do I know this image is real?” Or “How much time am I spending on this app?” becomes “What am I agreeing to when I click ‘accept’?” And perhaps most importantly: “If I post my writing, my art, my voice, who gets to use it, and for what?”
We also need to help students see that they’re on both sides of this equation. They’re consumers of content that might be AI-generated, but they’re also creators whose work could be scraped without consent. They need to think about authenticity not just when evaluating what others produce, but when deciding how to represent their own work.
The education opportunity
Here’s where I actually feel hopeful. I know, you’re shocked: this from the AI Optimist!
We’ve always taught students to evaluate sources and think critically about what they read. Those lessons aren’t outdated. They’re more relevant than ever.
What’s changed is the stakes. In a world where AI can generate content on almost any topic, students who develop real depth in a subject will stand out. The ones who can speak from experience. The ones who have put in the work and can prove it. Genuine expertise becomes a differentiator, not just a nice-to-have.
The same is true for relationships. When trust is scarce, people gravitate toward those who have demonstrated integrity over time. That’s true for institutions, for platforms, and for people.
We’ve spent years teaching students to be cautious about what they share and mindful of how they spend their time online. Now we need to teach them something harder: how to navigate a world where trust has to be earned, verified, and protected.
The rules have changed (again). And now education has to catch up.