We need to change the conversation about AI in our classrooms.

Often I see an article, graphic, or professional development session addressing artificial intelligence in education with the familiar “red light, green light” approach. You know what I mean – those neat charts categorizing AI uses into “acceptable” and “unacceptable” boxes, or the more nuanced traffic light systems dividing AI usage into red (forbidden), yellow (caution), and green (encouraged) categories.
While these graphics are made with the best intentions, the frameworks behind them miss the mark. They address symptoms rather than the deeper educational purpose we should be focusing on. As an AI optimist, I believe we can do better.
A Simple Question from the Past
During my teaching career, I worked in a school district with an elegantly simple acceptable use policy for technology. It contained just one foundational principle: “All use of the system must support education.”
That’s it. No complex matrices or color-coded charts.
This straightforward principle gave us a compass for navigating the rapidly changing technology landscape of the early 2000s. When students asked whether they could use a new website or tool (YouTube, at the time), I simply asked, “Does it support your education?” It shifted the conversation from policing behavior to purpose-driven decision making. It centered the learning, not the rules. And I found most of my middle school students did a great job of discerning the answer to that question. Pretty soon, they stopped asking.
As AI transforms our educational landscape, I find myself returning to this principle, but with a slight refinement for our current moment:
Does this use of AI limit my learning?
Beyond Cheating and Plagiarism
Much of the current conversation around AI in education fixates on cheating, plagiarism, and the integrity of student work. These are legitimate concerns, but they frame the issue negatively and reactively. They position educators as gatekeepers rather than guides.
Instead of asking, “Is this student cheating?” what if we helped students ask themselves, “Is this use of AI limiting my learning?”
This reframing accomplishes several important things:
- It puts the focus on learning outcomes, not rule compliance.
- It develops student metacognition about their own learning process.
- It acknowledges AI as a tool among many, not a special case.
- It creates space for nuance in different learning contexts.
- It empowers students to make thoughtful choices.
What This Looks Like in Practice
Let’s imagine how this approach might play out across different scenarios:
Writing a first draft: A student uses AI to generate a starting point for an essay about climate change. The AI provides a basic structure and some general points. The student then researches, refines, adds personal insights, and crafts a final product that demonstrates their understanding.
Does this limit learning? Probably not. The student is using AI as a brainstorming tool while still engaging deeply with the material.
Solving math problems: A student inputs homework problems into an AI and copies the solutions without working through them.
Does this limit learning? Absolutely. The student misses the opportunity to develop problem-solving skills and mathematical reasoning.
Language translation: A language student uses AI to check their work after attempting a translation themselves, analyzing differences between their version and the AI’s.
Does this limit learning? No – this reflective practice likely enhances learning by providing immediate feedback.
Research assistance: A student asks an AI to summarize complex scientific articles for a research project instead of reading and synthesizing the material themselves.
Does this limit learning? Yes. The student misses the chance to develop critical reading and synthesis skills central to research.
Teaching the Framework
For this approach to work, we need to explicitly teach students how to evaluate whether an AI use limits their learning. This means:
- Being transparent about learning objectives: Students need to understand what skills and knowledge they’re meant to be developing.
- Modeling thoughtful AI use: Demonstrate how you, as an educator, use AI in ways that enhance rather than replace your thinking.
- Creating reflection opportunities: Ask students to document and reflect on their AI use and its impact on their learning.
- Designing AI-aware assignments: Create tasks that incorporate AI thoughtfully or that require uniquely human capabilities.
- Building a culture of learning, not performance: When grades and outputs matter more than growth and process, students are more likely to use AI in ways that limit learning.
I know some students will still cheat. I know some students will copy the output from ChatGPT into an essay. I get it. But as an AI optimist, I choose to believe that many, many students will choose to learn – when they are included, engaged, and empowered.
The Bigger Picture
This approach isn’t just about managing a new classroom challenge. It’s about preparing students for a world where AI will be integrated into nearly every profession and domain of knowledge. By teaching students to thoughtfully evaluate how technology impacts their learning and cognition, we equip them with a critical skill for their futures.
The question “Does this use of AI limit my learning?” extends beyond the classroom. It becomes a lifelong reflective practice for all of us navigating an increasingly AI-augmented world.
So let’s move beyond the red light, green light frameworks. Let’s stop focusing exclusively on cheating and plagiarism. Instead, let’s equip our students with a more fundamental question that centers their growth as learners in an age of artificial intelligence.
Because ultimately, technology should serve learning, not the other way around.
What do you think about this essential question? Have you found other approaches that effectively guide students in thoughtful AI use? I’d love to hear your experiences and thoughts in the comments.