The Expertise Paradox

What AI Reveals About Learning

Or “Why your excitement about AI might depend on what you don’t know yet”

[Image: two faces in profile, one human in warm reds and oranges, one robotic in cool blues and purples, illustrating the relationship between humans and technology.]

I was talking with a game designer friend who showed me some AI-generated code for a simple game. He scrolled through it with annoyance, pointing out inefficiencies and logic gaps. “This is garbage,” he said. “I could write this better myself. If I used this, I’d have to fix so many problems in the code.”

I recently met a new business owner who was beaming about the prototype app she’d built using generative AI to help with the coding. “This is incredible,” she told me. “I never could have done this on my own.”

Same technology. Completely different reactions. And here’s what’s fascinating—they were both right.

There’s a pattern I’ve been noticing: for experts in a field, AI outputs often feel frustratingly inadequate. Experts see the gaps, the oversimplifications, the places where nuance got flattened. But for a novice or someone working outside their expertise, AI feels like magic.

At a Canva event, the marketing professionals around me commented that AI-generated content tends to have bland phrasing and misaligned strategy. The classroom teachers I support in using AI are thrilled with the results. They finally have a way to create professional-looking newsletters without hiring someone or spending hours struggling with it themselves. Neither group is wrong. The marketing pros know enough to see the limitations. The teachers know enough to see the possibilities.

So what does this mean for our students?

Here’s where it gets interesting for education, because we’re preparing students for a world where both perspectives matter. A student using AI to help write an essay about a book they haven’t read is fundamentally different from a student using AI to brainstorm ideas after deeply engaging with that book. The first is substitution. The second is augmentation. But from the outside, they might look identical.

The game designer dismisses AI-generated code because his expertise lets him see its flaws. The business owner celebrates it because it helps her do something she couldn’t do before. Our students will need to be both people at different points in their lives—expert enough to recognize limitations, but open enough to leverage new tools for growth.

The real question isn’t “Should students use AI for this task?” It’s “What level of expertise do we want students to develop in this area, and how can AI support—not replace—that development?”

I think this comes down to three principles:

Depth before delegation.

Students need foundational knowledge before outsourcing tasks to AI. You can’t evaluate an AI-generated Spanish translation without learning Spanish. You can’t assess an AI’s math solution without understanding the concepts. This doesn’t mean expertise in everything—that’s impossible. But we need to be thoughtful about when AI helps build understanding versus when it short-circuits learning.

Metacognition matters more than ever.

Students need to ask: “Do I know enough about this to evaluate whether the AI output is good?” In a science class I visited recently, after students used AI to plan an experiment, the teacher asked them to rate their confidence in the AI’s suggestions and explain why. The students who said “I’m not sure if this is right” and could articulate why were demonstrating exactly the thinking we need.

Embrace the beginner’s use case.

There’s real value in using AI to explore fields outside our expertise. The business owner building her app doesn’t need to become a professional developer. She’s solving a specific problem. Our students will face similar moments throughout their lives. They’ll need to create a basic website, analyze data for a presentation, draft communication in a language they’re learning. Using AI as a scaffold in these moments isn’t cheating—it’s resourceful. The key is understanding the difference between AI as scaffold and AI as replacement for learning.

For core competencies in a student’s area of focus, we want deep expertise that lets them see AI’s limitations. For supporting skills or exploratory interests, we want them to leverage AI to expand what’s possible. A graphic design student needs deep expertise in design principles, even as they use AI to accelerate production. That same student might use AI more liberally for their portfolio website copy. That’s not their expertise, and that’s okay.

The dichotomy between expert skepticism and novice enthusiasm isn’t a problem to solve. It’s a reality to embrace. What matters is helping students develop the wisdom to know the difference. When should you rely on earned expertise and push past AI’s easy answers? When should you embrace AI as a tool that lets you do something you couldn’t do otherwise?

Our job isn’t to protect students from AI or push them toward it. Our job is to help them build the expertise and judgment to make those decisions wisely.

And maybe that’s the most important expertise of all.
