What Generative AI is Not


When it’s the wrong tool for the job.

Here’s a rule worth remembering: generative AI is not a search engine. It’s tricky because the two feel similar, and the tools seduce us into trust with source links and confident responses. But they’re not built to be search tools.

Recently, I needed to confirm the building and location in an image I was using in a presentation. I asked several generative AI tools to simply identify the building in the photo, a straightforward factual question with a clear right or wrong answer. Two models gave me confident, well-articulated answers that were completely wrong. One couldn’t complete the task at all. Then I tried a reverse image search. Within seconds, I had not only the building’s name and location, but also a link to the actual stock photo.

Same question. Completely different results. And the reason comes down to how these tools actually work.

Retrieval vs. Generation

When you use a search engine, it’s doing exactly what the name implies: searching. It crawls billions of web pages, builds an index, and when you ask a question, it finds the most relevant matches and returns them to you. It’s pointing you toward something that already exists.

Generative AI does something fundamentally different. It doesn’t look anything up. It generates a response, word by word, based on patterns learned during training. It’s not retrieving a fact. It’s producing text that is statistically likely to be a reasonable answer. Those are very different things.
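The difference can be made concrete with a deliberately tiny sketch. This is not how real search engines or language models are built — it's a toy, with made-up example documents — but it shows the two mechanisms side by side: retrieval looks a word up in an index of things that already exist, while generation produces the next word based on statistical patterns, with no lookup and no guarantee of truth.

```python
from collections import defaultdict, Counter
import random

# --- Retrieval: a toy inverted index, a search engine in miniature ---
documents = {
    "doc1": "the space needle is a tower in seattle",
    "doc2": "the eiffel tower is a landmark in paris",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)  # map each word to the docs that contain it

def search(query):
    """Return documents containing every query word: a lookup, not a guess."""
    results = None
    for word in query.split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return sorted(results or [])

# --- Generation: a toy next-word model trained on the same text ---
corpus = " ".join(documents.values()).split()
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1  # count which word tends to follow which

def generate(start, length=5, seed=0):
    """Produce statistically likely text; nothing is looked up or verified."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(search("tower seattle"))  # points to a document that actually exists
print(generate("the"))          # fluent-looking text with no source behind it
```

The retrieval function can only return documents that exist, so its answers are checkable. The generation function will happily splice together "the eiffel tower in seattle" if the word statistics line up — fluent, confident, and wrong, which is exactly the failure mode described above.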

I love this analogy: a search engine is a librarian who knows where every book is, and can help you find the one you need. Generative AI is a very well-read colleague who will answer your question from memory, sometimes brilliantly, sometimes confidently wrong, and you can’t always tell which is which.

Confident Does Not Mean Correct

This is the part that trips up educators and students alike.

Generative AI doesn’t hedge the way a careful thinker does. It doesn’t say “I think” or “you might want to verify this.” It just answers. In full sentences. With a tone that reads as authoritative! The fluency of the response can feel like accuracy, but the two are unrelated.

Here’s where human expertise also matters. One of the AI tools placed the building in a city near me, and I didn’t need to fact-check the results to know that the response was wrong. We talk a lot about lateral reading and following source links, but sometimes all the verification you need is a simple gut check — and trusting what you already know.

The risk isn’t that AI is always wrong. It’s that it can’t always tell when it is, and neither can we, unless we’re paying attention.

A Framework for the Classroom

This is where AI literacy becomes incredibly important. Teaching students to use AI well isn’t just about knowing how to prompt (in fact, it’s probably less about that than we think). It’s about understanding what these tools are actually designed to do.

A useful question for planning which tool to use: are you trying to find something, or think through something?

If the task requires something verifiable (a fact, a date, a source, an image, or current information), use a search engine. These are retrieval tasks. The answer exists somewhere, and search is built to find it.

If the task requires synthesis, explanation, analysis, or brainstorming, generative AI can be a genuinely powerful partner. These are generation tasks, and that’s exactly what the tool is designed for.

Teaching students to ask that question before they open a tool is a small practice with a big payoff. It shifts AI use from reflexive to intentional, which is really what AI literacy has always been about.
