The 3 Checks Rule: Mastering AI Literacy
(Or: An Elder Learns Things at the Lunch Table)
I learned more about artificial intelligence at a recent lunch with my grandkids than I have from the news, from social media, or from any breathless headline warning me that the robots are coming.
One of them—my grandson—uses AI regularly. He wants to be a firefighter. He lets AI do his writing and reading. He focuses only on the physical work. Strength. Endurance. Skills that save lives.
Honestly? That made perfect sense to me. Humans have always used tools to offload the parts of work they don’t need to do by hand.
His sister, on the other hand, refuses to use AI at all.
She criticizes him for relying on it. She’s concerned about the electricity it uses. The data storage. The environmental impact. She doesn’t want to contribute to something she sees as wasteful and unnecessary.

So there we were. Two teenagers. Same household. Same world. Completely different approaches to the same tool.
And neither of them is wrong.
But neither of them has been taught the most important part.
This Is Not an “AI Problem”
What struck me wasn’t that one uses AI and the other doesn’t. It was that both are operating without a shared framework for judgment.
My grandson trusts AI to do work for him—but hasn’t been taught how to evaluate what it produces.
My granddaughter rejects AI on ethical grounds—but hasn’t been taught that AI isn’t some separate moral universe. It runs on the same infrastructure as Google. The same data centers. The same energy tradeoffs we’ve already accepted for years.
When I gently told her, “You know this is how Google works too, right?” she paused.
Not because she was wrong—but because no one had ever connected those dots for her.
Refusing to learn how a tool works doesn’t stop its impact. It just removes your ability to judge it.

Elder Me Has Seen This Before
Here’s where I admit something.
I am an elder.
I have lived through:
- encyclopedias
- card catalogs
- early internet
- Google searches
- Wikipedia panic
- and now… AI
In the Society for Creative Anachronism, I’ve watched this same cycle repeat every time a new tool appears.
People argue about authenticity, ethics, shortcuts, and “the right way” to do things. Eventually, we realize the tool isn’t the problem.
The judgment is.
In the SCA, we don’t hand a beginner a pigment recipe and say, “Good luck.” We teach sourcing. Comparison. Experience. We teach how to tell when something looks right—or doesn’t—even if you can’t yet explain why.
That same wisdom applies here.
The 3 Checks Rule
So I gave my grandkids what I now call The 3 Checks Rule. It’s simple. It’s portable. And it doesn’t require anyone to love AI—or fear it.
Before trusting anything AI gives you, ask:
1. Source Check – Who would normally know this?
If this were correct, who would usually be responsible for this knowledge? A historian? A doctor? A craftsperson? A firefighter? AI doesn’t know things. It imitates patterns of knowledge. That matters.

2. Cross-Check – Can I find this elsewhere?
One answer is never enough. Confidence is not accuracy. If it’s real, it will show up in more than one reliable place. This is not new. This is how research has always worked.
3. Sense Check – Does this align with reality and experience?
This is the elder part. Does it fit what you already know? Does it contradict lived experience? Does it pass the “this smells funny” test?
If something fails even one of these checks, pause.
Young people are already comfortable using AI. What they aren't being taught is how to evaluate it.
That’s not a technical skill.
That’s a human one.
Why This Is Elder Knowledge
Elders have always translated new tools into usable judgment. We give language to things people are already sensing. We offer clarity without panic. We say, “Here’s how to think about this,” instead of “Here’s why you should be afraid.”
This isn’t about banning AI.
It isn’t about outsourcing thinking either.
It’s about discernment.
Why This Matters More Than People Think

Right now, young people are being handed two default paths. One leads to uncritical dependence.
The other leads to ethical refusal without understanding.
Neither builds judgment.
Judgment has to be taught: in schools, in families, in guilds, in communities, and, yes, even on Facebook, where people are clearly struggling to articulate what feels off but don't quite have the words.
A Calm Ending from an Elder Who Has Seen a Few Things
The future doesn’t need everyone to love AI.
It needs people who know how to question it.
And if that makes me sound like an old woman dispensing wisdom at the lunch table—well, I’m fine with that. We’ve been doing this work for a very long time.