Sunday Breakfast with the Thinking Project: AI vs. NI
What happens to Natural Intelligence when the machine hands us all the answers?
Years ago, when my son was a little guy, maybe 6 years old, we were walking on a New York City street, and he trailed behind me. Wherever we were going, he didn’t want to go. He was in a pissy mood, and he was going to test the limits of his agency. I slowed a bit, not wanting him to get too far behind but also not wanting him to believe that he could stop us entirely. Still, the distance between us was growing with each step, and so, annoyed, I turned.
“Hey, you wanna catch up?” I said.
And then, out of nowhere, my brain told me to say this: “Or… do you prefer mustard?”
For a full beat, old sour-puss didn’t know what to make of this bizarre question, but then it hit him. He couldn’t contain a grudging smile.
“You got that, huh?” I said, my mood genuinely brightening.
He strode right up to me and started explaining the punny thing I’d done with words. We got where we were going.
So, what happened there? In brain-science-speak (brought to you courtesy of a humanities teacher!), we had both just experienced a gamma burst over our right anterior superior temporal gyri and prefrontal cortices. These bursts, sometimes called eureka moments, are associated with integrating information across brain regions. That activity was quickly followed by an activation of the orbitofrontal cortex, which is strongly associated with the dopamine hits that provide feelings of pleasure. At least I think that’s what happened.
My question on this Sunday morning is this:
If AI will be as ubiquitous as everyone says it will be, how will it impact the frequency of human insights and the pleasure that comes with them?
I’ll get to my hypothesis shortly.
In the meantime, let’s look at how the battle lines have been drawn on the topic of AI:
Crew #1:
“Ban it.”
This crew loses conviction before the two words are even out of their mouths. Yeah, there’s something wise in the ban-it principle, but these people still know an unkillable genie when they see one. Note: these people often wind up joining the duck-and-hide crew. I was once a proud member.
Crew # 2:
“Oh hey. Take the garbage out? Okay, but it’ll have to wait because right now I’m planning my trip to Mars with my personal AI. Wait, have we cured cancer yet?”
’Nuff said.
Crew #3:
“Love you just the way you are, AI. It’s education that has to change.”
Stockholm Syndrome. Possibly a Billy Joel overdose.
It’s this third crew—at least their claim about education—that I want to address here. (Loving AI—and I do mean literally—is a separate, and believe it or not, real, problem.)
In any case, the more fleshed-out version of Crew 3’s argument seems to go like this:
AI is amazing and here to stay, so instead of trying to stop kids from cheating on your outdated assignments, you teachers need to give students better assignments. AI can already write essays, summarize books, create presentations and make calculations. Heck, the calculator broke the computational barrier! Focus education on higher-level thinking and on learning by doing. Get out into nature. Try to solve some real-world problems. No boring stuff.
If it’s possible to applaud and simultaneously shake one’s head in deep concern, that’s what I’d want to do.
First, progressive schools have been operating on this learning-by-doing principle since at least 1896. I teach at one. But I would argue that the success or failure of such schools has mostly depended on educators’ ability to incorporate “the boring stuff” into the exciting stuff, or rather, to see them as mostly inseparable. The great teachers know that there is much to be gained in doing things that AI spits out with one hand tied behind its back.
Put another way, it is a mistake to believe that because a machine can get us to various products, almost instantly, there is nothing lost in eliminating the process of doing them ourselves.
It’s not that a trained adult mind always needs to do the work that the machine can do—although occasional tune-ups might be a good idea. If you can have the computer write those X-73 reports for you, the ones that you understand backward and forward because you wrote them for years, and now you have more time to ponder the beauty of birds in flight, I say go for it. You’ve read some books about the Cuban Missile Crisis and now want to ask AI a question that none of the books seems to answer? Do it.
It’s more that the untrained child’s mind can’t become trained until it has done the foundational work that AI threatens to make extinct. I’m pretty certain that we need to develop Natural Intelligence for an extended period of time—into a person’s early 20s, I’d bet—before the current version of the artificial kind can enhance it.
Imagine a college student with a neglected natural intelligence—one that never had to struggle to answer a question that AI could answer—trying to use ChatGPT to enhance his thinking about Anna Karenina.
For a few moments yesterday, I pretended to be that guy and typed this into the chat bar:
“What should I think of Tolstoy’s Anna Karenina?”
This was the first sentence of the AI’s response: “That’s a wonderful question.”
Are we scared yet?
I know. You’re thinking that vapid-guy already exists. No AI hypothetical needed. But if we assume that young minds shouldn’t be asked to do the mundane things that the machine can do, we might just mass produce him. Why shouldn’t this guy ask that profoundly uninspired question of the Delphic Oracle? He’s never handwritten or typed out a passage of literature while writing an essay and in the process noticed a word choice he hadn’t thought about before, or realized the surprising switch of verb tense, or the strange way the paragraph gestures at, without quite naming, the important thing in the scene. Plus, even if he had done that mundane copying of someone else’s words and come to some realizations, the machine thinks better than he does, right? So why not let it tell him what to think?
Yes, our students are already circumventing our old-fashioned assignments. “Cheating” really is the proper word—on themselves. They’ve decided—many of them anyway—that the product, not the process, the answer not the thinking, is what matters. They get the credential either way. Not such a new idea. But the fact that students now have a tool that makes this effortless should not lead us to the conclusion that “the boring stuff” no longer matters.
What kind of life do we humans want to live? What if, instead of having the capacity for eureka, we just get used to having the answers? What if my son had needed to consult his portable AI to find out why I was suddenly talking about mustard? He would no doubt find the answer, but would he find any pleasure? Knowing that he wouldn’t, would I have even bothered with the pun in the first place?
I suspect that the small, and sometimes large, pleasure of figuring something out, or of having an idea, is foundational to our mental health, and we’re on the verge of giving that pleasure away. No. AI is not the calculator. AI invites a magnitude of intellectual passivity never before imagined.
My point is not that a world with AI leads inevitably to intellectual dystopia. My point is that undervaluing learning processes in favor of AI’s magical products might build exactly the road that we shouldn’t want to travel. The brain-numbing highway. Some research suggests that this is already happening.
So what can we do?
Niall Ferguson, a professor of history who is now at the University of Austin, has made a novel proposal for how to address the AI thinking-dilemma. He wants his university to embrace both the old-fashioned, labor-intensive intellectual work and the new, turbo-charged power of AI. Divide the day. Ban devices for a big chunk of it—enforce the ban strictly—then encourage their use in academic work for the rest of the day. He calls it the cloister and the starship.
I have an even bolder proposal. Create two AIs for the world—one that does all the things that current AI does and one that acts only like a Socratic tutor. Want to know something profound about a Tolstoy novel? Socratic AI makes you work for it by firing questions at you and requiring your responses. Want to fake an understanding of photosynthesis on an essay for school? Sorry, buddy. You’re getting a lesson that tests your understanding along the way. Kids can still have Google for encyclopedic purposes, and that will still be thorny, but make it so that connecting up with full-tilt-boogie AI requires proof of age, like driving a car. Until then, it’s digital Socrates for you.
“Let him that would move the world first move himself,” the flesh-and-blood Socrates apparently said long ago.
Still good advice.