Magic Thinking and the Divine Right of LLMs
"Jesus is just AI right with me. Jesus is just AI right, oh yeah!"
We Interrupt This Regularly Scheduled Programming…
I promise to get back to some of the ongoing threads that I've started. Really. But I cannot shake the feeling that we're on the cusp of some...extreme times. And I want to do my part to try and help folks to get their heads straight in the here and now.
Plus, it is “AI Literacy Day”, I'm told. So, there you go...
Today’s theme is simple enough: Maybe let’s all stop with the magic thinking about technology.
Or at least the bad stuff.
See, I'm not here to tell you that all magic thinking is bad. No, just as in fantasy stories, there is (Good) Magic Thinking and (Bad) Magic Thinking. First, a definition: Magic Thinking (in psychology) is the belief that our thoughts control the external world.
It can be profoundly superstitious, like “When I wear my lucky sweater, the home team never loses.” Or it can be a more subtle and pervasive attitude, like the pseudo-scientific belief in manifestation: “When I think, and truly believe, that good things will happen to me...they will!”
The latter has been around in popular culture for a very long time, but until recently it was sort of relegated to the frou-frou, pseudo-scientific corners of the self-help world—right alongside horoscopes, healing crystals, and the amazing health benefits of turmeric.1
But as with many, many fringe ideas, there are two key things about it. First, it's had a resurgence on algorithmic social media. TikTok's “Lucky Girl Syndrome” meme/movement is a good example of how it can go wrong. The TL;DC (too long, don't care about specifics) is that girls are being told that positivity is the key to all things in life.
And the second thing about it is: there is a core truth underneath it. Studies do show that a positive attitude can be helpful toward achieving goals (mostly because it assists with confident decision-making). But this goes bad pretty quickly because, as a meme, it focuses only on one’s state of mind and misses other factors that are equally important, like hard work, prioritizing personal education, and building positive relationships with others.
So, the uber-focus on state-of-mind is yet another example of how virtual world navel-gazing is replacing actual, real world action. Staying in your bedroom and watching vapid videos like the one below (for goodness sake, please don’t feel obliged to view it until the end) is no substitute for getting out there and making it happen:
Plus, the backlash from failure becomes the flip side. When bad things happen anyway, does that mean you manifested those, too?
Know All Men By These Presents

But the positive side of Magic Thinking is when those thoughts really do manifest as reality. Take, for example: Judicial Facts.
Non-attorneys won't know what this means and, when I explain it, it's not going to sound good. But hear me out.
It goes like this—if two sides are arguing in court about whether or not it's raining outside, and they each call up their witnesses to testify one way or the other, at the end of the trial the Judge will make a finding. And that Judge will decide. And whatever the Judge decides, regardless of what anyone could and would see if they just looked out the window—for the purposes of that case?—THAT IS THE FACT.
Sounds crazy, right?
Take divorce cases. In hotly contested family law proceedings, you'd better believe plenty of facts are in dispute. People's memories can differ. People can interpret events differently. And facts can be very nuanced...both sides might have a point about what is or is not, technically, “true”, regarding complex issues. So, on the “close calls”, unless someone is authorized to step in and make that decision, about what is quote-unquote fact and what isn't, you simply cannot move forward.
But people misinterpret this: The point isn’t to turn the truth into an award that is bequeathed to one side or the other based on anything other than evidence—no, it's to interpret the best evidence we have to forge a truth that both sides can use to move forward. The point of truth isn’t to avoid accountability, it is to create it.
You see, so much about society is human judgment, human connection. It's art, not science. The truth = the evidence + human connections + a narrative (what we would call our “Theory of the Case”). Typically the Court decides it likes one side’s “theory of the case” better and, before you know it…
“The Court finds it's sunny.” <gavel bangs> BLAM!…
…as everyone looks outside to see the rain.2
For me, it was always remarkable how often it worked. Years later, the parties’ remembrances of what the facts were tended to coalesce into something fairly close to what the Judge had ruled had happened—a far cry from the near total disagreement they’d brought into the courtroom.
Hey Ref, Let Me Ask You—Are You Pregnant?3
Another example of positive manifesting might be Referees’ and Umpires’ calls during a sporting event.
You might not always agree with all of their calls; but, truly, you couldn't have a functioning league without them for long. Not many people reject the sport altogether just because they don't agree with the occasional call.
Governmental Agency Rulings also perform this role. And just like calls by the ump, they can be either right or wrong. For example, as Cory Doctorow points out here, some medical advisories become captured by industry and are later found to be compromised and harmful to the public—like the ubiquitous prescribing of Opioids. But, OTOH, the mRNA Covid vaccines (and all the traditional vaccines before them) have saved countless millions of lives over the past many decades.
THE BOTTOM LINE: Even when they are wrong, manifestations like judicial opinions, expert opinions, and referee calls are needed—they provide us with a shared reality that helps keep society glued together.4
The quest for the Grail is not archaeology; it's a race against evil!
—Prof. Henry Jones, “Indiana Jones and the Last Crusade” (1989)
The Holy Grail: Manifest Fairness In The Classroom
What does this have to do with AI Literacy and Education? Well, educators (especially classroom teachers) are “Refs”, too.
Ideally, students come to the teacher not just for factual knowledge, but for their judgments. For answers. For their example. For pathways to finding...The Truth. Teachers are supposed to make the call.
The point of truth isn’t to avoid accountability, it is to create it.
But, remember, we're not talking about the truth being like a gift—bequeathed like a pitcher of water, poured from the teacher’s head into the students’ heads. We're talking about The Truth being something that will largely work for everyone moving forward. It will never be perfect, certainly. But better than the chaos of letting each player decide.5
At any rate, to the extent that AI tutors are supposedly the next step in our educational system, this is going to be truly problematic.
As is becoming general knowledge, there’s no evidence that all these “AI” chatbots and image-generators (LLM generative AI models) really think, reason, or understand. Full stop. Yes, they do an impressive job of mimicking thought and reasoning. But that's all they are doing. And recall the so-called hallucination problem. While some say that hallucinating 20% of the time is a problem, I would add that, because the model has only one use-state, all of its outputs are hallucinations, which means that even the 80% of its guesses that happen to be correct are a bit of a problem, too. See below for an incomplete list of AI’s current ledger of harms.6
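If it helps to see the mechanics of that “one use-state” claim, here’s a minimal sketch of next-token sampling in Python. The prompt and the probabilities are made up purely for illustration (a real LLM learns its distributions, billions of them, from training data), but it shows the point: the procedure is identical whether the guess lands right or wrong.

```python
import random

# Toy next-token "model": these probabilities are invented for illustration;
# a real LLM learns its distributions from training data.
NEXT_TOKEN_PROBS = {
    "The capital of France is": {"Paris": 0.80, "Lyon": 0.15, "Mars": 0.05},
}

def guess_next_token(prompt: str) -> str:
    """Sample one continuation. Note: there is no separate 'truth check' step."""
    dist = NEXT_TOKEN_PROBS[prompt]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

prompt = "The capital of France is"
for _ in range(5):
    # About 80% of the time the guess happens to be right, about 20% wrong,
    # but the mechanism is the same statistical guess in both cases.
    print(prompt, guess_next_token(prompt))
```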
I’d Buy That For A Dollar!
(But AIs are so fun!)
Today I watched a debate between an OpenAI engineer and Professor Emily Bender, who (just by being honest about how computational linguistics works) has emerged as what's being called an “AI Skeptic”. The debate is here.
The key moment in that debate for me was when Prof. Bender did what no AI expert ever does—she used an actual definition of understanding! She defined it as “mapping from the form of language to something outside of language.”
In that context, just about any algorithmic program can 'understand'—in the sense that, with the proper technology in place, you can use language commands to make it perform an act in the real world.
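To make that concrete, here’s a trivial, hypothetical sketch (the “lights on” command and the lamp function are inventions of mine, not anyone’s actual system) of a program that ‘understands’ in only that thin sense:

```python
# A trivial command-comply program: it maps the *form* of language to an
# action outside of language. The "lights on" command and turn_on_lamp()
# are hypothetical stand-ins, purely for illustration.
def turn_on_lamp() -> None:
    print("(lamp clicks on)")

COMMANDS = {"lights on": turn_on_lamp}

def obey(utterance: str) -> None:
    action = COMMANDS.get(utterance.strip().lower())
    if action:
        action()  # a bare if-then reaction: no context, no lived experience
    else:
        print("command not recognized")

obey("Lights on")
```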
But, she quickly points out that when we talk about “understanding”, we really mean a much deeper level than a simple if-then, command-comply reaction. She talks about how humans have contextual understanding that we employ when talking with other humans. So, besides the words we hear, we also use body language and our own personal experiences in the real world to achieve human communication.
She points out that really it is us, projecting our own human understanding onto these amazingly realistic outputs of human mimicry (the way we would if they came from a person), that makes it seem as though they are alive and reasoning.
She added that, as long as the companies do not allow us to see the training data these LLM AIs are trained upon, we can't tell how much those outputs are genuinely responding to our prompts rather than just echoing similar patterns in that data.
But there's another realm in which humans throughout history have imbued a sort of cosmic understanding to opaque (and often-failing) decision-makers. Care to guess?
It’s Good To Be The King
Up until the Enlightenment, people used to simplify things by just assuming that the King was an instrument of Holy Power. The Sovereign had authority from on high. It was called authority Dei Gratia (“by the Grace of God”) or what we now refer to as The Divine Right of Kings (DRK).
I suppose one way to think of the DRK was that, while a King might make a mistake, they could never be wrong. The problem, of course, is that it’s hard to admit one’s mistake and correct course when you’re never wrong. Hence the high turnover rate for Kings.
In case you’re wondering what came along to replace DRK, it was a little chestnut called HUMAN RIGHTS. And, over the centuries, as (for example) our legal system has evolved, it’s developed a mechanism by which: if things go really south, it can be corrected by a legal concept called The Absurdity Doctrine.
The absurdity doctrine is rooted in ancient common law and, although case law applies it slightly differently in each state, the basic idea is this: even when the facts and the law “require” the Court to find a certain way, it is possible for the Court to reject that holding if it would result in an absurd outcome.
The SCOTUS first recognized it in US v. Kirby, 74 U.S. 482 (1868), when it said: “[general terms of a statute] should be so limited in their application as not to lead to injustice, oppression, or an absurd consequence, and it will always be presumed that the legislature intended exceptions to its language which would avoid results of this character.”
Or to put it in terms even Grok might grok: The real world cannot be boiled down to a database. The proper actions in life cannot always be solved by an algorithm (or a prompt—which is sort of what a statute is to a Court, now that I think about it…hmmm).
Humans need to make the call.
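If you’ll indulge the statute-as-prompt analogy for a moment, here’s a hypothetical sketch of the difference (the rule, the facts, and the absurdity test are all invented for illustration; the fact pattern loosely echoes Kirby, where a sheriff “obstructed the mail” by lawfully arresting a mail carrier indicted for murder):

```python
# Statute-as-algorithm: the literal rule convicts; a human judge can
# override it. Everything here is a hypothetical illustration.
def literal_rule(facts: dict) -> str:
    # "Anyone who knowingly obstructs the mail is guilty," applied literally.
    return "guilty" if facts.get("obstructed_mail") else "not guilty"

def decide(facts: dict, is_absurd) -> str:
    holding = literal_rule(facts)
    # The absurdity doctrine: reject the literal result when following it
    # would produce an unjust, oppressive, or absurd outcome.
    if is_absurd(facts, holding):
        return "literal holding rejected as absurd; human judgment controls"
    return holding

# Kirby-like facts: a sheriff "obstructed the mail" by lawfully arresting a
# mail carrier indicted for murder. The algorithm says guilty; a human says no.
facts = {"obstructed_mail": True, "lawful_arrest_of_carrier": True}
print(decide(facts, lambda f, h: h == "guilty" and f.get("lawful_arrest_of_carrier")))
```

The point being: the override lives outside the algorithm, in human hands.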
For Life Is Quite Absurd, And Death’s The Final Word
But everyone is in a hurry to rollout AI, and the dehumanizing aspects of it aren’t equally troubling to everyone.
Here's a church in Switzerland that used an AI Jesus avatar to hear confessions. According to the report, some found it blasphemous, some found it thought-provoking. No word yet on how many feel it is goddam silly.
In Finland, another church just let the AI conduct the services. “Entertaining and fun...but it felt distant. I didn't feel that they were talking to me,” one congregant said about it.
Mm-hmm. But, of course, if you want to really connect with AI, perhaps you need to just admit your powerlessness over the technology and find religion. Or found a religion. Consider Anthony Levandowski, who has started the world's first Church of AI, The Way of The Future. He might be following an old playbook—one that, if it isn't ancient and holy, is at least old.
Please, mAIke It Stop
If someone wants to give your child their own AI companion/tutor—expect your child to be fed statistical guesses 100% of the time, with a certain percentage of them being wrong. It’s like taking the utter failure of Ed Tech, but delivering it with a much slicker marketing salesbot.
Remember now, the thing about “individualized education”: each student might get a different answer, customized to what they want to hear (not what they need to hear). Everything likely will be made much easier (and therefore less educationally useful). And every movement, each moment, each lingering thought, all searches, questions, comments, conversations, and queries will be captured and passed along by the AI as a piece of surveillance technology. Your child will have zero privacy.
But even if none of that were true—it would still fail as a tutor, because under no circumstances will any of these tutors have the ability to invoke something like an 'absurdity doctrine' in order to check themselves.
As Gary Marcus says, these technologies are “often wrong, never in doubt”.
Ah, but the rollout. As I've mentioned before, words like 'inevitable' and 'unavoidable' are essential parts of the arsenal of marketing rhetoric surrounding AI (and its evangelicals)...implying that the integration of (Gen)AI in health, education, etc. is not a matter of choice, but rather an inevitability that must be accepted and prepared for, as if it were a natural event. And the best we can do is adapt or run for cover.
In a recent AI Literacy talk, one educator said (of students): “They will spend the rest of their lives in generative AI environments. So we need to understand AI. We need to be in the story.”
I think that kind of framing is dangerous. And it’s so misplaced, it’s not even wrong.
This isn’t the AI’s story. Education is part of the students’ story.
The students should be empowered to choose whether or not to include AI in their “Theory of the Case”.
Professor Bender ended her part of the debate with this:
I would like to leave you with the idea that nothing is inevitable. If people say 'this is here to stay, we have to learn to live with it', you can say 'no'.
Refusal is really important...our systems are already creaking and about to get much, much creakier—I'm talking about education, I'm talking about healthcare, I'm talking about our legal system...all of these places where synthetic text [Note: Prof. Bender refuses to use “Artificial Intelligence” as a term because she says it's misleading] looks like a nice, handy band-aid, quick solution because there's not enough teachers, there's not enough therapists or whatever...we need to say 'no' to that, 'cause it's actually worse than nothing...
...It's not a good tool for search. Anytime somebody is saying: Oh! Here! Use this, it's 'Artificial Intelligence'…Remember, in your back pocket, you've got: “No.”
Stay safe, everyone!
Sorry, I couldn’t resist—because the bit at www.youtube.com/watch?v=0FP4m_S5bpQ is so funny to me.
Probably that one was in Philadelphia, am I right? hanh? haaaaanh? Get it? Get it? Hello, is this thing on…?!
Funniest line I ever heard from a basketball coach during a game. After the ref answered, “No. Why?” the Coach explained:
“Because you just missed two periods!”
A personal example of how you always have to check expert opinion: a large swath of the country just had some tornadoes. I grew up in that area. “Tornado Alley,” it is called. And I'm old enough to remember when they told all of us kids to gather in the Southeast corner of the school building during tornado alerts.
That is, until about twenty years later, when I was a teacher. By then, they had reversed it and we were all told to gather in the Northwest corner. Along the walls. Always along the walls. But if either of those protocols was correct, that means that, for at least some of that time, we were being told to go to the most dangerous part of the building!
For the record, I just checked and it seems everyone is supposed to be in the center, now. What can you say? Science self-corrects, but it also is always a work in progress.
That's a question I think we're all going to be revisiting over the next 4-14 years. What is better than chaos.
There’s the incompatibility between sustainable environmentalism and data servers that make no money but use the power of a small city. There was the outright thievery of the data on the front end to train the AIs. There is the lack of transparency, so that no one can hold either the AI or their corporate masters accountable for what these AI chatbots are doing to us and our kids. There’s all the data they collect from users and the lack of privacy. There’s the way they dehumanize both art and learning. There is the potential for massive job loss as these chatbots are rolled out in an effort to pretend they do as well as humans, to save corporations money. And in education, there’s the way it pre-empts learning, as kids are allowed to use these “plagiarism machines” (Chomsky’s wonderful term) to do their thinking for them and therefore relieve them of the need to learn it themselves over the long term…
Contact me if you want citations on any of these, of course.