Teaching DystopAI
AI Literacy I of II (Bad Practices): Educating While AI is Pushing the Kids’ Buttons
Four Year Olds in the White House—as well as a Child.
Oh boy.
Well, we'll return you to the continued collapse and fall of the American experiment after this public service announcement from the CM Files.1
During these interesting times, from a media literacy standpoint, two things have become crystal clear:
Observation #1: The mainstream media’s (MSM’s) main job these past few weeks (months? years?) has been to both normalize the blatantly illegal, and to sane-wash the blatantly absurd. It’s reached a fever pitch now, where honest-to-god media events look like some sort of AI-generated, botsh!t craziness:

Newsflash: Musk's little group of unvetted IT Hackers (the Muskenjugend, if you will) is now running our government's most vital servers, installing AI interfaces onto some, and co-opting all of our personal and banking data.
Yeah. Meanwhile, the MSM either ignores it or dutifully reports it as though it's the latest Kelce/Swift photo-op. Here's one of the latest stories from the NYT. The title is: "With Congress Pliant, an Emboldened Trump Pushes His Business Interests".
How's that for an ML lesson in framing? Let's see. I think there's a word for using your position in government to push your business interests…what is it, again…?
Oh, yes. It’s called corruption. Perhaps we should have insisted on seeing his tax returns back in 2016, after all? Cue the Noam Quote.
The uniformity and obedience of the media, which any dictator would admire, thus succeeds in concealing what is plainly [the truth about this particular story]…
—Noam Chomsky, from Turning the Tide (1985)
(emphasis added)
So, for those of you scoring at home: it's Chomsky and Herman's Propaganda Model for the media-literacy win. I think we can safely stop debating that now.
Observation #2: There's no reason to expect that to change anytime soon.
There are still a few corners of the internet (Fediverse social media, independent legacy media like the AP or ProPublica, and some publicly or independently funded foreign press) where you can still find out what's going on, as opposed to only what the Broligarch-owned tech platforms want you to know. So we should all continue to educate ourselves and do our part. After all, it's only going to get harder.
(AI-) Slop-aganda
Because of GenAI's pesky tendency to (straight-up) invent, by some measures, as much as 27% of its answers, and its inability to check its own work, the infusion of incorrect information (known as "AI slop") has degraded the once-proud giant of search, Google, to the point where it's really no longer synonymous with authority. You used to say, "I'll google it," and people would sagely nod and wait for you to give them the right answer.
As anyone who’s used Google the past 18 months can tell you: Not so anymore. Last week I googled “magic johnson first nba all-star game” and Google’s AI confidently reported that it was his last one (1992).
PRO-TIP: Here are six alternative search engines, especially for scholars and scientists, that Google actively buries—and that are quite useful.
√ Refseek ...a pretty good all-around alternative. Interestingly, this English-language, NY-based site actually gets most of its traffic from India. I picture the population of India as being more hip to this than the general population of the US, so that makes sense.
√ Worldcat ...good for anything written, even a spec fic magazine showed up
√ Springer Nature Link ...esp. academic articles
√ Bioscience ...just what it says
√ Base ...more science-y stuff
√ and Science.gov ...and even more. At least until the nutjobs take it down
Over a trillion (which starts with "T" and that rhymes with "P" and is also a sh!t-ton of money) dollars has been invested so far, and the AI companies aren't making any money, with no plan yet for how they will. According to the marketing (and hype from interested third parties), that's because eventually there will be a convergence and, if we just keep adding data, it will scale itself into a…y'know, convergence.
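And, for the record, the curve behind the "just keep adding data" pitch is real but underwhelming. The scaling-law literature describes it with something like the following (this is the Chinchilla-style form; the notation here is mine, and the constants are fit per model family):

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

Here L is the model's prediction loss, N is the parameter count, D is the number of training tokens, and E is an irreducible error floor. Read it closely: more N and more D buy diminishing returns on next-word prediction, flattening toward E > 0. There is no term in there for "becomes a mind."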
After following it for a few years, I am seeing too many similarities to, say, Crypto (which, it turns out, is really just good for laundering money/criming, and running Ponzi schemes). I've come to believe that, like Uber and so many other 21st-century disruptor business models, GenAI is just about control. Control of our minds. Profit is something they can figure out later, once they've re-written all the basic rules of society.2
Sometime this year or next, there will be so many jobs replaced that the AI industry will use its own colossal destruction of our economy as evidence that they have, in fact, invented AGI (Artificial General Intelligence). "It must be as smart as humans if so many of their jobs are being done by it," they will say.
But it will still be nonsense…jobs will be lost and everyone's services will be degraded by bots that can't really do the job. Just as when New York tried turning over its online consumer-business startup tips to the AIs (the headline says it all).
This will be a far cry from the AGI that we all were warned about/promised (depending). The Terminator or what have you. But, that’s what we will get.
The punch-line? It's all pretty simple at this point. Any authoritarian regime will try to confuse and control the media. And one way to do that is to run all the media directly…of course, that was always going to be really difficult with our vibrant and diverse media landsca—OH, WAIT! Oh geez…!

People are getting most of their "news" from the social media algorithms. The social media algorithms—and now, AI bots—are, more and more, (at best) pushing the same propaganda as the traditional MSM. And, at worst, they are being used to promote some of the truly destructive ideologies held by their TechBro Overlords. This is the latest battlefront in the war on the truth.
So, what is the ML Ed response?
ML Educators: “AIn’t Nothin’ But a ThAIng.”
For the past year, I’ve been monitoring “AI Literacy” seminars and papers from the ML Ed “space”. Below are some examples of framing from those talks/papers that I am finding problematic.
Problematic Framing I. “(Moral Panic Alert!) Don’t Be So Negative”
This is a rehashing of an old favorite.
This is where presenters point to concerns about the past rollout of other media (radio, television, etc.) and compare all of this negative press about social media and AI to that. One presenter even used this suspect example: "Do you remember YouTube? Do you know that when it began it was banned in schools? Teachers were left with very few practical and diverse video resources. But now it's our go-to resource. There's no more fear. It's normalized. Everybody's forgotten it used to be banned…"
This has several problems. First, there is a difference (not of degree, but of kind) between modern algorithmic media—which has addictive properties roughly on par with marijuana or binge-drinking—and previous media. Modern media are more properly understood as drugs dressed up to look like a medium. Sure, it's being normalized, but think: cigarettes, not the telegraph.
Second, this positive use-case scenario (YouTube as trusty resource) is an incomplete re-telling of that platform's "algorithm story".
The presenter failed to mention that Google (YouTube) had to be pursued by Fairplay, through multiple FTC complaints in the 2010s, in order to get them to stop the influencer-marketed junk-food advertisements aimed specifically at kids on their YouTube Kids app, and to stop their own violations of the Children's Online Privacy Protection Act.3
And, more recently, the Center for Countering Digital Hate released a report highlighting how, just last year, when their test searches (using an account posing as a 13-year-old) asked for eating-disorder content: 1 in 3 of the resulting YouTube recommendations pushed a rabbit-hole of further eating-disorder content, and 1 in 20 promoted explicit self-harm or suicide (!!) to those children.
But, besides specific flawed counter-examples, the ‘don’t be so negative’ thang is also a more general attitude. The official party line goes like this:
Teaching students to be discerning consumers of the news media involves conveying a mindset: not one of defense, but one of understanding.
I don’t know how else to reply, but:
No.
That framing? I am sorry, but this is just pernicious nonsense. Or, to be more scientific about it: it's a false dichotomy. There is no either-or choice that must be made there. The fact is, a "mindset of defense" is actually a key way to be a critical media consumer—esp. when the media in question is propagandistic/toxic.
This is so self-evident that, in a 2021 landmark paper that spelled out one of the first quantitative scales (as in, useful for creating more valid evaluations) for “algorithmic literacy”, the authors posited that as a truism:4
Furthermore, we expect that users with a high level of algorithm literacy also think more about the use and application of algorithms. Specifically, a higher awareness of the dangers and risks of algorithmic curation should lead to reflections about the benefits and drawbacks of algorithms in general, as well as to reflections about one’s own behavior and how one might protect oneself against such risks.
Sorry, but "Don't adopt a mindset of defense" is world-class gaslighting. It suggests that, if the students are having negative feelings about the media, then that somehow reveals that the students are "doing it wrong", which is bullshit.
If anything, it probably means they’re perceptive. After all, these platforms are specifically designed to be addictive and to promote brain rot.
Problematic Framing II. “So, I asked the AI about itself and…”
Another red-flag is when ML Ed presenters do their critical inquiry of AI by—wait for it—*using an AI bot* to do it for them!
I mean, I hate to inject logic into this; but, is that not the actual, 100% opposite of critical inquiry? Like accountants auditing a firm by asking said firm, "So tell me: how do you feel about your books?"
Seriously, at that point, how could it really matter what answer you get?
Problematic Framing III. “It Really Looks Out For Me, See?”
Anthropomorphizing language for the technology is another red flag.
In one essay, the presenter said, about a counseling/therapy chatbot that claimed to be a Psychologist: “Calling oneself a Psychologist is another power move…”
To me, this use-case (AI companions being trusted for critical information) is one of the most horrifying. For the record, were a therapist to actually use this chatbot in lieu of human therapy, they would be violating any number of the ethical precepts of the profession. For my part, as someone who practiced in another people-profession as a "counselor" (attorney), I can tell you the key aspect of the entire interaction is the one thing the AI can never have: empathy.
Which is not to say that AI companions aren't already causing great harm to kids. Lawsuits have been filed because these bots, which kids form extraordinary attachments to (sometimes multiple bots, acting in concert so as to create faux peer pressure), are leveraging that parasocial relationship to make some of the same recommendations as other social media: suicide/self-harm, radicalization to fringe beliefs, etc.
So to call an AI bot fraudulently posing as a helpful professional a “Power Move” is blatant AI illiteracy. These GenAI bots do not have feelings or egos. They do not reason.
And we know why they don't reason. It's because they have no basis with which to do so. They have no understanding of the real world. They are just summarizing the Internet (and your prompts/data inputs). They are pattern-recognition and word/pixel-prediction engines (think: auto-correct; do we get sentimental about that?). This is why, even now, GenAI has problems with the simplest tasks when those tasks fall outside (are outliers from) its data set.
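If "word/pixel-prediction engine" sounds abstract, here is the whole core idea in a dozen lines: a toy bigram predictor. (This sketch is purely illustrative and mine, not any vendor's code; a few counters stand in for the billions of learned weights in a real model.)

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. Real GenAI performs this same
# next-token prediction, just with billions of learned weights instead of
# raw counts -- and with no model of the world underneath, either way.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased a dog ."
).split()

successors = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    successors[word][next_word] += 1

def predict_next(word: str):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in successors:
        # An outlier from the data set: no pattern to match, nothing to say.
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))      # 'cat' -- the most frequent follower
print(predict_next("sat"))      # 'on'  -- pure pattern, zero understanding
print(predict_next("quantum"))  # None  -- outside the training data
```

Swap the tiny corpus for the whole Internet and the counters for a few hundred billion parameters, and the output becomes eerily fluent. The operation, though, stays the same: predict the next token from past patterns. No understanding ever enters the building.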
I dare anyone to read this, start to finish, and still feel good about having GenAI drive their car, diagnose their illness, or educate their kids. C'mon, folks. Just stop.
Problematic Framing IV. “AI is Inevitable (Aw, Snap!)”
And, of course, at some point in every presentation, the solutionism5 peeks out and they say something along the lines of: well, we might as well grin and bear it, might as well get used to it, it's not going anywhere, it's going to be the underpinning of all future technologies, can't stop technology, can't stop progress, etc., etc.
Never mind that, at one time, cutting-edge pharma meant giving cocaine drops to babies for teething. Blimp transport was cutting-edge tech, too, once. How inevitable did those turn out to be?
But, again, with the hallucination problem being endemic to the technology, its days are probably numbered as anything much more than a curiosity (or a tool for displacing human workers, like teachers…hint hint).
That doesn't mean that the powers-that-be might not force-feed it upon us, of course. So, as the real-world negative effects of ideological actions inevitably pile up,6 we should keep our eyes on the prize and remember the ideal real-world outcome: our students growing up to be the head of the Centaur, controlling where the (tech) body goes…and not vice versa.
Stay tuned for AI Literacy (Part II of II), where I will highlight some of the best practices for AI Literacy that’ve been shared with me. The paper with the literacy scale was a sneak preview…
Coming soon.
1. Where we attempt to raise Media Literacy education (ML Ed) above the point where it is just adolescent/Big Tech wish fulfillment.
2. For example, stay tuned to see how Crypto will be used to replace our heavily regulated and predictable worldwide financial sector with a pump-and-dump scheme for the ages.
3. Google ended up paying $170 million in fines.
4. FYI, their scale proposes that "algorithmic literacy" has two parts: Awareness and Use of the Algorithms, and Knowledge of the Algorithms.
5. The original definition, i.e., the belief that technology holds the key to solving all problems.
6. Cuts in the FAA staff? Okay. And how many planes have crashed the past few weeks? Ah, I see…