
Five years from now, the morning bell still rings in American schools, but it no longer means what it once did. It does not summon students into learning so much as into access: students log in, sync, prompt, generate, revise, and submit.
The rituals of school remain cosmetically intact. Backpacks still thud against desks. Teachers still greet students at the door. Whiteboards still glow. Chromebooks and tablets still emit the same small electronic sigh. But beneath that familiar choreography, something more consequential has shifted. In classrooms across the country, students have become increasingly fluent in the language of artificial intelligence while growing less secure in the foundational languages of human thought: close reading, sustained writing, mental computation, verbal reasoning, and the slow, frustrating, irreplaceable act of wrestling with not knowing.
In this version of America, AI literacy did not merely join traditional literacy; it leapfrogged it.
Students can prompt with polish. They can summarize a chapter they did not fully read, brainstorm an essay they did not fully think through, and solve a problem they do not fully understand. They can produce in seconds what once required minutes and in minutes what once required struggle. In a culture increasingly tempted to confuse speed with mastery, this all looked like progress, until it didn't.
Somewhere in this near-future landscape, Jonathan Haidt sits at his desk writing the sequel (or perhaps by then the threequel) none of us should want him to write. The first time, in The Anxious Generation, he documented what happened when we handed childhood over to social media and then acted surprised when attention fractured, mental health declined, and adolescence became a public health concern. This time, in the imagined sequel, The Artificial Generation, the story is not primarily about anxiety. It is about erosion: cognitive erosion, academic erosion, and workforce erosion. It is about the gradual normalization of outsourced thinking in a nation that had already begun softening its educational foundations before inviting one of the most powerful cognitive tools in modern history to roam essentially unguarded.
And the most tragic part is that the warning signs are not speculative; they are already here.
America is not entering the age of artificial intelligence from a position of educational strength. We are entering it while foundational academic performance remains fragile, especially in reading. The nation's own report card has been sounding the alarm. In 2024, average NAEP reading scores fell by 2 points for both fourth- and eighth-graders compared with 2022, leaving both grades 5 points below 2019. The National Assessment Governing Board said plainly that there has been "no nationwide rebound" in reading. It also called the decline a "direct and urgent threat to our collective future."
The details are even more unsettling. In 2024, about 40 percent of fourth-graders scored below NAEP Basic in reading, the highest share since 2002. Roughly one-third of eighth-graders scored below Basic, the highest share ever recorded for that grade. Grade 12 reading was also 3 points below 2019, and the average score was 10 points lower than in 1992. These are not decorative warning lights on the dashboard; they are the dashboard.
Math offers less comfort than some of our public conversations suggest. Fourth-grade math rose modestly from 2022, but remained below 2019. Eighth-grade math was flat compared with 2022 and still sat 8 points below 2019. At grade 12, math in 2024 was 3 points below 2019 and also 3 points below 2005, the start of the current trendline. Recovery, in other words, has not arrived wearing a cape; it has barely limped into the room.
That matters because a nation with strong literacy, strong numeracy, and strong habits of reasoning can introduce powerful tools from a position of resilience. A nation with weakened reading performance, incomplete math recovery, and widening academic gaps is introducing those same tools into a developmental ecosystem that is already compromised. If social media arrived in the middle of a mental health vulnerability, AI is arriving in the middle of a cognitive one.
This is where the temptation of technological triumphalism becomes especially dangerous. In the United States of Algorithmia, my shorthand for a nation too intoxicated by innovation to notice what it is quietly displacing, we are at risk of making the same mistake twice. First, we allowed digital platforms to colonize childhood before we understood the cost. Now, with generative AI, we are in danger of allowing machine-mediated cognition to colonize learning before we have built adequate guardrails around development, dependency, and educational purpose.
To be clear, this is not an argument that AI has already caused the NAEP declines. The evidence does not support that claim. The sharper argument is the more troubling one: we are rolling out a tool that can reduce cognitive friction at exactly the moment when too many students most need more practice with productive friction, not less. We are normalizing a technology that makes outsourcing easy in a country whose academic foundations are already showing signs of strain.
There is a difference between assistance and offloading. There is a difference between a tool that supports thinking and a tool that slowly displaces it. Helping a student revise a paragraph is not the same as training a student, subtly but steadily, to consult the machine before consulting himself. The former is scaffolding; the latter is substitution in a three-piece suit. That is the line we are in danger of crossing.
UNESCO (United Nations Educational, Scientific and Cultural Organization) saw this risk early and urged governments to regulate generative AI in schools, recommending an age limit of 13 for classroom use and calling for stronger policy frameworks around privacy, ethics, and teacher training. Its guidance was not anti-technology; it was pro-childhood. It was a reminder that developmental timing matters, and that powerful systems should not simply be dropped into the daily lives of minors under the confetti cannon of innovation.
Yet in the United States, we have largely proceeded through a patchwork of enthusiasm, improvisation, and wishful thinking. Districts race to produce AI guidance, while companies race to expand AI access, and adults race to sound future-ready. The phrase "AI literacy" is now recited with such reverence that one begins to wonder whether literacy itself has started to feel old-fashioned, like cursive with Wi-Fi.
But literacy is not old-fashioned, numeracy is not quaint, and attention is not obsolete. Memory is not a bug, and cognitive stamina is not a charming relic from a pre-digital age. These are the load-bearing beams of learning. They are the internal architecture students need if they are ever to use AI wisely rather than merely depend on it efficiently.
If the current academic trajectory continues, the warning flare becomes brighter. Between 2019 and 2024, both fourth- and eighth-grade reading fell 5 points nationally, while eighth-grade math fell 8 points. That does not authorize a prophecy, but it does justify a scenario. If a country already struggling to restore foundational skills begins treating AI-generated output as a routine substitute for effort, then the next five to ten years may not produce a dramatic collapse so much as a quiet hollowing-out. Fewer students reading deeply, writing independently, and calculating with confidence. More polished outputs with less internal ownership; more assistance and less authentic agency.
That is the true dystopian turn, not the cartoonish Simpsons version where students forget how to hold pencils and robots hand out diplomas. The more plausible version is sadder and more bureaucratic: students "appear" capable, their outputs look increasingly refined because the software performs excellently, and their confidence is inflated by that external support. Underneath, too many are losing fluency in the invisible work of thinking. The paragraph is generated, the answer is supplied, the explanation is polished, and the cognitive reps never happen.
Social media hijacked attention. Generative AI, if left developmentally unguarded, risks hijacking effort.
And once effort itself begins to feel optional, the erosion does not remain confined to test scores. It shows up in voice, in judgment, in resilience, in the ability to improvise without a digital crutch, and in the capacity to hold complexity in one's own mind long enough to do something meaningful with it. That is not merely an academic problem. It is a civic, workforce, and human development problem.
Because the workforce, meanwhile, is not standing still while schools sort out their feelings.
Here the irony grows sharper. We are teaching students to become more fluent with AI at the very same moment the labor market is beginning to reshape itself in ways that make many entry-level human roles less available, less stable, or less necessary. This is not a story of all jobs disappearing. It is something more unsettling: a story of the bottom rungs of the ladder shifting while students are still trying to reach them.
The World Economic Forum's Future of Jobs Report 2025 projects that by 2030, macrotrends will create 170 million jobs and displace 92 million, for a net increase of 78 million jobs globally. That sounds reassuring until one reads the fine print. The same report describes significant disruption, with clerical and administrative roles among those expected to decline and with employers increasingly prioritizing analytical thinking, resilience, flexibility, and AI-related skills. In labor-market terms, the ground is not vanishing; it is moving fast.
Federal Reserve research adds a colder note. A March 2026 Fed summary highlighted findings that entry-level employment declines are showing up in occupations where AI primarily automates work, while more experienced workers in those same fields are more insulated. The veteran on the ladder may be fine; the beginner looking for the first rung may not be.
AI adoption itself is not theoretical anymore. A Federal Reserve note from April 2026 reported that about 18 percent of firms had adopted AI by the end of 2025, with broader surveys suggesting even wider exposure through workers' firms and tools. The technology is not patiently waiting outside the schoolhouse while committees finish their slide decks. It is already in the economy, already in workflows, already remapping what counts as routine human work.
And yet this is where the story becomes more nuanced, not less. AI can absolutely increase productivity and create value. PwC's 2025 Global AI Jobs Barometer found that AI is linked to a fourfold increase in productivity growth, that workers with AI skills saw a 56 percent wage premium in 2024, and that the skills employers seek are changing 66 percent faster in jobs most exposed to AI. That is real, and it matters. It is not the voice of panic so much as the voice of transition.
But that productivity upside strengthens the argument for guardrails rather than weakening it. The labor market is not simply rewarding access to AI. It is rewarding workers who can pair AI fluency with judgment, discernment, adaptability, and higher-order thinking. In other words, it is rewarding precisely the internal capacities that schools risk underdeveloping if AI use becomes a substitute for foundational cognition rather than a supplement to it.
That is the bitter symmetry of the Artificial Generation. We may be preparing students to use the very systems that are simultaneously dissolving the entry-level roles that once trained human beings into adulthood. We may produce graduates who are elegant at prompting and brittle at thinking, fluent in interfaces and underpowered in judgment, highly practiced in tool use but less practiced in the invisible, muscular work of human problem-solving that increasingly distinguishes valuable workers from replaceable ones.
This is why the debate cannot be reduced to whether one is "pro-AI" or "anti-AI." That framing is as childish as it is convenient. The real question is whether we are willing to govern a cognition-altering technology with the seriousness that childhood deserves.
NIST's AI Risk Management Framework and its Generative AI Profile exist because these systems carry meaningful risks that must be identified, measured, and managed. NIST is explicit that the framework is voluntary and intended to help organizations incorporate trustworthiness into design, development, use, and evaluation.
So what would seriousness look like?
It would mean age and developmental guardrails, not generic access under the banner of modernization. It would mean clear school policies that preserve human-only zones for reading, writing, speaking, and problem-solving. It would mean transparency for families about when AI is being used, how it is being used, and what must still be learned through direct human effort. It would mean assessment models that prize thinking, not merely output. It would mean national standards for child-facing AI products that are closer to aviation than app-store vibes.
Most of all, it would mean resisting the seductive lie that every frictionless advance is educational progress. Some friction is not failure; it is formation. The pause before an answer, the labor of drafting a sentence, and the mental stretch of solving before searching are not inefficiencies to be engineered out of childhood. They are how childhood becomes capability.
We have already lived through one era in which adults mistook technological novelty for developmental harmlessness. We handed children screens and social media, congratulated ourselves on connectivity, and then watched as anxiety, fragmentation, and performative identity surged across adolescence. We do not need another national postmortem, this time on cognition.
The goal is not to prevent artificial intelligence; that would be foolish and counterproductive. The goal is to prevent the artificial generation.
The goal is to make sure AI literacy does not come at the expense of literacy itself. That efficiency does not replace effort. That augmentation does not become substitution. That the race for global dominance in machine intelligence does not quietly overrun the developmental dignity of childhood in the process.

Because if we fail, then five years from now Jonathan Haidt may indeed find himself writing The Artificial Generation. And when that day comes, the most painful part will not be the title. It will be the recognition. The evidence was visible. The students were vulnerable. The guardrails were possible. And once again, the country moved faster than its wisdom.
