In 1440, Johannes Gutenberg finished his printing press. Within fifty years, the number of books in circulation in Europe increased from around 30,000 to roughly 9 million. Literacy rates began a long, slow rise. Access to information exploded. And a significant number of contemporary observers concluded that this was a catastrophe. Hieronimo Squarciafico worried that the abundance of books would "enfeeble" the mind. Conrad Gessner, the sixteenth-century naturalist, wrote anxiously about the "confusing and harmful abundance of books." The concern was consistent: too much information would overwhelm the memory, corrupt the judgement, and produce a population that read extensively but understood nothing deeply.
They were not entirely wrong, and they were not entirely right. And this pattern (new information technology, predictions of cognitive catastrophe, partial vindication and partial refutation) has repeated itself often enough that we should approach current versions with calibrated scepticism and calibrated concern in roughly equal measure.
What Has Actually Changed
The internet has produced a set of genuinely novel cognitive conditions. Information retrieval has become trivially easy: any factual question can be answered in seconds with reasonable reliability. This has altered the economics of memory. When retrieving a fact requires sustained mental effort, looking it up in a book, recalling it from memory, asking someone who knows, there is an incentive to retain it. When retrieval is instant, the incentive for retention falls. Research on this effect, known as the "Google effect", has shown that people are less likely to remember information they know can be easily retrieved, and more likely to remember where to find it. This is not stupidity. It is rational adaptation to changed retrieval costs.
The question is whether this adaptation costs something. The answer appears to be yes, in a specific way. Deep understanding of a subject, the kind that allows you to reason from first principles, make novel connections, spot errors in received wisdom, and generate original ideas, appears to require more than the ability to retrieve information. It requires that information to be integrated into a mental model that has been built through sustained engagement, active processing, and repeated return to the material. You cannot retrieve your way to this kind of understanding. You have to build it, and building it is effortful and time-consuming in ways that search does not replace.
The Attention Architecture
A second significant change is to the architecture of attention. Deep reading, the kind that fiction, long-form non-fiction, and dense academic texts demand, is a learned cognitive skill: it depends on focused engagement maintained over extended periods. It is not the natural state of an information-seeking brain. It is a capacity that has to be developed through practice and maintained through continued use.
The internet's information environment is structured around exactly the opposite kind of engagement: short, frequent, varied, and stimulating, constantly rewarding novelty-seeking. Studies of reading behaviour online consistently show F-shaped or Z-shaped scanning rather than linear reading, with attention concentrated at the top of pages and rapidly declining. This is not a failure of discipline. It is a rational adaptation to an environment where most individual pages reward skimming rather than reading, because most pages don't contain sustained arguments worth following at full attention.
Nicholas Carr's concern in The Shallows, that habitual online reading might be degrading the capacity for deep reading even when we're not online, has some empirical support. Cognitive skills, like physical ones, decline with disuse. If the main practice of reading is scanning for relevant information, the deep-reading capacity atrophies.
The "Differently" Is the Important Word
The generational catastrophist framing, that the internet is making people stupid in some absolute sense, is not well supported. IQ scores, which measure certain cognitive capacities reliably, have been rising for a century (the Flynn effect) and only recently plateaued. The capacity for the kinds of reasoning that IQ measures has not declined. What may have changed is the distribution of cognitive capacities: better at parallel processing, faster retrieval, broader but shallower knowledge, worse at sustained linear reasoning, deep integration of information, and the kind of slow reflective thinking that produces genuine synthesis.
If true, this is not obviously a disaster. Different cognitive environments produce different cognitive profiles, and the profile the internet produces has genuine strengths. The question is whether the specific things that may be declining, deep comprehension, sustained attention, the ability to construct integrated understanding from extended engagement, are things that matter for the problems the future will demand we solve. They probably are. The hard problems, climate, governance, scientific complexity, social coordination, resist shallow processing.
We are not becoming stupider. We are becoming differently capable, and the difference is not yet clearly in our favour.
Disagree? Say so.
Genuine pushback is welcome. Personal abuse is not.
