In 1839, when photography was new and portraits were expensive, people sat very still for the camera and looked serious. Early photographs of ordinary people have a strange, composed quality: subjects appear more solemn, more formal, more considered than anyone actually is in daily life. They knew the image would exist, be seen, be permanent. They performed accordingly. The photographs tell us something real about those people, and something real about what people do when they know they're being recorded. Those are related but different kinds of truth.
Large language models are trained primarily on text produced by humans who knew, or at least suspected, that what they were writing might be seen. The result is not the best version of humanity, not the average version, but the performed version, and that distinction matters more than it first appears.
What the Training Data Is
The training corpus of a large language model is not a random sample of human thought. It is a sample of human thought that was written down and made available, which selects heavily for a specific type of thinking. Academic papers, books, journalism, forum posts, social media, Wikipedia: all of these are produced in a context where the author knows they are communicating to an audience. The private thoughts that were never written, the conversations that happened and left no record, the everyday practical reasoning that guides most of human life without ever becoming text, none of this is in the training data.
This creates a specific distortion. The performed version of human thought is more certain, more articulate, more organised, and more confident than ordinary human cognition actually is. People who write publicly are selecting for ideas they want to defend, positions they're willing to attach their name to, formulations they've managed to get right enough to publish. Private thought, with its tentative, contradictory, self-undermining, half-formed character, is systematically absent.
The Best-of Argument
There is a version of the argument that AI reflects the best of humanity: it has access to the accumulated written knowledge of centuries, the great works of literature and science and philosophy, the clearest thinking that has been done on most subjects. In this sense, a language model trained on the world's written output knows Aristotle and Darwin and Shakespeare and Einstein. Whatever intelligence it demonstrates is built from the highest achievements of human thought.
The problem with this argument is that knowing the output of great thinking is not the same as having the process that produced it. The texts are there; the struggle, doubt, false starts, and revision that generated them are not. A model that can reproduce Darwinian reasoning in articulate prose is not demonstrating the capacity for observation, anomaly-detection, and sustained conceptual revision that produced the theory of evolution. It is demonstrating the capacity to pattern-match on the outputs of that process. These are related capacities, but they are not the same one, and confusing them is a category error about what intelligence actually is.
A language model that has read every great novel ever written can produce text that resembles great novels. It cannot demonstrate what great novels actually demonstrate: a distinct sensibility, shaped by a specific life, responding to the world in a way that nobody else could have responded to it. The outputs can be similar. The source is entirely different.
The Average Argument and Its Problems
The more common claim, that AI is a statistical average of human output, is also misleading, because averages flatten the variance that is actually interesting. If you averaged all the music ever recorded, you would get something that sounds like no music anyone would actually want to listen to. Averages eliminate the outliers, and in creative, intellectual, and ethical domains, the outliers are the thing. The average response to a moral dilemma is not the morally interesting response. The average aesthetic preference doesn't produce art. The averaged human is nobody in particular, and nobody in particular has ever done anything worth attending to.
What AI actually reflects is the modal human output in specific domains: the most common formulations, the most frequently expressed views, the most typical ways of framing things. This is a specific kind of representation that is simultaneously very broad (it covers a lot of territory) and oddly narrow (it covers only the most common versions of all of it). The distinctive, the eccentric, the deeply personal, and the genuinely original are all underrepresented relative to their significance.
AI is a mirror of what humans produce when they perform for each other. What it's missing is everything that happens when they stop performing, which is most of what actually makes them interesting.
Disagree? Say so.
Genuine pushback is welcome. Personal abuse is not.
