The conversation about AI tends to oscillate between two positions. One holds that AI will replace human intelligence in most domains, rendering much of human cognitive work obsolete. The other holds that there is something irreducibly human about thought, and that AI will forever remain a sophisticated tool rather than a genuine mind. Both positions are more confident than the evidence supports.
A more useful question is not whether AI or humans are better, but what each actually does well.
What AI does well
The most distinctive feature of current AI systems is breadth. A large language model has been trained on a substantial fraction of recorded human knowledge, not deeply, but widely. It can move between domains with an ease no human generalist can match, because it never had to acquire each domain sequentially over a working lifetime. It doesn't get tired, doesn't carry its morning into the afternoon, and has no stake in being right. The absence of ego, in particular, is underrated: AI will tell you your argument has a problem without managing your feelings about it.
AI is also extraordinarily consistent in a narrow sense. It applies the same process to the ten-thousandth query that it applied to the first. This is valuable in domains where variance is costly: legal review, medical screening, quality control. A human expert might be sharper on a good day, but AI doesn't have bad days in the relevant sense.
What humans do well
Humans have experience, not just information about experience but the fact of having had it. This is not a sentimental distinction. Experience produces a kind of knowledge that is not available in text: how decisions actually feel under uncertainty, what it is like to have been wrong in a particular way and survived it, how much the stakes of a situation change your perception of it.
Judgement, in the sense of knowing what matters in a specific situation with all its particularities, is still largely a human capability. AI can offer analysis but tends to treat situations as instances of categories. Humans can perceive when a situation is anomalous in a way that the category doesn't capture. This may be a solvable problem for AI. It is not currently solved.
What the combination produces
The genuinely interesting case is not AI versus humans but the combination. An analyst working with AI that has read everything written about the field moves faster, covers more ground, and catches more blind spots than either alone. A doctor with AI flagging patterns in imaging data is better at the diagnostic task than the doctor without it, not because the AI is the doctor, but because the combination produces a different cognitive process.
The failure mode on both sides is the same: overconfidence. Humans who ignore AI because they are proud of their expertise miss the breadth. AI deployed without human judgement misses the context. Neither alone produces wisdom, which tends to require the slow accumulation of getting things wrong in ways that matter, something that, for the time being, remains a distinctly human speciality.
Disagree? Say so.
Genuine pushback is welcome. Personal abuse is not.
