In 2018, a team of researchers at Facebook discovered that its recommendation algorithm was consistently amplifying outrage, divisive content, and misinformation because that content drove more engagement. They wrote a report. The report sat in internal review for years. It was not acted on. The algorithm remained. The researchers who flagged it eventually left. The people who made that decision, to keep the algorithm and bury the findings, were not artificial intelligences. They had names, salaries, and mortgages. Several of them have since given talks about responsible technology.
This is not an isolated anecdote. It is the pattern. When you trace back the actual harms that have followed from technology in the last two decades (the surveillance, the election manipulation, the teenage mental health crisis, the amplification of extremism), you arrive, reliably, at humans who knew what was happening and chose not to stop it. The AI was a tool. The decision was a business one.
What AI Actually Is
Current AI systems do not have goals of their own. This is not a comforting platitude; it is a technical description of how they work. A large language model is a statistical machine that predicts plausible continuations of text based on patterns in its training data. It does not want anything. It cannot want anything. It produces outputs that can be harmful, biased, or misleading, but "harmful intent" is a category error when applied to a system with no internal states beyond activation values.
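To make that concrete, here is a minimal sketch of what "predicting a continuation" amounts to at each step. The function name and the NumPy implementation are illustrative, not any particular model's code; the point is that the model emits a score for every token in its vocabulary and one is drawn in proportion to those scores. That draw is the entire "decision".

```python
import numpy as np

def next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Pick the next token from a model's output scores (logits).

    There is no goal in this loop: scores in, a weighted random
    draw out. Everything that looks like intent comes from the
    training data and the deployment context, not from a want.
    """
    scaled = logits / temperature
    scaled -= scaled.max()        # numerical stability before exp
    probs = np.exp(scaled)
    probs /= probs.sum()          # softmax: scores to probabilities
    return int(np.random.choice(len(probs), p=probs))
```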
The harms that AI currently produces are almost entirely harms of deployment: who chooses to deploy a system, for what purpose, with what safeguards, against whom, and with what oversight. These are human decisions. They are made by specific people in specific organisations with specific financial interests. When a hiring algorithm systematically disadvantages candidates from certain postcodes, that isn't an AI going rogue. It's a human who specified what to optimise for and another human who decided not to audit the results.
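The audit that second human declined to run is not exotic. Here is a sketch of one: compute the hire rate per postcode group and flag any group selected at under 80% of the best group's rate, the "four-fifths" rule of thumb used in US employment law. The postcode groups and numbers are hypothetical, invented for the example.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Hire rate per postcode group, from (group, hired) pairs."""
    totals: dict[str, int] = defaultdict(int)
    hires: dict[str, int] = defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {group: hires[group] / totals[group] for group in totals}

# Hypothetical outcomes from a hiring model's decisions.
rates = selection_rates([
    ("SW1", True), ("SW1", True), ("SW1", False),
    ("E9", False), ("E9", False), ("E9", True),
])
best = max(rates.values())
flagged = [group for group, rate in rates.items() if rate < 0.8 * best]
print(flagged)  # ['E9'] on this toy data: 0.33 vs 0.67
```

A dozen lines. That it so often goes unwritten is a choice, not a technical limitation.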
This matters because fear of the wrong thing produces the wrong response. If you believe the danger is in the AI itself, some emergent malevolence in the weights, you look for technical solutions: better alignment research, more cautious deployment, smarter guardrails. These are not bad things. But they are the wrong primary focus if the actual danger is that companies are deploying systems for profit, cutting corners on safety, capturing regulators, and then using the complexity of AI as cover for decisions that are, at their core, straightforward commercial choices.
The People Who Built the Surveillance State
Consider what has already been built. Governments and private companies have access to detailed records of where you go, who you speak to, what you buy, what you read, how long you look at things, what you search for when you think no one's watching, your health data, your financial data, and in some jurisdictions, your face. Most of this data was collected legally, or in a legal grey area, with terms of service that ran to thousands of words and consent mechanisms designed to minimise the friction of agreeing while maximising the friction of declining.
The people who built these systems are very much alive and, in many cases, celebrated. They spoke at conferences about connecting the world. They wrote op-eds about democratising information. They received honours. The infrastructure of surveillance they constructed, which in any honest accounting represents one of the largest transfers of power from individuals to institutions in recorded history, was treated as a product success story. Nobody is in prison for it. Most of them are quite wealthy.
The question is not whether powerful technology can be misused. It obviously can. The question is whether the people controlling it have interests that align with yours. The historical record on this is not encouraging.
The Future Risk Is Also Human
This is not an argument that AI poses no long-term risk. Sufficiently capable autonomous systems that can set and pursue goals, self-improve, and operate at a speed and scale that outpaces human oversight would constitute a genuinely novel threat, one that isn't reducible to human decisions in the ordinary sense. That risk is worth taking seriously, and most researchers working on it do. The precautions they advocate are sensible.
But that risk is speculative and future-tense. The risk of humans using AI to concentrate power, undermine accountability, automate discrimination, and monetise psychological vulnerability is present-tense and demonstrated. Both can be true. The problem is that focusing exclusively on the science-fiction version provides a very convenient distraction from the things that are happening right now, to real people, with tools that were built by identifiable individuals who made specific choices.
The AI isn't the danger. The business model is.
Disagree? Say so.
Genuine pushback is welcome. Personal abuse is not.
