»I tend to think that most fears about A.I. are best understood as fears about capitalism.« When I read that line from Ted Chiang recently, it landed because it pulls the mask off the monster. A lot of what we call »fear of AI« is really fear of incentives: who funds the systems, who deploys them, who benefits when they scale, and who gets hurt when they fail.
Still, I don’t think »capitalism« is the final layer of the explanation. Capitalism doesn’t appear out of nowhere like the weather. It’s a set of rules, norms, and defaults that people agree on (or tolerate) and then keep reinforcing. Depending on how those rules are written and enforced, you get very different outcomes: extractive versions that squeeze people, and constructive versions that build real value. Either way, it’s a human project. So if we keep pushing the question back (who shaped the incentives, who chose the trade-offs, who decided what counts as »efficient«), we end up at the same place: people.
That framing matters because we talk about AI as if it had intentions. »AI rejected my application.« »AI denied my loan.« »AI replaced my job.« But models don’t wake up with goals. They don’t »want« profits or cost-cutting or speed. They execute objectives that were set for them, inside environments built by humans, for reasons that usually make sense to someone with power. Treating AI like an actor is convenient, because it makes responsibility fuzzy. It turns choices into »outcomes,« and decisions into »the system.« And »the system« is where accountability goes to evaporate.
The Suspicion That Nobody Is Really There Anymore
What many people actually seem afraid of isn’t some sentient machine. It’s the feeling of being processed without being seen. Was my job application read by a person, or did some screening model quietly drop me before any human ever formed an opinion? Will my credit decision come after a conversation where someone can ask questions, weigh context, and own the call, or will a score decide I’m statistically not worth the risk? If a medical claim is denied, is it because a professional reviewed it and can explain the reasoning, or because an optimization pipeline flagged it as too expensive? In each case, the anxiety isn’t »technology is evil.« It’s the suspicion that nobody is really there anymore: no one you can appeal to, no one who can say, plainly, »I made this decision and here’s why.«
That’s why this moment feels different from earlier waves of tech hype and tech panic. It’s not that automation is new. It’s not even that prediction at scale is new. What’s new is how easily decision-making can become opaque, outsourced, and defensible by default. You can deploy systems that shape people’s lives while keeping the logic hidden behind trade secrecy, complexity, or sheer institutional distance. Even when there are humans »in the loop,« they can be there in name only — rubber-stamping outputs they don’t fully understand, following procedures they didn’t design, pressured by targets they didn’t set. Trust doesn’t collapse because people suddenly learned what a neural network is. Trust collapses when judgment disappears and responsibility goes missing.
The Danger Isn’t Artificial Intelligence: It’s Artificial Responsibility
So yes: a lot of fear about AI is fear about capitalism, about what happens when powerful actors get new tools to scale decisions cheaply, fast, and with minimal friction. But the deeper fear is about humans using those tools to avoid being accountable. »The algorithm decided« becomes a moral escape hatch. The language changes first (decisions become »predictions,« denials become »risk controls,« exclusions become »optimization«), and then the culture follows. If nobody is responsible, then nobody is guilty. And if nobody is guilty, nothing ever has to change.
None of this requires believing that AI is inherently bad. AI can reduce drudgery, support professionals, and sometimes even make decisions more consistently than hurried humans do. But it can also amplify bias, harden inequality, and make institutions feel colder and less reachable. Which version you get is not written into the technology. It depends on what people choose to optimize, what they consider acceptable »collateral damage,« how transparent they’re willing to be, and whether there are real consequences when the system harms someone.
So if you want a headline version of my view, it’s this: the danger isn’t artificial intelligence. It’s artificial responsibility. The nightmare scenario isn’t machines taking over. It’s humans building systems that let them shrug and point — at capitalism, at the model, at »the process« — while real people get sorted, denied, and dismissed with no one left to answer the simplest question: who owns this decision?
