
AI Isn't the Threat — People Are

Person sitting at a table using a laptop with the ChatGPT interface open on the screen; a pair of glasses rests beside the laptop in a cushioned booth.

»I tend to think that most fears about A.I. are best understood as fears about capitalism.« When I read that line from Ted Chiang recently, it landed because it pulls the mask off the monster. A lot of what we call »fear of AI« is really fear of incentives: who funds the systems, who deploys them, who benefits when they scale, and who gets hurt when they fail.

Still, I don’t think »capitalism« is the final layer of the explanation. Capitalism doesn’t appear out of nowhere like weather. It’s a set of rules, norms, and defaults people agree on (or tolerate) and then keep reinforcing. Depending on how those rules are written and enforced, you get very different outcomes: extractive versions that squeeze people, and constructive versions that build real value. Either way, it’s a human project. So if we keep pushing the question back — who shaped the incentives, who chose the trade-offs, who decided what counts as »efficient« — we end up at the same place: people.

That framing matters because we talk about AI as if it had intentions. »AI rejected my application.« »AI denied my loan.« »AI replaced my job.« But models don’t wake up with goals. They don’t »want« profits or cost-cutting or speed. They execute objectives that were set for them, inside environments built by humans, for reasons that usually make sense to someone with power. Treating AI like an actor is convenient, because it makes responsibility fuzzy. It turns choices into »outcomes,« and decisions into »the system.« And »the system« is where accountability goes to evaporate.

The Suspicion That Nobody Is Really There Anymore

What many people actually seem afraid of isn’t some sentient machine. It’s the feeling of being processed without being seen. Was my job application read by a person — or did some screening model quietly drop me before any human ever formed an opinion? Will my credit decision come after a conversation where someone can ask questions, weigh context, and own the call — or will a score decide I’m statistically not worth the risk? If a medical claim is denied, is it because a professional reviewed it and can explain the reasoning, or because an optimization pipeline flagged it as too expensive? In each case, the anxiety isn’t »technology is evil.« It’s the suspicion that nobody is really there anymore — no one you can appeal to, no one who can say, plainly, »I made this decision and here’s why.«

That’s why this moment feels different from earlier waves of tech hype and tech panic. It’s not that automation is new. It’s not even that prediction at scale is new. What’s new is how easily decision-making can become opaque, outsourced, and defensible by default. You can deploy systems that shape people’s lives while keeping the logic hidden behind trade secrecy, complexity, or sheer institutional distance. Even when there are humans »in the loop,« they can be there in name only — rubber-stamping outputs they don’t fully understand, following procedures they didn’t design, pressured by targets they didn’t set. Trust doesn’t collapse because people suddenly learned what a neural network is. Trust collapses when judgment disappears and responsibility goes missing.

The Danger Isn’t Artificial Intelligence — It’s Artificial Responsibility

So yes: a lot of fear about AI is fear about capitalism — about what happens when powerful actors get new tools to scale decisions cheaply, fast, and with minimal friction. But the deeper fear is about humans using those tools to avoid being accountable. »The algorithm decided« becomes a moral escape hatch. The language changes first — decisions become »predictions,« denials become »risk controls,« exclusions become »optimization« — and then the culture follows. If nobody is responsible, then nobody is guilty. And if nobody is guilty, nothing ever has to change.

None of this requires believing that AI is inherently bad. AI can reduce drudgery, support professionals, and sometimes even make decisions more consistent than hurried humans do. But it can also amplify bias, harden inequality, and make institutions feel colder and less reachable. Which version you get is not written into the technology. It depends on what people choose to optimize, what they consider acceptable »collateral damage,« how transparent they’re willing to be, and whether there are real consequences when the system harms someone.

So if you want a headline version of my view, it’s this: the danger isn’t artificial intelligence. It’s artificial responsibility. The nightmare scenario isn’t machines taking over. It’s humans building systems that let them shrug and point — at capitalism, at the model, at »the process« — while real people get sorted, denied, and dismissed with no one left to answer the simplest question: who owns this decision?

Filed under: Psychology


Hello – my name is Florian. I’m a runner, blazing trails for Spot the Dot — an NGO raising awareness of melanoma and other types of skin cancer. Beyond that, I get lost in the small things that make life beautiful: the diversity of specialty coffee, the stubborn silence of bike rides, and the flashes of creativity in fashion and design. Professionally, I’m an organizational psychologist and communications expert — working at the intersection of people, culture, and language. Alongside my corporate work, I’m also a barista at Benson Coffee — a Cologne-based roastery obsessed with quality (and trophies on the side).
