In the last year, I estimate I’ve been asked the question “what if you’re wrong?” over 25 times. Every single time the question comes with an undercurrent of venom — the suggestion that I’m being an asshole for daring to question the wondrous AI bubble.
Every single person who has asked this has been poorly-read — both in terms of my work and the surrounding economics and technological possibilities of Large Language Models — and believes they’re defending technology, when in reality they’re defending growth, and the Rot Economy’s growth-at-all-costs mindset.
Ed Zitron on defenders.
The other day I settled in for my afternoon nanna nap on the sofa, and made the Life Choice of putting on Adam Conover’s podcast to do so. Normally a perfectly fine choice, wherein I can doze off listening to some person promote some recent book on plastic use or unionisation or some other relaxingly progressive cause. Except for This One Episode, in which Conover, an avowed slop hater, talks to a guy called Ethan Mollick.
I have to admit, I’ve never actually heard one of these “AI” grifters talk before. I have my own drugs of choice when it comes to self-harm, as we all do, but hate-listening has been something I’ve tried to wean myself off, after some flirtations with the practice in my twenties. So this was kind of my Baby’s First Exposure and, needless to say, I did not end up actually getting that nap.
Conover definitely softballed the discussion, even though it’s obvious he didn’t find Mollick persuasive. I’m sure some people will give him shit for that but, well. I do kinda get it; he’s a comedian, not an investigative journalist, and his job is mostly to get his guests talking, not challenge them on their bullshit. And he’s had on plenty of people critical of algogen and its various sins. So, okay. He gets one shill and mostly sits there, mouth all but bleeding from how hard he’s biting his tongue, while the guy coils out enough rope to hang himself with. Just don’t make a habit of it.
That whole discussion, though? Dear gods does Zitron’s quote here fucking nail the core frustration of it. “What if you’re wrong?” and “we can’t pretend this is just going away” and, bitch, why the fuck not? Or, more accurately: what a disingenuous fucking way of framing the whole discussion, one that’s really fucking telling of the actual bad-faith bait-and-switch these planet-destroying grifters are pulling. Because all the dead-eyed platitudes of “we need to get on board to shape the AI future we really want” are very, very carefully designed to steer critics away from the actual, glaringly obvious solution.
Just ban the fucking thing.
Or you don’t even have to ban it; regulate the fucking shit outta it. Actually enforce copyright on training data. Make it illegal to run any company whose tools can produce deepfakes; throw those fucking CEOs in fucking jail. Make distributing the deepfakes a crime while you’re at it, and force tech companies to run the frontline enforcement. This is the actual “third option” between “all-in” and “ignore it and hope it will go away”; the one all Mollick’s arguments are carefully constructed to steer you away from. (And, make no mistake, they are carefully constructed; motherfucker teaches at a business school, and this shit is literally MBA 101.)
Because we can do these things; because we have done these things. To tobacco, to heroin, to CFCs, to nuclear weapons, to CSAM. We have safety standards for houses and bridges and cars and freakin’ pyjama fabric. We can do it for tech, too. In a way that’s actually effective, that actually builds safer communities and a better, more human world (and not in the manner of the current crop of disingenuous, surveillance-state-in-disguise “for the children” moral panics).
But this is the actual game of grifters and shills like Mollick: to try and get you to forget that a world without hyperscaled, force-fed, corporate-controlled algogen slop is possible. Will the fences we build around it be imperfect? Sure. But “we can’t do everything perfectly” is not actually an argument for doing nothing. Nor is “some billionaires might feel sad in their bank accounts.” Nor is “but China!”
You can’t put the genie back in the bottle, but you sure can sit him down in the corner and tell him to fucking behave or else.
So long as we don’t let these disingenuous motherfuckers make us forget it’s an option.