On AI maximalism
Well, the UK – like the US – is now fully AI-pilled, with our leaders hailing AI as our economic saviour and a national security sine qua non. Most of this is posturing, not considered policy, so I feel it warrants some equally rushed, speculative reactions.
AI is the weirdest technology of my lifetime. It does mysterious things in unfathomable ways, and whatever we expect of it, it will prove us wrong. That makes it hard to predict, but the impacts of this maximalist turn will also be human, and those are easier to pin down.
As Ben Ansell says, the UK gov looks shockingly credulous on AI. No one serious in tech (this excludes VC boosters, obvi) expects AI to double productivity in five years. But Starmer does. These will at least be great years for AI sales reps and mega-consultants. The FTSE and NASDAQ will tumesce, because they’re measures of economic activity and we’re set for tons of that. It looks like we’re locked into a phase of innovation driven by cool shit rather than user needs. ML engineers and data scientists will be even more valuable. UX designers and researchers won’t.
I initially read AI maximalism as bad news for the responsible tech/AI community. Maximalism gives bad companies an excuse to skip the ethics stuff, to label it woke obstruction and expunge it just as they’re doing with DEI. I expect precautionary approaches and harm anticipation to fall out of fashion.
But here’s the thing: without mitigation, the harms will happen. This is obviously bad from a normative, ethical point of view. But from an ugly pipeline perspective, these bad companies will still need the skills of people like us. It’ll just be more after-the-fact damage mop-up, more ‘let’s not do that again’.
Unedifying stuff, but it’s not all bleak: the good companies will keep responsible AI work going. They’ll know that despite the ocular cartoon-dollar-sign transmogrification all around, there’s still benefit to treating people with respect, not actually harming them, etc. I expect to gravitate further toward those companies.
I suspect much will depend on the dissonance between political and public perceptions of AI. Today’s public mostly fears AI, for reasons both justified and unjustified. Placing your chips on AI despite this is the politics of paternalism: you hate this but you’re wrong, trust us while we do more of it. Clearly, this is risky (although I presume these politicians feel the precautionary alternative is riskier).
Count me among those who believe the disemployment threat is real and looming. With this acceleration, the wave presumably breaks sooner, but no government or nation is anywhere near ready to handle its impacts. Unless people soon see AI vastly improving their non-work lives, they’ll only see downside, which I expect to culminate in mass protests that could sweep alternative parties (probably the far right, but the left might have opportunities too) into power. Live by the sword.