This week, news broke that OpenAI, waltzing right on past last month’s global melodrama that was the unexpected firing and rehiring of CEO Sam Altman, is looking to raise additional funding at a $100 billion valuation, or roughly 100 times annual recurring revenue. That valuation-to-revenue multiple isn’t unprecedented for high-growth SaaS startups, but it’s at the high end of the curve, especially in 2023’s challenging market, and for a startup with atypically large (and fixed) operating expenses that almost collapsed last month.
As my seven regular readers will know, I’ve spent the last few sporadic posts arguing that AI is not nearly as valuable as we are being led to believe, while positing that the real risk of AI is how its proponents will use our perception of it to remake society for their own purposes.
Last week, Dan McQuillan, lecturer in Creative and Social Computing at Goldsmiths, University of London, and author of Resisting AI, wrote a much more eloquent version of the argument. It’s worth reading in its entirety, but here’s the key bit:
The real issue is not only that AI doesn't work as advertised, but the impact it will have before this becomes painfully obvious to everyone. AI is being used as a form of 'shock doctrine', where the sense of urgency generated by an allegedly world-transforming technology is used as an opportunity to transform social systems without democratic debate.
“Shock doctrine” refers to activist Naomi Klein’s 2007 book that argued that crises such as the Iraq War have been used, or sometimes even deliberately incited, to quickly advance unpopular policies without democratic resistance. McQuillan’s extension of the concept to the tech industry’s obsession with AI feels apt. Just as false “intelligence” during the buildup to the Iraq War was used to curtail civil liberties indefinitely at home and declare permanent war abroad, mere predictions of AI’s potential — made by the very people who stand to benefit the most from inflated perceptions of the technology — are being used to justify a furious torrent of AI policymaking across the globe.
I would argue that Klein’s concept further explains why what I’ve been calling the “Big Promise” — the shift from big tech promising to fix society to big tech promising to replace society — has become necessary. After big tech’s decades of failures to meaningfully fulfill its already audacious promises, society now requires a much bigger shock to compel it to act. Hence, as McQuillan and others have pointed out, the industry’s constant scaremongering about the hypothetical utopian/apocalyptic potential of AI, juxtaposed with big tech’s near-total lack of concern for AI’s already extant harms.
As McQuillan notes, though they wrap their work in protectionist language, in practice many policymakers seem all too keen to play along by selling so-called “responsible” AI to the public as a way to revitalize broken civil society, as if sagging bridges can be repaired by throwing computation at them:
The Prime Minister says he will "harness the incredible potential of AI to transform our hospitals and schools" while ignoring leaking roofs in the NHS and the literally collapsing ceilings in local schools. This focus on the immaterial fantasies of AI is a deliberate diversion.
So far, the US’s regulatory response has been much better received by tech activists than the UK’s. But McQuillan’s observation that the UK government is using the unfulfilled promise of digital technology to shirk responsibilities in the physical world is fascinating because it contains shades of what I wrote about two years ago in reference to web3:
…the point of web3 isn’t to interact or reach parity with the physical world, it’s to create a new one where only actions and items that can exist on the blockchain are valuable.
AI is not crypto in that it has proven value in areas like drug discovery and climate modeling. But so far, the main impact of the mainstreamification of AI via large language models has been to justify layoffs and to further concentrate power in big tech companies. McQuillan calls this phenomenon “algorithmic Thatcherism”:
Real AI isn't sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations.
Remember how web3/the metaverse promised to trap us all in the gig economy, privatize currency itself, and, as I mentioned above, flatten human relationships into events that can be represented on the blockchain? So far, the AI hype wave is leading us down almost exactly the same path,1 just much more effectively.
The only question that remains is how far down it we’ll follow.