
The Big Promise, revisited

A year+ into the AI boom, what has the tech industry learned about manipulating society with the promise of existential advances?
Published on Dec 08, 2023

Several months ago, I wrote about what I cheekily called “the Big Promise”: how the tech industry is shifting from justifying its enormous valuations with society-changing technologies that never seem to pan out (AI assistance, self-driving cars, death cures, etc.) to justifying those valuations with new technologies that can replace society entirely (AI overlords, the metaverse, web3, etc.). I also pondered whether Neil Postman’s 1990 declaration on the fundamental uselessness of computers still holds up in the LLM era, or whether the latest AI boom will finally give computers the agency to do something of value without human intervention. In recent weeks, the two thoughts have been interacting in interesting ways.

For example, here’s a new Bloomberg story about Presto, a fast-food ordering “AI” company, getting in trouble for revealing that, in fact, humans mostly or partially handle up to 70% of its orders:

Presto has said those off-site workers are an added expense that will keep rising as it expands to more locations. Still, the company, which has posted operating losses since going public last September, expects the need for humans to ease as its system gets smarter. OpenAI might help it get there, with Presto saying its development platform will improve its drive-thru chatbot on several fronts, including sounding more natural.

Yup, Postman still looks pretty good. But the Presto story, and others like it, is also a sign that society is catching on to the absurdity of tech’s habit of justifying its value on the promise of future advancements. It’s no different from how Uber, for example, burned billions of dollars on the promise that it would become profitable by investing in driverless cars, before potentially achieving profitability in reality by cutting costs and diversifying — but not before clogging city streets worldwide and lobbying to stop public transit projects across America. This time, regulators and investors caught on a little bit earlier.

Yes, I’m extrapolating from a single example, and yes, there are certainly valuable uses of LLMs emerging. The question I’m asking is: how valuable are they, really?

So far, about as valuable as Postman might have predicted: marginally, and with significant tradeoffs. Even the most optimistic studies commissioned by the companies with the most to gain from LLMs, like Microsoft, are finding that their value comes mostly from gains in worker efficiency. Efficiency gains are certainly valuable, and they’ll likely grow beyond the 18% figure cited in the study over time. But efficiency gains are not the kind of world-changing intervention that justifies an $80 billion valuation on $1 billion of total revenue (and “unknown,” which is to say negative, net revenue).

To justify that kind of money in today’s high-interest-rate environment, after decades of companies burning investors with the promise of exponential leaps just around the corner, you need a better business plan than “don’t worry, we’ll R&D our way to profitability once we’ve captured the market.” You need a Big Promise, like, I dunno, that you’re creating an AI overlord that will eventually solve all of our problems for us. And then, to keep those investors on the hook when your actual product releases prove disappointing at scale, you need to make sure that everyone always believes you’re just a few steps away from that Big Promise, preferably by generating a series of irresistibly juicy stories that lead to international headlines.

Ahem.

Two weeks ago, just after Sam Altman was reinstated as OpenAI CEO following his unexpected firing by the company’s board, an event that prompted a global media circus, The Information and Reuters reported that his ouster was due, in part, to warnings from employees about the dangers of a new breakthrough from the company, enigmatically named Q*. Sources have since denied that the board was even aware of Q* when it fired Altman, but that hasn’t stopped the rumor mill from concocting wild theories about what looks likely to be a modest advance based on work that Google has already published.

I understand that these hype cycles are just the way business works in 2023. Eventually, the true value of these technologies will be realized, winners and losers declared, and we’ll move on to the next Big Promise. So why pay so much attention?

The danger of the Big Promise isn’t the money and time wasted. It’s that in an era of increasingly dysfunctional global information networks and regulatory regimes, extremists are learning to weaponize the combination of social networks, techno-utopianism and political polarization to influence public opinion and policy at scale.

Again, Postman had the answer all along: “In a world populated by people who believe that through more and more information, paradise is attainable, the computer scientist is king.”

In July, The New Republic published a fascinating article making the case that at some point, tech’s bold promises to build self-driving cars and next-generation mass transit like the Hyperloop transformed from sincere R&D efforts into cynical strategies to convince policymakers and the public to forestall conventional government action to build public transportation — all while those same companies began to profit not by delivering on their promises, but by saddling car buyers with subscription-based add-on services and even mining the data that modern, ultra-connected automobiles generate. Learning from those successes, folks like Elon Musk and the Koch brothers are now actively pursuing this strategy of using tech hype to deter much-needed government action in other realms as well.

The Presto story contains a glimmer of hope: we finally seem to be developing a societal BS detector for those types of promises. As big tech escalates from society-changing to society-replacing promises, however, the stakes are becoming much higher, and it’s important that we don’t repeat the same mistakes. Given the attention our policymakers are already paying to AI, and how much of society its promoters promise it will impact, we should be just as cautious about how the powerful might manipulate our dreams and fears about AI to serve their ends as we are about the technology itself.
