Substack is probably not going to solve academic publishing
But there are other ways platforms committed to rationality can help, if they’re willing to work with open science activists
Buried in a Brad DeLong readout of a recent Substack party in San Francisco (thanks to my dad for sending it my way) are the following three bullet points which may be of interest to academic publishing folks:
Workers for SubStack are very curious about academic publishing—a system with a very broken business model but one in which there are a large group of people who are institutionally driven to write frantically, and then find some kind of audience.
Is SubStack a way to shift from traditional academic publishing to the open web, fulfilling the university’s mission?
And nearly everyone at the event is firmly committed to the creation of a rational, accurate information- and analysis-full public sphere.
I don't know what was said other than this very terse summary. There's clearly more to Substack employees' thinking than those three bullet points. But as a long-time participant in and observer of the academic publishing ecosystem, I suspect Substack may be, like many startups before it, underestimating the complexity of the academic publishing market, and I would invite the employees mentioned by DeLong to read my 2022 post on the subject. Here's the gist, which I think holds up quite well years later:
At first glance, the staid academic publishing industry seems like a perfect fit for disruption in the form of an enterprising startup. Its total addressable market, or TAM (~$19b revenue/year), is more than big enough to support a unicorn or two. It relies on centuries-old processes rooted in the limitations of print that have been proven to be ineffective and inequitable. It is dominated by a few large mega-corporation incumbents who, like the newspaper industry before them, have become used to extracting enormous profit margins for activities that produce questionable value.
Getting to that endpoint is going to be far more challenging than in other, simpler markets, because academic publishing is a byzantine, multi-level enterprise-ish market. The customers who pay most of the bills for publishing directly (libraries) are not the same as the customers who pay for research (foundations and governments), who are not the same as the customers who pay researcher salaries (universities and labs). And none of the payers are the key stakeholders, the researchers themselves. Despite being different customers, their decisions are tightly coupled. Institutions that pay researcher salaries evaluate researchers based on the success of the research that funders pay for, which is in turn measured as an output of the publishing that libraries pay for.
Having said that, and drawing on my open science thread from last week, I do think Substack (and platforms like it) can play a key role in helping the already very active, thank you very much, open science movement reckon with the upstream impacts of more science being available to more people by helping provide a trusted analysis layer on top of open science. Some of the top science and technology Substacks already do this to great effect, and there is clearly more work that can be done to connect the open academic publishing movement to these layers of more public-oriented curation and analysis that are emerging on top of it.
Anyway, if you're a Substack employee or executive and want to save yourself a bunch of time and money, please get in touch to talk about this.
I didn't expect the open science thread to converge with my thread on how rational people struggle to respond to hostile institutions quite so soon, but sometimes the algorithms deliver. I just saw a LinkedIn post that, in response to OMB Director, Project 2025 architect, and self-described Christian Nationalist Russell Vought comparing the NIH to a failed business on CNN, suggested that he read a journal article assessing the NIH's incredibly positive returns on investment. I won't link to the post because I don't know the person who made it and don't want to pick on them, but, well, that's exactly the kind of cognitive trap well-meaning, rational people need to train themselves out of in this moment. If this were some random George W. Bush appointee, you might get somewhere by appealing to the evidence. I'm no Dubya apologist, but he actually did seem to care about U.S. research institutions as long as he could fund them while keeping the wealthy's taxes low enough.
Russell Vought, on the other hand, is on a very public mission to destroy the "administrative state," which includes institutions like the NIH. He knows perfectly well that the NIH is a great investment for the country and the world. He does not care about progress or science, and as a result almost any argument he makes to justify cuts to it is, by definition, going to be made in extraordinarily bad faith. We cannot keep responding to people like this with "well, actually" alone. In fact, people like him want us to get mired in a debate about the value of the NIH, because it distracts everyone from his real goal: not to make the NIH more efficient or effective, but to destroy it.
The best way to deal with bad faith actors is to discredit them, ideally ahead of time, by exposing both the fact that they're arguing in bad faith and the tactics they're using to manipulate people. Of course, open science activists also need to make the argument for the value of science to the public at large – but on their terms, not the terms of actively hostile institutional leaders.
Finally, a quick follow-up on last week's "vibe productivity" thread. A friend who works in management at a large manufacturing company noted that at big companies, the "artifacts" I accused of being simulacra of productivity could be considered a form of insurance, essentially evidence that workers are acquiring critical domain-specific knowledge as they do their productive work. He argued that replacing the workers who produce these artifacts by, you know, doing the actual work required to acquire specialized knowledge, with generative AI that produces similar artifacts but does not acquire any of the actual knowledge in doing so, could be catastrophic in scenarios where the business actually needs to access that knowledge.
I agree, and was too flippant in my blanket dismissal of these artifacts. In fact, I think they can be more than just insurance. In well-run companies, the exercise of producing them also serves as a way to de-risk the production of new products and features in the first place. But I think my main point still stands: in too many companies, these artifacts have progressed from being a by-product of productivity to a symbol of it, and often have even become the product itself.
That's how you get posts like this one that's making the rounds from a product lead at Google's Gemini. It sounds reasonable on the surface. Many product organizations have already ditched the formal Product Requirements Document (PRD) in favor of more iterative, prototype-driven discovery processes precisely because PRDs have a tendency to become the output itself. But think about what “writing was a proxy for clear thinking” really means in this context. The main way to prompt an LLM to build a prototype is to, well, write prompts. And LLMs are so unpredictable that “prompt engineering” is a skill now, and PMs are finding that the most effective way to (sigh) “vibe prototype” is to…feed the LLM a really tightly written PRD.
At @Google, we are moving from a writing‑first culture to a building‑first one.

Writing was a proxy for clear thinking, optimized for scarce eng resources and long dev cycles - you had to get it right before you built.

Now, when time to vibe-code prototype ≈ time to write PRD,…

— Madhu Guru (@realmadhuguru) July 29, 2025
If Guru’s assertion is true, Google is not becoming a “build first” culture. It’s becoming a “write for bots” instead of “write for humans” culture. That’s fine. I’m actually all for using LLMs to prototype. It may be one of the only truly valuable things they do. But in confusing the work of figuring out what to build with the output of a prototype, Guru is revealing that he is either producing very little actual value or, to my friend’s point above, exposing his division to a huge amount of risk by replacing product documentation written for humans with prompts.