Why traditional startups struggle to disrupt the academic publishing industry.
Well, some people read my last blog post about startups and academic publishing, which doesn’t usually happen. Thanks to everyone for reading, and for providing awesome feedback on Twitter.
I’m not active on Twitter for a variety of reasons, mental health and not wanting to do free work for surveillance capitalists chief among them. But I’d like to meaningfully engage with, and give folks credit for, their helpful feedback. Plus, I realized this post served as a great opportunity to experiment, in a low-stakes environment, with the very new models of evaluation we need to see become mainstream.
So, taking a page from Arcadia Science, I’m using this post to:
Collect and acknowledge the most useful feedback on the original post.
Respond to it where needed.
Give authors of helpful feedback credit for their thoughts as contributors to this Pub.
Using our Connections feature, make this Pub a “Reply” to the original article so it’s easily findable from the original.
Mint DOIs for the original piece (https://doi.org/10.21428/43418882.9eeb0e4a) and this piece (https://doi.org/10.21428/43418882.d60133f0), and add a reply relationship between them, so that contributors have a citable piece they can add to their ORCID (automatically, if set up) and CV. Once updated, the relationship will also appear in the public Crossref API, allowing this reply to be discovered and surfaced alongside the original.
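As a technical aside, once a relationship like this is deposited, anyone can check for it via the public Crossref REST API (the `works/{doi}` endpoint returns a `relation` block in its metadata). Here’s a minimal sketch in Python; the helper names are mine, and the exact relation-type key (I use “is-reply-to” below) depends on what was actually deposited:

```python
import json
from urllib.request import urlopen

CROSSREF_API = "https://api.crossref.org/works/{doi}"

def relations_for(doi: str) -> dict:
    """Fetch a work's metadata from the public Crossref API and
    return its 'relation' block (empty dict if none deposited)."""
    with urlopen(CROSSREF_API.format(doi=doi)) as resp:
        message = json.load(resp)["message"]
    return message.get("relation", {})

def is_reply_to(relations: dict, target_doi: str) -> bool:
    """True if the relation block contains an 'is-reply-to' link
    pointing at target_doi (case-insensitive DOI comparison)."""
    return any(
        rel.get("id", "").lower() == target_doi.lower()
        for rel in relations.get("is-reply-to", [])
    )
```

So, for example, `is_reply_to(relations_for("10.21428/43418882.d60133f0"), "10.21428/43418882.9eeb0e4a")` would confirm the link once Crossref has processed the deposit.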
Finally, I want to acknowledge a few shortcomings/open questions:
This is too hard to do on PubPub right now from a UX perspective. Among other things, we should make adding Connections as simple as adding a link (pretty easy, and on the roadmap). We should also track tweets about Pubs, make it simple to pull them into Pubs automatically (much harder, and probably requiring special Twitter API access and lots of funding), and link them to ORCIDs (maybe the ORCID API could be expanded to include search by “personal website”?). And we should add the ability to tweet out comments (and post them to other social networks).
Is it a good thing for the author to curate their own feedback? I’m certainly incentivized to leave out really negative comments. On the other hand, people can show up here and reply inline, or make a lot of noise on Twitter and elsewhere (or their own linked Pubs) if I really act in bad faith. But we could maybe use another layer of evaluation where someone trusted steps in and says “Gabe did a good job summarizing the feedback.”
What roles should contributors to posts like these have? I invented the “Feedback” role for this because the CRediT Taxonomy we support, as well as the other Crossref/Bibtex roles we support natively, don’t really offer a good fit.
How should we include the tweets themselves in the feedback graph I’m creating? Make each one a connection that gets deposited? Rely on Event Data? DocMaps? Relatedly, how can we archive those tweets so that if they’re deleted or otherwise disappear, the scholarly record remains intact?
How to ask for permission to include people as contributors at scale? (Note: if you want your name to be removed from the contributor list, please let me know).
How to automatically notify people that they’ve been included in something like this (probably via Twitter — would happen via ORCID if I deposited this).
What connection type should this Pub have? It’s clearly not a “review” in the sense that Crossref or Datacite means it, but it’s not just an author response, either. It’s almost like a literature review of feedback — a “feedback review” or something.
I’m sure new helpful feedback will come in over time. I can always create a new release, but it would be nice to have more of a sense via the UI that the Pub is meaningfully updated.
What should we title these things? There’s gotta be a more interesting/variable format than ‘Feedback on X’, surely.
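On the Event Data option above: Crossref Event Data exposes a public API for events (including tweets) that mention a DOI. A rough sketch of what pulling tweet events might look like; the endpoint and field names follow the Event Data API as I understand it, and the helper names are mine:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

EVENT_DATA_API = "https://api.eventdata.crossref.org/v1/events"

def tweet_events_url(doi: str, rows: int = 100) -> str:
    """Build a query URL for Twitter-sourced events about a DOI."""
    params = {"obj-id": doi, "source": "twitter", "rows": rows}
    return f"{EVENT_DATA_API}?{urlencode(params)}"

def tweet_ids(events_payload: dict) -> list:
    """Extract the tweet identifiers (subj_id) from a decoded
    Event Data response."""
    events = events_payload.get("message", {}).get("events", [])
    return [e["subj_id"] for e in events if e.get("source_id") == "twitter"]
```

This wouldn’t solve archiving (Event Data records the event, not the tweet’s content), but it’s one plausible starting point for pulling tweets into a Pub.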
Please comment on the above open questions if you have any meta-feedback.
If I have missed any useful feedback you saw on Twitter or elsewhere, please let me know by commenting on this article below or tweeting at me (I won’t respond there, but I will favorite and add them here).
If you want to respond, please do so either by commenting below or annotating any part of this pub by highlighting it and clicking on the annotation button (PubPub account required).
If you’d like to be taken off the contributor list, please comment here or ping me on Twitter.
This is the end-state I prefer, too — that’s why KFG is a non-profit, produces fully open-source tech, and has adopted a membership model. But because of the high social switching costs discussed in the piece, and the amount of coordination required, I suspect private organizations with fewer constraints will have to do the risk-taking to prove new models and provide examples that can be adopted more widely. I am also cognizant of the many academy-led orgs that have failed due to self-imposed coordination costs, and think we need to strike the right balance between startup speed and innovation on one hand, and community input, experience, and expertise on the other.
As I (broke my own rule and) wrote on Twitter, and expanded on above, I agree — it can’t be purely private labs, and we must rely on community input and expertise. That’s what our services model is designed to do — help folks in the community overcome their barriers and work with them to co-design solutions, leaning in part on the ability of private orgs to take risks and prove out models.
I completely agree, as do most observers of academic publishing I know, and as I wrote in the piece, an ideal “disrupted” system would adopt far more effective and equitable ways of evaluating impact and distributing rewards. However, as the piece discusses in detail, engendering this shift is an incredibly complex coordination problem that I believe will require not just advocacy or new metrics, but new kinds of institutions. Additionally, I would argue the proposed reward functions are a little too simple and would themselves be easily susceptible to monopolistic re-capture (see, e.g., Altmetric). Hence the focus on disrupting the underlying structure of the market, not just the reward system.
This is an important point and one I should have made explicit in my argument. The market is dysfunctional because of supply-side monopolies, which is the core of the problem, and a big reason why disruption by traditional means is so hard. That said, you could have made the same basic argument about the newspaper business at the peak of its revenue in 2005. Although the two markets are not analogous, as I made clear, I do think the example offers reason to believe the supply-side could be disaggregated by the right institutions.
Respectfully, I’m not convinced that cOAlition S or rights retention agreements do much to address the underlying incentive issues. The willingness of conglomerates to sign on to Plan S and purchase open review platforms is good evidence that they do not view them as a threat to their business models. It’s wonderful for these options to be available to researchers, and any momentum helps. But until it is easier to build a successful career with entirely open publishing models, and many examples of researchers doing just that exist, I fear these will remain options used by only the most privileged and ideological.
The lack of freedom to innovate is a good point, and one that could have been highlighted more in my piece. That said, I’m not sure the blame falls on libraries anymore. In my experience, many libraries are dropping enforcement of arbitrary rules and even renegotiating their contracts with publishers alongside their faculty to win better rights (e.g. retention rights for institutional repositories pointed out above). I also understand libraries’ ideological resistance to innovations that involve data collection and mining given their experiences watching conglomerates acquire analytics startups as they transform themselves into surveillance publishers. Non-profit, mission-driven organizations like Our Research, meanwhile, have had no problems innovating with data.
I agree, and should have made more clear, that the victims of the complex, as you call it, are concentrated among less well-funded researchers, particularly in the global south.
I’d never considered PubMed one of the primary obstacles to innovation before, but as the reply makes clear, it is a major one. Inclusion in PubMed Central is notoriously difficult to achieve, and its article requirements make it nearly impossible to deviate from the traditional journal article structure. Only last year did the service even begin piloting the inclusion of preprints.
N of 1, but it’s interesting that this entrepreneur hadn’t heard of many of the startup attempts that are common knowledge within the industry. There’s a broader commentary to be had here about the startup industry’s inability to acknowledge failure that is beyond the scope of this piece, but makes me think the academic publishing tech community should be documenting them more.
Agreed. I would also respectfully submit that punctum is not a traditional startup given its deep commitment to sustainability and values. I also think the market should demand less sacrifice of new entrants than it has of you if we’re going to change things!
Naturally, Peter Suber, one of the most astute analysts of the industry, has more to add. The point that most of the money is public is a particularly important part of the industry’s dysfunction that I didn’t cover in any detail. The point that publishers also hold a government-granted monopoly via copyright is an important one I didn’t go into, either. That said, as above, I would argue that as publishers have shifted their business models toward software, services, and analytics, the copyright monopoly has become less important to their model, hence their increasing willingness to allow self-archiving, postprints, Gold OA-style embargo periods, and so on.
Though I appreciate the thoughtfulness of this thread, I find it does not engage fully with my original piece’s warnings about overcoming the status quo. Many people have attempted to create a “generalized platform that democratizes access to truly Internet-Native research,” with varying degrees of success as businesses, but little disruption to show for it. Off the top of my head: Authorea, Experiment.com, ResearchHub, F1000…the list goes on. This is assumption #5 in my piece: that it’s a sociotechnical problem.
But Stein’s framing in terms of “social switching costs” hitched to a researcher’s three constituents (the university, funder, and library), while insightful, risks obscuring two facets of the problem. The first is that it’s really one of those constituents, the university, that imposes the switching cost, by way of the tenure, promotion, and hiring systems.
I agree that the focus should be on the universities, but I would still maintain that funders impose heavy switching costs. Even the most progressive ones with open access mandates won’t go so far as to require their researchers opt out of journal publication, even as they pay T&F-owned F1000 to host alternative platforms. There’s some truth to the argument that they can’t do this because of the universities’ role in maintaining the prestige system, but I’d like to at least see one try and dare its researchers and the institutions they work for to refuse funding.
The second missing piece here is the role of researchers’ own beliefs: Many scholars have internalized the journal prestige economy. It’s not merely a complex opportunity structure or the friction of too many moving parts. The bigger issue is that (1) scholars want to get published in Nature, which (2) universities reward in formal and informal ways, often (3) drawing on other scholars who serve every step of the tenure, promotion, and hiring processes. The problem of journal prestige lock-in has a lot to do with belief and culture, in other words. One implication is that culture change, and efforts to overhaul tenure-and-promotion and hiring criteria, are where a lot of our attention should be.
This is a great point, and one I should have made more strongly in the piece. My bias is showing here: most of the researchers I work with do not like the prestige system. As Pooley points out, though, it’s likely true that most scholars support it. That cultural barrier is a key reason why publishing startups of all stripes fail, and one they should understand before deciding to enter the market. I also agree that much of our attention should focus on advocating for culture change, which is another reason why I think traditional startups are ill-suited to the task and why I propose an organization with advocacy as a core function.
I found the initial framing around for-profit startups and private research labs a bit jarring. In my view, profit-seeking and the university system are fundamentally misaligned.
I understand this discomfort, and want to reiterate, as above, that I agree, which is why I proposed a new type of non-profit institution as a possible solution. The piece was written with a primary audience of startup founders (and their funders) considering the academic publishing market in mind, which is why I started with arguments that match the way they think.
I’ll offer a small amount of pushback that will likely be uncomfortable, though: given the academy’s internalization of prestige that Pooley points out, I argue for an all-of-the-above approach to changing culture, rather than a more limited one. Whether we like it or not, next-gen privately funded labs (some non-profit, some for-profit) will be producing an increasing amount of research and publishing it in non-traditional ways. Unlike publishers, their publishing programs are not revenue generators, which is why I feel comfortable working with them. In fact, in my experience so far, many of them are more aligned with radical visions of publishing than even the most progressive university-based publishers (and sometimes even more than independent scholar-led ones). I suspect that by trying new models in environments more conducive to risk-taking, these organizations can provide examples that reduce the anxiety around switching for the researchers at universities who are inclined to be more progressive. All to say, I think it would be unwise not to engage these folks in trying to build the system we want to see, and I think there’s enough value alignment even with many for-profits to do so without replicating the issues of the status quo.