Description
Why traditional startups struggle to disrupt the academic publishing industry.
Well, some people read my last blog post about startups and academic publishing, which doesn’t usually happen. Thanks to everyone for reading, and for providing awesome feedback on Twitter.
I’m not active on Twitter for a variety of reasons, mental health and not wanting to do free work for surveillance capitalists chief among them. But I’d like to meaningfully engage with, and give folks credit for, their helpful feedback. Plus, I realized the feedback offered a great opportunity to experiment, in a low-stakes environment, with the kinds of new evaluation models we need to see become mainstream.
So, taking a page from Arcadia Science, I’m using this post to:
Collect and acknowledge the most useful feedback on the original post.
Respond to it where needed.
Give authors of helpful feedback credit for their thoughts as contributors to this Pub.
Using our Connections feature, make this Pub a “Reply” to the original article so it’s easily findable from the original.
Mint DOIs for the original piece (https://doi.org/10.21428/43418882.9eeb0e4a) and this piece (https://doi.org/10.21428/43418882.d60133f0) and add a reply relationship between them, so that contributors have a citable piece they can add to their ORCID (automatically, if set up) and CV. Once the deposit is updated, the relationship will also appear in the public Crossref API, allowing this reply to be discovered and surfaced alongside the original (see the sketch below for one way to check).
This isn’t a traditional academic piece, so I originally wasn’t planning to mint DOIs for either the original post or this pseudo-review. But doing so means everyone who contributed feedback has a citable piece that will automatically appear on their ORCID profile (if they accept it) and that they can put on their CVs.
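As an aside, here is a minimal sketch, not part of the original workflow, of how anyone could check whether that relationship has shown up in the public Crossref REST API. The endpoint is Crossref’s standard works API; the exact relationship label (e.g. “is-reply-to”) depends on what PubPub deposits, so treat the label as an assumption.

```python
# Minimal sketch (assumption-laden): look up this Pub's record in the public
# Crossref REST API and print any deposited relationships.
import requests

REPLY_DOI = "10.21428/43418882.d60133f0"     # this Pub
ORIGINAL_DOI = "10.21428/43418882.9eeb0e4a"  # the original post

resp = requests.get(f"https://api.crossref.org/works/{REPLY_DOI}")
resp.raise_for_status()
work = resp.json()["message"]

# "relation" maps relationship types (e.g. "is-reply-to") to lists of
# related identifiers; print whatever is actually there.
for rel_type, targets in work.get("relation", {}).items():
    for target in targets:
        print(rel_type, "->", target.get("id"))
        if target.get("id") == ORIGINAL_DOI:
            print("The reply relationship to the original post is live.")
```

If the deposit hasn’t propagated yet, the relation field will simply be empty.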
Finally, I want to acknowledge a few shortcomings/open questions:
This is too hard to do on PubPub right now from a UX perspective. Among other things, we should make adding connections as simple as adding a link (pretty easy, on the roadmap); we should track tweets about Pubs, make it simple to pull them into Pubs automatically (much harder, and probably requires special Twitter API access and lots of funding), and link them to ORCIDs (maybe their API could be expanded to include search by “personal websites”?). We should also add the ability to tweet out comments (and share them on other social networks).
Is it a good thing for the author to curate their own feedback? I’m certainly incentivized to leave out really negative comments. On the other hand, people can show up here and reply inline, or make a lot of noise on Twitter and elsewhere (or their own linked Pubs) if I really act in bad faith. But we could maybe use another layer of evaluation where someone trusted steps in and says “Gabe did a good job summarizing the feedback.”
What roles should contributors to posts like these have? I invented the “Feedback” role for this because the CRediT Taxonomy we support, as well as the other Crossref/Bibtex roles we support natively, don’t really offer a good fit.
How should we include the tweets themselves in the feedback graph I’m creating? Make each one a connection that gets deposited? Rely on Event Data (see the sketch after this list)? DocMaps? Relatedly, how can we archive those tweets so that if they’re deleted or otherwise disappear, the scholarly record remains intact?
How to ask for permission to include people as contributors at scale? (Note: if you want your name to be removed from the contributor list, please let me know).
How to automatically notify people that they’ve been included in something like this (probably via Twitter; the ORCID route happens automatically through the DOI deposit, but only for contributors with linked iDs).
What connection type should this Pub have? It’s clearly not a “review” in the sense that Crossref or DataCite means it, but it’s not just an author response, either. It’s almost like a literature review of feedback — a “feedback review” or something.
I’m sure new helpful feedback will come in over time. I can always create a new release, but it would be nice to have more of a sense via the UI that the Pub is meaningfully updated.
What should we title these things? There’s gotta be a more interesting/variable format than ‘Feedback on X’, surely.
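To make the Event Data option above more concrete, here is a hedged sketch of what relying on it might look like: querying Crossref Event Data’s public API for tweets that mention the original post’s DOI. The endpoint and parameters are the documented ones to the best of my knowledge, but Twitter coverage in Event Data has varied over time, so treat this as illustrative rather than a working pipeline.

```python
# Illustrative only: pull Twitter events about the original post's DOI from
# Crossref Event Data. Coverage of the "twitter" source has changed over
# time, so the result may well be empty.
import requests

params = {
    "obj-id": "10.21428/43418882.9eeb0e4a",  # the original post
    "source": "twitter",
    "mailto": "you@example.org",             # polite-pool contact; use your own
    "rows": 100,
}
resp = requests.get("https://api.eventdata.crossref.org/v1/events", params=params)
resp.raise_for_status()

for event in resp.json()["message"]["events"]:
    # subj_id is the tweet URL; occurred_at is when it was posted
    print(event["occurred_at"], event["subj_id"])
```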
Please comment on the above open questions if you have any meta-feedback.
If I have missed any useful feedback you saw on Twitter or elsewhere, please let me know by commenting on this article below or tweeting at me (I won’t respond there, but I will favorite your tweets and add them here).
If you want to respond, please do so either by commenting below or annotating any part of this pub by highlighting it and clicking on the annotation button (PubPub account required).
If you’d like to be taken off the contributor list, please comment here or ping me on Twitter.
This is the end-state I prefer, too — that’s why KFG is a non-profit, produces fully open-source tech, and has adopted a membership model. But because of the high social switching costs discussed in the piece, and the amount of coordination required, I suspect private organizations with fewer constraints will have to do the risk-taking to prove new models and provide examples that can be adopted more widely. I am also cognizant of the many academy-led orgs that have failed due to self-imposed coordination costs, and think we need to strike the right balance between startup speed and innovation on one side and community input, experience, and expertise on the other.
As I (broke my own rule and) wrote on Twitter, and expanded on above, I agree — it can’t be purely private labs, and we must rely on community input and expertise. That’s what our services model is designed to do — help folks in the community overcome their barriers and work with them to co-design solutions, leaning in part on the ability of private orgs to take risks and prove out models.
I completely agree, as do most observers of academic publishing I know, and as I wrote in the piece, an ideal “disrupted” system would adopt far more effective and equitable ways of evaluating impact and distributing rewards. However, as the piece discusses in detail, engendering this shift is an incredibly complex coordination problem that I believe will require more than advocacy or new metrics; it will require new kinds of institutions. Additionally, I would argue the proposed reward functions are a little too simple and would themselves be easily susceptible to monopolistic re-capture (see, e.g., Altmetric). Hence the focus on disrupting the underlying structure of the market, not just the reward system.
This is an important point and one I should have made explicit in my argument. The market is dysfunctional because of supply-side monopolies, which is the core of the problem and a big reason why disruption by traditional means is so hard. That said, you could have made the same basic argument about the newspaper business at the peak of its revenue in 2005. Although the two markets are not perfectly analogous, as I made clear, I do think the example offers reason to believe the supply side could be disaggregated by the right institutions.
Respectfully, I’m not convinced that cOAlition S or rights retention agreements do much to address the underlying incentive issues. The willingness of conglomerates to sign on to Plan S and purchase open review platforms is good evidence that they do not view them as a threat to their business models. It’s wonderful for these options to be available for researchers, and any momentum helps. But until it is easier to build a successful career with entirely open publishing models, and many examples of researchers doing just that exist, I fear these will remain options used by only the most privileged and ideologically committed.
The lack of freedom to innovate is a good point, and one that could have been highlighted more in my piece. That said, I’m not sure the blame falls on libraries anymore. In my experience, many libraries are dropping enforcement of arbitrary rules and even renegotiating their contracts with publishers alongside their faculty to win better rights (e.g. retention rights for institutional repositories pointed out above). I also understand libraries’ ideological resistance to innovations that involve data collection and mining given their experiences watching conglomerates acquire analytics startups as they transform themselves into surveillance publishers. Non-profit, mission-driven organizations like Our Research, meanwhile, have had no problems innovating with data.
I agree, and should have made more clear, that the victims of the complex, as you call it, are concentrated among less well-funded researchers, particularly in the global south.
I’d never considered PubMed as one of the primary obstacles to innovation before, but as the reply makes clear, it is a major one. Inclusion in PubMed Central is notoriously difficult to achieve, and its article requirements make it nearly impossible to deviate from the traditional journal article structure. Only last year did the service even begin piloting the inclusion of preprints.
N of 1, but it’s interesting that this entrepreneur hadn’t heard of many of the startup attempts that are common knowledge within the industry. There’s a broader commentary to be had here about the startup industry’s inability to acknowledge failure that is beyond the scope of this piece, but it makes me think the academic publishing tech community should be documenting those failures more.
Agreed. I would also respectfully submit that punctum is not a traditional startup given its deep commitment to sustainability and values. I also think the market should demand less sacrifice of new entrants than it has of you if we’re going to change things!