
Resisting impact algorithms in academic publishing models

The problem with impact algorithms isn't that they exist, but that when they dominate, they coerce others into conforming to shapes designed to satisfy profit-driven needs rather than the community's needs.
Published on May 25, 2021

One of the projects I work on, DocMaps, concerns modeling peer review (and other editorial processes) in machine-readable ways so that services like preprint servers can display information about reviews to readers. It’s all part of a broader effort to move away from the inefficient and inequitable journal review model to what we call Community Publishing (also known as Publish, Review, Curate [PRC]).
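To make "machine-readable" a little more concrete, here is a minimal sketch of how a single review step might be expressed as typed data, so that a preprint server can display it without parsing free text. The type names, fields, and example URLs below (ReviewStep, assertedBy, outputs, and so on) are illustrative assumptions for this post, not the actual DocMaps vocabulary.

```typescript
// Illustrative sketch only: these types are hypothetical and are not the
// published DocMaps schema. They show the general idea of representing one
// review step as structured data rather than free text.

interface Actor {
  name: string;
  role: "author" | "reviewer" | "editor" | "curator";
}

interface ReviewOutput {
  type: "review" | "reply" | "decision";
  url: string;       // where the review text itself lives
  published: string; // ISO 8601 date
}

interface ReviewStep {
  assertedBy: string; // the service or community making the claim
  inputs: string[];   // identifiers of the work being evaluated
  actions: {
    participants: Actor[];
    outputs: ReviewOutput[];
  }[];
}

// With a record like this, a preprint server can show "reviewed by
// example-review-service on 2021-05-02" and link to the review, without
// knowing anything else about that community's workflow.
const example: ReviewStep = {
  assertedBy: "https://example-review-service.org",
  inputs: ["https://preprints.example.org/articles/123"],
  actions: [
    {
      participants: [{ name: "Anonymous Reviewer", role: "reviewer" }],
      outputs: [
        {
          type: "review",
          url: "https://example-review-service.org/reviews/42",
          published: "2021-05-02",
        },
      ],
    },
  ],
};
```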

The current model, wherein authors submit a paper to one journal, wait months for feedback, and then receive a binary accept/reject decision, is designed to serve the business models of journals, rather than the needs of science. It has myriad shortcomings for scientists and society, all described in the PRC paper above. It is, however, very good for large publishing companies, which are increasingly becoming data surveillance companies in the mold of Amazon, Netflix, and especially Facebook.

In the Community Publishing model, authors post their work openly; community members, ranging from individuals to review companies, evaluate the work (if they decide it needs evaluation at all! Not all work does); and curators, ranging from ad-hoc reading groups to journalists to societies to what we think of today as journals, evaluate the evaluations and decide whether a set of them meets their standards for qualifying the work as impactful for their purposes. This model has numerous benefits, including that different communities can decide for themselves both how to evaluate work and what counts as impactful, based on their needs.

We’re commonly asked how this model will avoid being captured by “algorithms,” meaning the same surveillance-capital business models that are busy undermining democracy and that the large academic publishing giants are now pursuing. Of course, no technology can ever fully avoid anything, and it would be naive to expect it to. But technology can be designed with core values that make it resistant to the concentration of knowledge and power currently playing out in multiple tech-disrupted industries.

We’re attracted to Community Publishing models because of the way they center community needs when considering both evaluation and impact measurement. And we’re working on building DocMaps with the same principles in mind. Rather than trying to force everyone to model their process in a single way, we provide a framework, developed with community input, that allows different communities to model their processes as they see fit. Your job as the evaluator is to model your evaluation process in the way that best communicates the value it adds to the original publication. Your job as the curator is to decide which aspects of the processes you consume are valuable for your purposes, and how to communicate that value to the readers you serve.
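As a concrete (and again hypothetical) illustration of curators applying their own standards, the sketch below reuses the ReviewStep type and example record from the earlier block. The specific criteria shown, such as requiring two reviews plus a decision, are invented for illustration and are not part of DocMaps.

```typescript
// Hypothetical curator logic: each community encodes its own notion of
// "sufficiently evaluated" rather than consuming one global score.

function meetsReadingGroupStandard(step: ReviewStep): boolean {
  // An ad-hoc reading group might only require one public review from a
  // service it recognizes.
  const trusted = ["https://example-review-service.org"];
  return (
    trusted.includes(step.assertedBy) &&
    step.actions.some((a) => a.outputs.some((o) => o.type === "review"))
  );
}

function meetsJournalStandard(step: ReviewStep): boolean {
  // A journal-like curator might require at least two reviews and an
  // editorial decision before it endorses the work.
  const outputs = step.actions.flatMap((a) => a.outputs);
  const reviews = outputs.filter((o) => o.type === "review").length;
  const hasDecision = outputs.some((o) => o.type === "decision");
  return reviews >= 2 && hasDecision;
}

// The same evaluation record can satisfy one community's standard but not
// another's; neither curator forces the evaluator to change its process.
console.log(meetsReadingGroupStandard(example)); // true
console.log(meetsJournalStandard(example));      // false
```

The design choice this is meant to illustrate: each function encodes one community's values, the evaluator's record keeps whatever shape best describes its process, and no single scoring function has to be agreed on globally.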

It’s not as easy as reducing everything to a single field-normalized score, and that’s exactly the point. The problem with impact evaluation and recommendation algorithms isn’t that they exist, but that when they dominate, they can force everyone to conform to the shape they require for their needs, usually profit-driven, rather than adapting to shapes developed for community needs, usually value-driven. In a world with multiple algorithms processing multiple shapes, there’s much less profit-motivated coercion, and much more value in analyzing impact in the first place.
