Scientific publishing is a necessary component of scientists’ efforts to establish a career in science for themselves. The process allows for scientists’ peers to enrich new findings by reviewing papers, curates research from scientists across the world, and signals the credibility of some scientific work.
One important drawback of the prevailing publishing model is that it exacts a steep price for this value addition – in the form of substantial subscription fees or substantial article processing charges (APCs). These models put those operating with smaller research budgets at a considerable disadvantage, limiting their ability both to publish in ‘good’ journals and to access new research.
Why are radical options required?
Many calls for open-access publishing continue to identify ways to make science more accessible by increasing the funds available to pay subscription fees or APCs. But doing so will only reinforce the current market model, which has been cornered by a few publishing houses, and slow innovation in the way we disseminate science.
It may be wiser to create new alternatives to publishing houses themselves rather than determine how to fund more open-access publishing within the existing system. For this to happen, science societies and academies could cooperate on a novel way to evaluate scientific work.
Currently, publishing houses coordinate three aspects of the publication process: peer-review, formatting, and publishing. Given recent technological advances, are publishing houses best-suited for this process?
Consider formatting, for example. Likely the simplest of these three activities, formatting is crucial to the way end-users consume the information in a scientific paper. But in this day and age, journals’ standard format may actually be a constraint. Papers are typically required to have sections called ‘abstract’, ‘introduction’, ‘methods’, ‘results’, and ‘discussion’, in that order. There is an implicit expectation to write the paper in technical language readable only by other scholars working on the same topic.
Instead, perhaps a short video of the scientist performing her experiment may better communicate the impact of her work to her peers and the public at large. Or a comic strip could help non-experts get interested in the work. Or perhaps a slide deck, like the ones consultants use, could better entice funders.
Where can research be hosted?
Currently, scientists are encouraged to share their findings in a more accessible manner after their paper is published. But there is also an opportunity cost to redesigning the content once the paper has been published.
Scientists, already steeped in the next steps of their research, would have to set aside time to create these additional resources. (Some institutes have started to include time and credits for science communication and outreach but such incentives aren’t common.)
More importantly, in this age, the publishing part can be deputed back to scientists. Most researchers and research institutions have their own servers, which can host this content. Giving scientists the freedom to publish their peer-reviewed research in a format and platform of their own choosing could better help make their work more accessible.
If research institutions own the server and copyrights to their researchers’ work, they could also maintain their work in different languages.
This said, such a radical reimagination of how scientific work is published needs to reckon with an important issue: peer-review.
What is the purpose of peer-review?
Peer-review is journals’ most important value-addition to the publishing enterprise. Journals have the manuscripts they receive reviewed by experts, typically on a voluntary basis. The peer-review process is supposed to help authors strengthen their research and the way they present it. It evaluates work for originality, contributions, and design. However, peer-review as it is currently practised has devolved into a gatekeeping exercise, with journal editors using reviews to reject papers rather than to improve them.
The approach that the journal eLife recently adopted recognises this critical difference: its editors no longer accept or reject a paper once it has been reviewed. Instead, they publish “reviewed preprints” together with an internal assessment and public reviews on the journal’s website. The assessment describes the work’s contributions to a field and its design. Thus eLife has positioned itself as a medium for enriching science, not gatekeeping it.
Will other publishing houses follow suit? Journals don’t appear to be budging from their well-paying business models (some publishing houses report profit margins of up to 40%) to open up science for public consumption.
What is the impact factor?
Will authors continue to publish in other journals? The current system incentivises publication in high-impact-factor journals. The impact factor (IF) of a journal – the number of citations its papers received in a given year, divided by the number of papers it published in the preceding two years – has become critical to attracting funding or employment. But in doing so, the IF does a disservice to broader science.
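The arithmetic behind the IF is straightforward, which is part of why it is so easily over-applied. A minimal sketch of the calculation, with made-up numbers purely for illustration:

```python
# A minimal sketch of how a journal's impact factor (IF) is computed.
# All figures below are invented for illustration.

def impact_factor(citations_in_year, papers_in_prior_two_years):
    """IF for a given year = citations received that year by papers the
    journal published in the previous two years, divided by the number
    of such papers."""
    return citations_in_year / papers_in_prior_two_years

# Example: 200 papers published in 2021-22 drew 1,000 citations in 2023.
print(impact_factor(1000, 200))  # -> 5.0
```

Note that the result says nothing about any individual paper: a handful of highly cited papers can carry a journal whose median paper is rarely cited at all.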
First, its value is associated with how frequently papers from the journal get cited, and doesn’t necessarily reflect how good a paper is. Papers in high-IF journals have also been accused of scientific fraud; some of these journals also have a higher rate of retractions.
Second, it doesn’t account for the ease of doing research across laboratories. This rewards labs in countries with better research resources. But surely, from the perspective of an Indian employer operating under existing funding constraints, the ability to work resourcefully should carry more weight than having published in a coveted journal at exorbitant cost.
Third, the culmination of this process is that science remains behind paywalls, or even unpublished, as journals demand more experiments, or experiments with particular results, before deeming a paper “publication worthy”.
Fourth, scientists approach science with a view to publishing in journals, so journals have acquired a ‘licence’ to dictate research design, at the expense of simply pursuing good science. Negative results in particular are shunned, even though they can add to the pool of scientific knowledge.
A way to measure research quality
What could be an alternative? Perhaps it is time for science academies to take up the science-promotion role they were mandated with. The three academies in India can agree on a research evaluation metric – novel contribution to the field, funds used (as a proxy for resourcefulness), research design – that peer-reviewers can use to rate submitted findings.
Along with subjective feedback, reviewers can give an objective rating. Scientists can then have the option to publish their work as is or along with reviewer feedback and rating. The peer-reviewed work can be used by indices such as Google Scholar or PubMed. Any funders/prospective employers can also use the evaluation metric to best gauge the scientist’s credibility based on their research setting, and not their ability to publish in the journal of a certain stature. Scientists can initially choose to send their papers to journals in addition to the academies, since publishing in a coveted journal also harbours a sense of prestige.
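Such a metric could be as simple as a weighted combination of reviewer ratings. The sketch below is purely illustrative: the three criteria come from the proposal above, but the 1–5 scale, the weights, and all names are assumptions, not any academy’s standard.

```python
# Hypothetical sketch of the proposed evaluation metric. The criteria
# (novel contribution, resourcefulness, research design) come from the
# article; the 1-5 scale and the weights are illustrative assumptions.

def composite_score(novel_contribution, resourcefulness, research_design,
                    weights=(0.4, 0.3, 0.3)):
    """Combine three peer-reviewer ratings, each on a 1-5 scale,
    into a single weighted score."""
    ratings = (novel_contribution, resourcefulness, research_design)
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("each rating must be between 1 and 5")
    return sum(w * r for w, r in zip(weights, ratings))

# Example: a highly novel but resource-heavy study.
# 0.4*5 + 0.3*2 + 0.3*4 = 3.8
print(composite_score(5, 2, 4))
```

An objective rating of this kind, published alongside the reviewers’ subjective feedback, is what would let funders and employers compare work across journals and research settings.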
Technological advances in knowledge curation and dissemination should render obsolete the publishing models that keep science locked up. With the advent of large language models and tools for catching image fraud, even a basic level of peer-review could eventually be automated, reducing the burden on experts. The path to open science lies with our scientific institutions, and it requires a radical move to break the monopoly of publishers.
Shambhavi Naik is a researcher at The Takshashila Institution.