Editors of several journals in the field of hydrology met during the Assembly of the International Association of Hydrological Sciences—IAHS (within the Assembly of the International Union of Geodesy and Geophysics—IUGG) in Prague in June 2015. This event was a follow-up to a similar meeting held in July 2013 in Gothenburg (as reported by Blöschl et al., 2014). These meetings enable the group of editors to review the current status of the journals and the publication process, and to share thoughts on future strategies. Journals were represented in the 2015 meeting through their editors, as shown in the list of authors. The main points on fostering innovation and improving impact assessment in journal publications in hydrology are communicated in this joint editorial published in the above journals.

In the last few decades, the dominant practice of universities, governments, and research funding organizations in assessing individuals or research proposals has been to use the number of papers published—sometimes separating those in high-impact journals—and number of citations as the main benchmarks, rather than true innovation (including new ideas, original methods, discovery, and improved application of technology). This has resulted in consistently increasing pressure to publish in journals—the “publish-or-perish” syndrome. In turn, this has transformed the publication industry (e.g. with the creation of numerous for-profit publication vehicles) as well as the peer review system per se. Specifically, with the plethora of journals, “peer review […] is becoming a system that judges where work is published rather than whether the research is publishable (a ‘where rather than if’ process)” (Peres-Neto, 2015). In the majority of journals represented in this editorial, submissions have dramatically increased. As a response, some of the journals have increased the rate of desk rejections, i.e., rapid rejections by the editor without sending the papers out for peer review, with the objective of reducing the pressure on the review system.

It is the common agreement of all editors that the peer-review system is a key component of the publication process and essential for the scientific progress of the community. Maintaining the highest quality of the peer-review process is thus crucial. However, the system has several weaknesses. Some of its critics have characterized it in strong language, e.g., as a “non-validated charade whose processes generate results little better than does chance” (Horrobin, 2001), and a recent editorial Comment in a medical journal (Horton, 2015) stated, “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.” After completing a systematic survey of more than 1000 manuscripts submitted to three elite medical journals, Siler et al. (2015) concluded that “on the whole, there was value added in peer review,” even though “both errors of omission [rejecting a worthy article] and commission [publishing an unworthy article] were prominent.”

Another symptom of the “publish-or-perish” syndrome is that research is becoming more fragmented. The same body of research is often split into a number of papers (a tactic sometimes referred to as “salami publishing”). Such tactics may improve individuals’ citation counts and other bibliometric indices, but they also reduce their representativeness as indicators of scientific impact. The growing number of publications, the lengthening of reference lists, and the rising average number of authors per paper have all markedly increased the total number of citations in recent years. Multi-author papers are mushrooming, reaching several “kiloauthors” in some disciplines.1 Such papers may reflect large-scale collaborations within the community and therefore may be appropriate, but quite frequently their content does not justify the involvement of so many scientists. Merely sharing an opinion is not a sufficient scientific contribution to justify co-authorship of a paper.

The above transformations make the review process less efficient and amplify its weaknesses, thus making the identification of truly innovative papers more difficult, both during peer review and after publication. The poor ability to identify innovation is a known problem of the peer-review system. Scientists tend to be conservative in their assessments, i.e., they favor mainstream approaches and conventional wisdom, and are therefore less supportive of truly original research. A characteristic example is the paper by Beven and Kirkby (1979), one of the most cited hydrological papers ever (expected to exceed 5000 citations soon, according to data from Google Scholar), which was rejected by one journal before being accepted by another.2 The overloading of peers with review requests exacerbates this weakness, so that modest papers may have a low probability of rejection, while truly outstanding ideas are less likely to be recognized. A recent study showed that an increasing number of excellent papers were initially rejected (Siler et al., 2015). Likewise, published papers of outstanding quality may not always be as visible as they deserve.

We believe there is a lot the hydrological community can do to improve the situation.

(1) Increasing awareness of the publication predicament

We believe that raising the community’s awareness of these problems is a first necessary step. Awareness that the goal of science is the pursuit of truth and discovery (rather than the support of any non-scientific objectives) is essential. This is fully consistent with the objectives of the peer-review system.

(2) Change in research evaluation practice at large

In order to address one of the main causes of the “publish-or-perish” syndrome, a change in the way science is evaluated may be necessary. Rather than counting papers and citations, selection committees, promotion panels, and review panels should put the innovation and ideas in the scientific contributions of individuals and institutions at center stage. Admittedly, this may entail more extensive effort, as it requires thorough engagement with the actual scientific content. Such a change could be facilitated by the journals (editors, reviewers, authors, scientific publishers) and by bibliometric services highlighting the novelty in papers. Dedicated discussion forums and workshops are needed, perhaps during scientific conferences, and scientific associations should give recognition to scientists working toward this goal. This movement towards a better appreciation of innovation, rather than a counting of numbers, is already under way in a number of science councils and honor committees. Web publishing and web-based impact assessments will likely play a role in the future, but it remains questionable how they could help put innovation (quality) above numbers (quantity).

Besides the huge increase in publications, there is an inflation of evaluations. Research cannot and should not be measured as industrial production. Important results may require time to develop, in particular when interdisciplinary approaches are followed, and premature publication of immature work may hamper the progress of important contributions. Evaluations are necessary in cases of promotion or tenure, but they should not excessively increase the pressure on scientists.

(3) Multi-author papers and modifications in citation metrics

A large number of authors makes it difficult to judge the contribution of each individual author. Scientists should be listed as authors only if they have made a substantive contribution to the study, and the number of authors must be commensurate with the extent and importance of the study. Editors and reviewers should check whether the number of authors is justified.

The dominance of the h-index as the principal metric for evaluating individuals has been one of the drivers of the surge in multi-authored papers. However, counting each citation in full for every author introduces biases. An extreme example from physics is the article by Aad et al. (2008), in which 2926 authors describe the ATLAS detector in its experimental cavern at CERN. Its 1398 Google Scholar citations (as of 25 Jan. 2016) are counted 2926 times, resulting in a total of 4,090,548 counts. Even though citation metrics should only be a secondary criterion in research evaluation, there may be merit in modified metrics, e.g., replacing the standard h-index by a normalized index3 that distributes the total number of citations among the individual authors in some way (e.g., by assigning 0.48 = 1398/2926 citations to each author, instead of 1398, in our example). If such a modified index became the norm, it would probably help refocus collaboration among researchers on genuine scientific interaction.
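
As a purely illustrative sketch, and not a proposal endorsed by the journals or any citation service, the kind of fractional crediting mentioned above could be computed as follows; the function name and the data layout are hypothetical.

```python
# Illustrative sketch of a fractionally credited h-type index: each paper
# credits its citations divided by its number of authors, and the index is
# computed from these fractional credits instead of the full citation counts.

def fractional_h_index(papers):
    """papers: list of (citations, n_authors) tuples for one author's papers."""
    # Fractional credit per paper, ranked from largest to smallest.
    credits = sorted((c / n for c, n in papers), reverse=True)
    # Largest h such that the h-th ranked paper has at least h credited citations.
    h = 0
    for rank, credit in enumerate(credits, start=1):
        if credit >= rank:
            h = rank
        else:
            break
    return h

# The Aad et al. (2008) example: each of the 2926 authors would be credited
# with 1398 / 2926 ≈ 0.48 citations instead of the full 1398.
print(fractional_h_index([(1398, 2926), (120, 3), (45, 2)]))  # -> 2
```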

(4) Change in culture in the peer-review process toward enhanced transparency

All players in the peer-review process can help enhance the chances for outstanding papers to be published. Authors can help by practicing clarity, disclosure, and transparency of data, derivations, algorithms, argumentation, and presentation at large. Journal editors can help by clarifying the requirements for acceptance, by better defining the reviewers’ roles and responsibilities, and by allowing for diversity, e.g., by publishing negative review comments along with a paper (provided the reviewers agree and are named) and by encouraging formal discussions (comments and replies). Reviewers can help by adhering to a structured approach to evaluating papers. There is, for example, no need for a positive answer to any of these questions:

  • Do I agree with what the author says?

  • Is the paper friendly to my own research publications and ideas?

  • Does the paper comply with the body of literature I have in mind?

  • Does the paper comply with the consensus ideas on its area?

  • Does the paper help save the world (e.g. from threats and disasters)?

In contrast, an affirmative answer is needed for these:

  • Is the paper clear and correct (not ambiguous; not arguably mistaken)?

  • Is the paper important (not trivial)?

  • Is the paper new and innovative (not repeating known things, not copied)?

  • Is the paper reporting results that are sufficiently supported and may be of use for other regions, studies, or questions?

Additionally, other qualities of a paper should in fact favor publication, even though they are often regarded as reasons for rejection, for example:

  • a controversial attitude,

  • provoking discussion and thought, and

  • challenging established ideas, methods, or wisdom.

(5) Change in culture in linking research studies to each other

There is also a lot that our community can do to reduce fragmentation and contribute to knowledge building and consolidation for the community as a whole. The social and medical sciences have a strong tradition of linking individual studies by meta-analyses and evidence synthesis (Slavin, 1995; Sutton et al., 2009), and there is also increasing awareness in the physical sciences of the need for better synthesis (Jackson and Baker, 2013). In our role as editors, we aim to support synthesis efforts that build on earlier studies across all hydrology journals. There is a proposal to establish a jointly agreed protocol for meta-data that would be archived along with published papers, inspired by a similar initiative in the medical sciences (Moher et al., 2009). The protocol would apply to studies reporting on specific catchments and would include codified hydrological information (an illustrative sketch of such a record is given after the list), such as:

  • location, possibly exploiting the World Meteorological Organization (WMO) division of Earth into Regions and Subregions (Fig. 1);

  • visual information, including a map and a characteristic photo;

  • size information, such as total catchment area and longest river length;

  • elevation information, such as minimum, maximum and average altitude, and possibly hypsographic curve;

  • codified information on geological and hydrogeological characteristics and land use of the catchment;

  • seasonality of rainfall and temperature, possibly in terms of a climatogram4; and

  • characteristic flow quantities, such as multi-year average flow (in absolute terms and per unit area) and flood flows for specified return periods (e.g., 10, 100, 1000 years, whenever possible), as well as information about the manner in which this information was extracted (estimated or measured and years of measurements).
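
As a purely illustrative sketch of what a machine-readable record following such a protocol might look like, a simple structure is given below; all field names and values are hypothetical, since no community standard has yet been agreed.

```python
# Purely illustrative sketch of a codified catchment meta-data record of the
# kind discussed above; field names are hypothetical, not an agreed standard.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class CatchmentRecord:
    name: str
    wmo_region: str                     # WMO Region/Subregion code
    latitude: float                     # outlet location, decimal degrees
    longitude: float
    area_km2: float                     # total catchment area
    longest_river_km: float
    elevation_min_m: float
    elevation_max_m: float
    elevation_mean_m: float
    geology: str                        # codified geological/hydrogeological class
    land_use: str                       # codified dominant land use
    mean_annual_flow_m3s: Optional[float] = None
    flood_flows_m3s: Dict[int, float] = field(default_factory=dict)  # return period (yr) -> flow
    flow_data_basis: str = "estimated"  # "measured" or "estimated", plus years of record

# Hypothetical example entry (all values invented for illustration only).
record = CatchmentRecord(
    name="Example catchment", wmo_region="VI", latitude=48.2, longitude=16.4,
    area_km2=104.0, longest_river_km=23.5,
    elevation_min_m=160.0, elevation_max_m=890.0, elevation_mean_m=420.0,
    geology="karstic limestone", land_use="mixed forest/agriculture",
    mean_annual_flow_m3s=1.8, flood_flows_m3s={10: 35.0, 100: 62.0},
    flow_data_basis="measured, 1981-2010",
)
```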

The editors welcome suggestions from the community for such a protocol (e.g., in the form of comments on this article). Suggestions for protocols that could apply to other types of studies are also welcome.

It is likely that, over the longer term, many scientific journals (and research sponsors) will require full disclosure of all data and models used before manuscripts are accepted. This will further facilitate synthesis and enhance collaboration across research groups beyond long author lists. It will also strengthen the peer-review process, moving it beyond assessing the consistency of the results towards testing them through full repeatability of the studies (cf. Skaggs et al., 2015). Research evaluation at large will also benefit from such a development, as excellence will become easier to recognize. The willingness of individuals within the scientific community to further science by adopting transparent approaches will remain critically important.

Winston Churchill once said: “Democracy is the worst form of government, except for all those other forms that have been tried from time to time.” Similarly, the peer-review process is not perfect, but it provides a route toward unbiased, robust, and timely assessment of scientific thought before it becomes public and—importantly—before its application and use in decision support. The improvements suggested here will help enhance the peer-review process, which, despite justified criticism, remains a highly valuable voluntary community service that contributes to the value of science in society and to the reliability of scientific results. We hope that, in addition, these improvements will help the hydrological community go from strength to strength in addressing the grand water challenges of the 21st century.

5. Also published in Hydrology and Earth System Sciences (doi:10.5194/hess-18-2433-2014), Hydrology Research (doi:10.2166/nh.2014.006), Journal of Hydrology (doi:10.1016/j.jhydrol.2014.03.055), and Water Resources Research (doi:10.1002/2014WR015613).