American Mineralogist: Journal of Earth and Planetary Materials received more citations than any other journal in mineralogy, petrology, and crystallography (Table 1), according to the Thomson Reuters Journal Citation Reports of 2015 (for citations in 2014). We expect the journal to retain that mantle in the soon-to-be-released 2016 report. The current Journal Impact Factor—a lagging indicator to be sure—still hovers near 2, but the JIF is only one indicator of journal influence. Together, the JIF and total citation numbers show that we publish a lot of papers, many of which are very well cited, some of which are not—but overall, the journal garners considerable attention, and our articles have staying power. However, while we take no displeasure at the total citations ranking, and will conclude this Editorial by using these rankings to stoke support for society-published journals, the means by which scientific quality and influence are ranked require critical review. Some well-worn indices that supposedly measure scientific influence or accomplishment are misunderstood. And we should take care not to confuse “citations” with “success”; the two may be related, but are not synonymous.

Total citations (Table 1) are probably not a completely unfair measure of journal influence; in the JCR report, they represent (by all appearances) the sum of all citations that accrue to a journal in a given year, to articles published in any year. Total citations thus take a longer view than the JIF, which represents mean citations per article calculated over a 2-year window (so the 2015 JIF report averages citations received in 2014 by articles published in 2012 and 2013). The disparity between the total citations and JIF rankings belies the commonly held view that if papers are not cited quickly, they are never cited. Unlike the JIF, though, yearly rankings for total citations are not freely available or frequently used. But why shouldn’t they be? Total citations are often used to evaluate individual articles and the scientists who write them. The H-index can also be applied to journals—and as may be intuited from the total citations rankings, the H-index for American Mineralogist is quite high (Table 1).
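To make the arithmetic explicit, the two-year JIF described above can be written (in our own notation, which is simply a restatement of the JCR definition) as

\[
\mathrm{JIF}_{2014} = \frac{C_{2014}(2012) + C_{2014}(2013)}{N_{2012} + N_{2013}},
\]

where \(C_{2014}(p)\) is the number of citations received in 2014 by items published in year \(p\), and \(N_{p}\) is the number of citable items published in year \(p\). Total citations, by contrast, sum the citations received in 2014 to articles from every publication year, which is why the two measures can rank journals so differently.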

As baseball fans, especially Fantasy League team owners, well know, it is dangerous to rely on a single statistical parameter for predictive purposes. Fantasy League teams are ranked using eight or more parameters (e.g., batting average, home runs, earned run average), but in the pre-season, Fantasy League baseball “owners” select their players using an even wider range of statistics—just as real team owners do—attempting to predict which players will perform best in the upcoming season. Measuring success in science is no different. A few scientific journals, to be sure, carry especially great influence, at least with regard to citations (Putirka et al. 2013). But no journal ranking predicts the success of any individual paper. We are pleased to note that among 330 journals classified as “Geoscience” by Thomson Reuters, American Mineralogist ranks in the top 5% with respect to total citations (Fig. 1) and H-index. These top 5% of journals account for more than a third of all citations garnered by all 330 journals (Fig. 1). Journals in the top 15% and top 42% account for 50% and 90% of all citations, respectively. (As in Putirka et al. 2013, journals and citations from the Journal of Geophysical Research are excluded, since these are treated as a single journal in the citation database.) Surely, these top 5% of journals are influential. But Earth-Science Reviews, which does not break the top 10% in total citations, has the highest H-index, while Nature Geoscience garners 20% fewer citations than American Mineralogist but has by far the highest JIF. And with the JIF there is a caveat: the JIF champ attains its status from highly cited articles in climate change and environmental science, not mineralogy or petrology [see Putirka (2013)]. Perhaps more interestingly, despite lower rankings, Canadian Mineralogist, European Journal of Mineralogy, and Mineralogical Magazine each publish articles that are cited many hundreds of times. So a well-cited paper can appear anywhere. This happens because our attention is no longer tied to journal titles—articles catalogued by the major databases, such as GeoRef and Web of Science (pretty much any journal familiar to you), are equally visible, to be used or ignored as the community sees fit.
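The cumulative shares just quoted are easy to tabulate once journal-level totals are in hand. The following is a minimal Python sketch of that bookkeeping; the function name cumulative_share and the citation counts in the example are ours and purely illustrative, not the Thomson Reuters data behind Figure 1.

def cumulative_share(citation_counts):
    # Sort journals from most to least cited, then report the running
    # fraction of journals against the running fraction of all citations.
    totals = sorted(citation_counts, reverse=True)
    grand_total = sum(totals)
    running = 0
    shares = []
    for rank, count in enumerate(totals, start=1):
        running += count
        shares.append((rank / len(totals), running / grand_total))
    return shares

# Invented, steeply skewed example (not real JCR data):
example = [50000, 30000, 12000, 6000, 3000, 1500, 800, 400, 200, 100]
for journal_frac, citation_frac in cumulative_share(example):
    print(f"top {journal_frac:.0%} of journals -> {citation_frac:.0%} of citations")

With any similarly skewed distribution, a small fraction of journals accounts for a large fraction of citations, which is the pattern described above.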

But we don’t need citation data to tell us this: many of the most important papers in our discipline (by authors such as A. Day, R. Daly, N.L. Bowen, H.H. Hess, T.N. Irvine, L.R. Wager, M.J. O’Hara, and many more) have been published in more modest or even obscure journals. But what might we say of weakly cited papers, regardless of the journal in which they appear? Cole and Cole (1972) suggest that well-cited works cite only other well-cited works, and imply that weakly cited papers represent wasted effort. In some cases, this may well be true. But Cole and Cole assess value through a bibliometric lens, and that lens is fogged. The free energy concepts of Josiah Willard Gibbs (1874–1878) and Guldberg and Waage’s (e.g., 1864) progress toward a law of mass action were foundational to the development of modern chemistry (and, nearly synonymously, mineralogy and petrology), but their works were only narrowly appreciated in the years following publication. Their ideas, however, were never completely lost from scientific consciousness and eventually grew to be monumentally influential. Citation rates in modern papers provide a hint of such potential. Highly cited papers (from Table 1, published in 2012) devote 3–9% of their citations to papers that have garnered only 1 or 2 citations (as of Dec. 2015); some highly cited papers devote up to 20% of their citations to papers that have garnered ≤1 citation/year. This is a long way from the “Ortega hypothesis,” which suggests that great science “stands upon the shoulders of mediocrity.” But, clearly, not all low-citation-rate papers are useless. Some are valuable. Some, such as those by Gibbs, Guldberg, and Waage, are revolutionary.

Unless we want to accept a high risk of grave error, we are not at liberty to judge scientific influence or importance by total citations, citation rates, or H-index alone. If you disagree, try this test: write down your selection of a dozen or so of the most influential scientists of all time (excluding the Hellenistic period and earlier); consider whether your list includes Kuhnian revolutionaries. Then compare your results with Scholarometer (http://scholarometer.indiana.edu/explore.html), a web site that uses the H-index to rank scientists. If you are a fan of the H-index, and Isaac Newton is on your list, you may want to re-think his importance. Google Scholar assigns Newton an H-index of 44, but this value is too high; many of the publications listed at Google Scholar are not Newton’s (e.g., biographies and multiple editions of “Opticks” not published within his lifetime). Let’s grant Newton a score of 44 anyway; he would rank 77th among physicists. Dozens of geochemists and geophysicists have H-indices that are higher, as do many other active scientists, sinking his overall ranking even lower (out of the top 600 according to Webometrics, http://www.webometrics.info/en/node/58). In fact, among Scholarometer’s top 100 physicists there is a notable absence of names that would be familiar to readers of introductory college physics textbooks (the top spots go instead to the likes of Ed Witten, ranked 1st with an H-index of 176, and Stephen Hawking, 15th with an H-index of 90). Albert Einstein (not listed) would rank 6th, with an H-index of 108, but his index, like Newton’s, is inflated. Among biologists, Darwin ranks 26th, with an H-index of 112 (again, an inflated number). Few pre-mid-20th century geologists are listed. Arthur Holmes has a better-than-mediocre index of 41. Wegener’s is a paltry 17 (perhaps he wasted too much time pursuing his continental drift ideas). Poor Antoine Lavoisier—his H-index is only 19; had he not been executed during the French Revolution, perhaps he might have amounted to something. Webometrics lists 669 scientists with H-indices >100 (topped by Sigmund Freud at 257). Fourteen of these come from institutes named for Max Planck, who is not listed. Richard Feynman, Paul Dirac, and David Bohm are by this measure of only middling influence, with indices between 58 and 63. Properly tabulated, Einstein and Darwin would not make the cut.
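Since so much of this discussion leans on the H-index, it is worth recalling what the number actually is: an author, or a journal, has index h if h of its papers have each been cited at least h times. A minimal Python sketch, with invented citation counts for illustration (the function name h_index is ours), is:

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Invented example: six papers with the citation counts below.
print(h_index([120, 45, 20, 8, 3, 1]))  # -> 4 (four papers cited at least 4 times)

Note that this single number says nothing about which of those papers, if any, changed the trajectory of a field.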

Not yet convinced? Let’s try Scholarometer’s rankings of various disciplines, by (what else?) mean H-index among practitioners. “Obesity” (19th) outranks Physics (32nd); organic geochemistry ranks 50th; “paleobotany” ranks a respectable 44th; climate is 38th, but climate modeling is 75th (perhaps this makes sense). There are no listings of anything with “Earth,” “Planet,” or “Mineral,” leaving our discipline somewhere behind “training” (28th) and “dermatology” (94th). To be fair, dermatology likely encompasses skin cancer research, so is arguably more important than, say, geothermometry. Conceivably this is reasonable to some; one might argue that Newton’s Principia and Opticks merely showed promise—if only he hadn’t wasted most of his time on biblical prophecy. Another possibility is that the H-index doesn’t mean what we think it means, and perhaps counts for very little.

To an informed baseball fan, though, this all makes sense. Who among Sandy Koufax, Don Drysdale, Orel Hershiser, or Don Newcombe was the best pitcher? Some fans might be so bold as to choose a non-Dodger. But there’s no right answer. Do we emphasize total wins, wins in the World Series, ERA, post-season ERA, complete games, no-hitters, or strikeout-to-walk ratios? All are useful. But none predict which pitchers changed the game or led their teams to championship dynasties. Whether we rank journals, scientists, or baseball players, the problems are the same. For baseball, it’s actually a tad easier, since at the end of each game, a team or player earns a “W” or an “L”, and as Nate Silver (2012) notes, baseball is “data rich,” with a very long history of fans attempting to rank players and predict success. Ironically, in the more difficult case of evaluating science, we use fewer parameters in our model.

In the sciences, our “wins” are inflections—changes in the trajectories along which we practice science, or understand the natural world. The more distinct the inflection, the more important or influential a given article, scientist, or body of work. We haven’t even begun to test whether or how citations could predict (at least looking backwards) recognizable scientific inflections. And philosophers of science are still mostly stuck evaluating highlight reels, e.g., Copernicus, Newton, Einstein, etc., even though inflections occur in all sciences. Some inflections are small; larger examples qualify as Kuhnian revolutions or paradigm shifts. But science has yet to find its equivalent of a Bill James—the baseball statistician who was the first to use a wide range of non-traditional statistics to predict team wins and individual performance. He originated “sabermetrics,” which is kind of like “bibliometrics,” except that it actually works.

In our pre-Bill James era, we judge science using a few bibliometric parameters like JIF, total citations, and H-index, as if they have intrinsic value, just as baseball fans used to use RBI and ERA. It’s like rooting for your favorite team to lead the league in home runs, with no concern as to whether or not they reach the playoffs. Which do you prefer: a paper with lots of citations that is ultimately proven wrong, or a paper with few citations that decades later is appreciated for its prescience? Various measures, such as JIF, H-index, ERA, or RBI, implicate a form of quality. But none define it. Bibliometric devotees are still searching for better indices (e.g., Moed 2010) to rectify well-understood flaws (e.g., Seglen 1997), just as baseball fans do. Okrent (1979) invented WHIP ([walks + hits]/innings pitched) to gauge pitching prowess, and it has proved useful to Rotisserie League addicts. But baseball fans know that WHIP, the newer WAR (wins above replacement), and the dozens of other sabermetric parameters do not in isolation provide unerring scales of quality or accomplishment. Hall of Fame votes and MVP awards hinge on a wide range of numbers and also a sense of judgment. As a study in contrast, Hirsch (2005) suggested something that no Fantasy Leaguer or baseball scout would ever dream of doing: using a single number, i.e., the H-index, to assess accomplishment, in this case that of scientists. Hirsch (2005) shows that subsets of Nobel Prize winners and members of the National Academy of Sciences have similar H-indices, both ranging from about 20 to 77, and averaging 38 to 44. These ranges of H-index alone intimate a problem (not to mention the near-normal distribution). But there is also no indication that the H-index has predictive power. Hirsch (2005) suggested that his H-index would allow an unbiased judgment of scientists’ “importance, significance, and broad impact,” but as a scale, it measures none of these: the H-index cannot distinguish between a paradigm-remaking revolution and a useful but non-revolutionary career, however productive and well cited. Being in a pre-Bill James era, we have no choice but to allow informed judgment, however flawed by bias, to reign. And it can, even without reference to citations—unless we want to allow that scientific awards of the 20th century were mostly granted in error.

So why are society-published journals, like American Mineralogist, worth supporting? Part of the answer lies in the late 19th century, when the beginnings of modern chemistry, mineralogy, and petrology hinged upon the barest recognition of what is now deemed essential and revolutionary work, by Gibbs, Guldberg, Waage, Van Laar, Boltzmann, and others, who were not lavishly celebrated in the immediate wake of their most influential publications (and a host of other scientists who contributed to such work but were never celebrated). Modest, society-published journals have long been the home of lasting theories that form the foundations of our modern work (Chang 2004; Putirka 2015). American Mineralogist, among other society-published journals, allows the scientific community to maintain such conversations. Besides this essential service, there are other reasons to support MSA. The society is a non-profit publisher: with no shareholders to satisfy, we can sell books at the cost of printing and publish journals at the cost of production. Short courses are organized at the cost of organization, and grants are awarded to promising students. Your MSA membership fees support these activities, as does an army of unpaid volunteers: reviewers, Associate Editors, MSA Lecturers, RiMG and Elements guest editors, and MSA council members. So take solace in the fact that there is no intrinsic advantage to having your paper published next to an article on genome editing, nor any disadvantage to supporting a society-published journal—provided scientists do not lose their sense of judgment, good science will be recognized wherever it is published.