Dear JSR Community,
Unless you’ve been hiding under a rock for the past decade, you have probably heard of Artificial Intelligence (AI). Loosely defined, AI refers to computing systems that can perform tasks typically associated with human intelligence. Whether you’ve felt the advent of AI as a slowly rising tide or a rogue wave washing over you, it has impacted us all to some degree. For researchers, AI models offer exciting opportunities to streamline and enhance our science. For educators, these tools are equally exciting, but we have yet to resolve some fundamental questions about the value and risks of AI in the classroom. At the heart of conversations about AI are questions of ownership and accuracy. Do the materials generated by AI meet our standards and objectives? Should AI models be credited? And who is responsible when AI models produce erroneous content?
In a recent CBS News interview, the CEO of Google DeepMind discussed the ever-increasing role AI models play in our lives, how rapidly the technology is progressing, and what these models can and, perhaps more importantly, cannot do. One particularly poignant statement from the interview is that AI models are expected to become integrated into nearly every aspect of everyday life within the next five to ten years.
And what about the role of AI in scientific publishing? We know that AI is now routinely used to help manage and analyze large datasets, and that AI language models are employed by some authors to write and revise manuscripts. And why not? It would be backward thinking to avoid using new technology in our science. Can you imagine doing research these days without the internet or a personal computer? AI is a powerful tool, and the attraction is understandable. Technology that can help craft clearer sentences, summarize the published literature, and uncover relationships between seemingly disparate variables can be extremely useful.
But there are downsides. We know that AI is not sentient and cannot think, at least not yet. AI cannot pose unanticipated questions, derive new hypotheses, or conceive novel conjectures. AI models cannot originate ideas that have not already been articulated, nor can they experience curiosity. And this is what weighs most heavily on our minds as editors and educators: the risk of losing the pronounced benefits that come from struggling through the writing process. Writing refines our thinking, however difficult the process may be. As Schimel (2011) observed, clearer writing requires clearer thinking. Similarly, Montgomery (2003) stated, “… writing that teaches and informs without confusion emerges from a process of struggle …”. Writing pushes us to think more deeply, to confront inconsistencies in our explanations, and to search our minds for deeper scientific understanding. Writing also opens the door to new questions, new hypotheses, and novel approaches. In this sense, human intelligence has yet to be matched.
In this context, the goal of this editorial is to inform the broader community of the new JSR AI Use Policy, which is posted on the journal’s website. The policy has three key components: i) the use of generative AI is permitted, ii) any use of AI must be clearly acknowledged by the authors, and iii) any content and ethical issues resulting from the use of AI are the responsibility of the authors. First and foremost, the policy stresses that JSR authors must acknowledge their use of AI and bear responsibility for the accuracy of their work. AI programs routinely fabricate references and information, a phenomenon known as confabulation (Glynn 2024). This is unsurprising given that the prime directive of an AI model is to complete the task, not necessarily to arrive at the correct solution. It is therefore imperative that authors critically examine AI outputs to ensure accuracy. Ultimately, AI cannot be held responsible for inaccuracies, but authors can be. We also caution that overreliance on these tools poses a potential risk to scientific progress and creativity. By fully embracing AI for the very human endeavor of writing about our own research, we may lose valuable insights that emerge through the writing process itself. We know our science best, we are the ones who will develop new ideas, and we are the ones who claim authorship.
In closing, we hope the new AI policy is viewed as both pragmatic and forward-thinking. The central purpose of the policy is not to discourage the use of AI, but rather to encourage authors who choose to use AI to do so in a transparent, responsible, and ethical manner. We believe this approach will help safeguard JSR’s longstanding reputation as a premier international scientific journal and ensure the continued trust that the JSR readership has placed in the journal since 1931.
Appreciatively,
Steve Kaczmarek & Dustin Sweet (science co-editors)
Melissa Lester (managing editor)