A recent study by Evangelos Pournaras from the School of Computing, University of Leeds, alerts the scientific community to the ethical challenges and impacts that AI language models, like ChatGPT, bring to science and research. This blog highlights the findings of the study and its recommendations for research ethics boards to establish a more responsible conduct of research with AI language models.

The Conundrum of AI Language Models in Science

AI language models such as ChatGPT are increasingly being integrated into various scientific tasks. They can write and debug code; write, translate, and summarize text; and generate complex responses to prompts within seconds. However, AI language models are a double-edged sword in scientific research: they raise ethical concerns because they may generate incorrect information, harmful instructions, or biased content, or pose risks to the participants and researchers involved.

Epistemological challenges posed by AI language models revolve around questions such as the reliability, transparency, and accuracy of data generated by these models. Researchers need to be cautious when using output from AI language models for literature reviews, paper writing, data collection, and experiments, so as not to compromise research integrity.

The Role of AI Language Models in Research

AI language models, like ChatGPT, can be involved in different aspects of research projects either as research instruments, for example when used to write papers, or as research subjects, when they are themselves the focus of investigation.

When AI language models serve as research instruments, the interactions between the AI models and the researchers require caution to avoid inducing biases and diminishing critical thinking. Researchers should also be aware that AI language models may produce plausible-sounding but incorrect or inaccurate content.

As research subjects, AI language models pose challenges in terms of explainability, interpretability, and accountability. Researchers and research ethics reviewers need to carefully consider the value of such research and the ethical implications involved.

Digital Assistance by AI Language Models

AI language models are being deployed to assist scientists, research participants, and reviewers, but their potential ethical risks must be addressed. Researchers should inform participants of the AI language models' limitations, the terms of use, and the potential risks involved in sharing sensitive information. Additionally, research ethics communities and regulatory bodies should maintain agreement on which AI language models are acceptable for use in research.

Ten Recommendations for Research Ethics Committees

The study suggests ten recommendations for research ethics committees as an open and evolving agenda:

  1. Keep humans accountable for every scientific practice.
  2. Employ an interdisciplinary panel of reviewers for research ethics applications involving AI language models.
  3. Document and report the use of AI language models and their version, prompts, and responses in any phase of the research (see the sketch after this list for one way to keep such a record).
  4. Identify research integrity and ethical risks when AI language models are involved as research instruments or subjects.
  5. Establish new criteria and practices to assess risk levels and mitigation actions in research designs produced with AI language models.
  6. Require researchers to report countermeasures against inaccuracies, biases, and plagiarism.
  7. Encourage research on AI language models with merit and rigorous scientific inquiry.
  8. Implement auditing protocols for input to closed, proprietary AI models.
  9. Inform participants about sharing sensitive information with AI language models and ensure data protection.
  10. Establish and maintain agreed-upon AI language models for research use.
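To make recommendation 3 concrete, here is a minimal sketch of how a research team might keep such an audit trail. The function name, the JSON Lines log format, and the fields are illustrative assumptions rather than a protocol prescribed by the study; the point is simply that the model name, version, prompt, response, and research phase are recorded for later review.

```python
import json
import datetime
from pathlib import Path

# Hypothetical log location; adjust to your project's data-management plan.
LOG_PATH = Path("llm_interactions.jsonl")

def log_llm_interaction(model_name, model_version, prompt, response, phase):
    """Append one model interaction to an audit log (JSON Lines).

    `phase` might be "literature review", "data analysis", or "drafting",
    so that reviewers can trace where in the research the model was used.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "phase": phase,
        "prompt": prompt,
        "response": response,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage: wrap whatever client call your project uses and log both sides.
# response_text = some_llm_client.generate(prompt)   # hypothetical client call
# log_llm_interaction("ChatGPT", "model-version-id", prompt, response_text, "drafting")
```

A plain append-only log like this is easy to attach to a research ethics application or a paper's supplementary material, and it keeps the documentation burden low enough to be followed in every phase of the work.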

By addressing the recommendations above, research ethics boards can safeguard research integrity, promote the responsible conduct of science, and minimize ethical risks in the rapidly evolving AI landscape. The widespread adoption of these recommendations will enable the scientific community to harness the power of AI language models while maintaining its commitment to ethical principles.

Original Paper