Guidelines for Use of AI

Guidelines for the Ethical and Practical Use of Artificial Intelligence (AI) Tools in Scientific Publishing

Purpose:
This policy aims to ensure the transparent, ethical, and responsible use of Artificial Intelligence (AI) tools in manuscript preparation, peer review, and editorial processes. AI may assist researchers and editors in improving efficiency and accuracy, but scientific judgment and accountability must remain human responsibilities.

1. For Authors

  • AI tools (e.g., ChatGPT, Grammarly, QuillBot, Scite.ai) may be used to improve language quality, structure, reference formatting, or data visualization.

  • Authors must not use AI to fabricate, manipulate, or generate research data, images, or references.

  • Any use of AI that contributes to text generation or analysis must be transparently acknowledged in the Acknowledgment section (e.g., “Language editing assistance was provided using ChatGPT (OpenAI)”).

  • The intellectual content, accuracy, and interpretation of data remain the sole responsibility of the authors.

2. For Reviewers

  • Reviewers may use AI tools to assist in grammar checking, structuring feedback, or organizing their comments, provided the core evaluation and judgment are their own.

  • Reviewers must not share or upload confidential manuscript content to public AI platforms that store user data.

  • AI use should be limited to drafting or rephrasing comments; it must not be used to make publication decisions or to generate scores automatically.

  • Final comments and recommendations must reflect the reviewer’s independent expert assessment.

3. For Editors

  • Editors may use AI to screen submissions for plagiarism, figure manipulation, and grammar or readability issues, and to organize reviewer feedback.

  • AI tools must not be used to make editorial decisions automatically (e.g., accept/reject) without human validation.

  • Editors are responsible for ensuring that any AI use within the editorial workflow complies with confidentiality and data protection standards.

  • The editorial team should periodically review AI-assisted practices to ensure alignment with publishing-ethics best practices (e.g., COPE, Elsevier, Springer, or IEEE guidelines).
4. General Principles

  1. Transparency – Any substantial AI involvement should be declared.

  2. Accountability – Human users retain full responsibility for content accuracy and ethical integrity.

  3. Confidentiality – Manuscript content and reviewer reports must never be exposed to public AI models without explicit consent.

  4. Integrity First – AI tools are assistants, not authors, reviewers, or decision-makers.