Generative Artificial Intelligence (AI) Policy

The International Journal of Electrical and Computer Engineering Research (IJECER) recognizes the increasing use of generative artificial intelligence (AI) tools in academic research and scientific publishing. The journal supports responsible and transparent use of AI technologies while maintaining the highest standards of research integrity, academic responsibility, and publication ethics.

1. Use of AI by Authors

Authors may use generative AI tools to assist with language editing, grammar improvement, or formatting during manuscript preparation. However, such tools must not replace the authors’ intellectual contributions or critical scientific analysis.

Authors remain fully responsible for the originality, accuracy, and integrity of the manuscript. Any content generated or assisted by AI must be carefully reviewed and verified by the authors.

If generative AI tools are used in the preparation of the manuscript, authors must disclose this use in a separate statement within the manuscript. The statement should include:

  • The name of the AI tool used
  • The purpose of its use
  • The extent of human review and editing

2. Authorship and Accountability

Generative AI tools cannot be listed as authors or co-authors. Authorship implies intellectual responsibility, accountability, and the ability to approve the final version of the manuscript, responsibilities that only human contributors can fulfill.

All listed authors are responsible for ensuring that the work is original, accurate, and complies with the journal’s ethical standards.

3. Use of AI in Figures, Images, and Illustrations

The use of generative AI tools to create or substantially modify figures, images, or graphical content is generally not permitted unless it is part of the research methodology. If AI-assisted tools are used as part of the research process, the methodology must clearly describe:

  • The AI model or tool used
  • The purpose of the AI-assisted process
  • The parameters or methods applied

4. Use of AI by Reviewers

Peer reviewers must treat all manuscripts as confidential documents. Reviewers should not upload any part of a manuscript into generative AI tools, as this may violate confidentiality, intellectual property rights, or data protection principles.

The scientific evaluation of manuscripts must be conducted solely by human reviewers. Generative AI tools should not be used to generate peer review reports or to assist in the evaluation of manuscripts.

5. Use of AI by Editors

Editors must ensure that the editorial decision-making process remains independent and based on human expertise. Editors should not upload submitted manuscripts or confidential editorial correspondence into generative AI systems.

AI tools may be used only for limited editorial-support purposes, such as plagiarism detection, language support, or technical checks, provided that author confidentiality is preserved.

6. Transparency and Disclosure

Transparency in the use of AI tools is essential to maintain trust in scholarly communication. Authors must clearly disclose any use of generative AI in manuscript preparation or research methodology.

Failure to disclose the use of AI tools or misuse of AI technologies may be considered a form of research misconduct and may lead to rejection, correction, or retraction in accordance with the journal’s Publication Ethics and Malpractice Statement.

7. Policy Updates

IJECER will continue to monitor developments in generative AI technologies and may update this policy to reflect evolving best practices in scholarly publishing.