Generative AI Policy

This policy sets out the standards for the responsible, transparent and ethically grounded use of generative artificial intelligence (AI) by authors, reviewers and editors. It is aligned with the principles of COPE (the Committee on Publication Ethics), the practices of WAME (the World Association of Medical Editors), and the academic-integrity requirements established by the Law of Ukraine “On Education”.


Use of AI by authors

Generative AI systems may be used by authors solely as auxiliary tools and must not replace the researcher’s intellectual contribution or produce scientific content on their behalf.

Permitted uses:

  • linguistic and stylistic editing, including translation;
  • creation of basic visual materials (diagrams, illustrations) that do not influence the interpretation of results;
  • preparation of draft texts that are subsequently substantially revised by the author.

Unacceptable practices:

  • generating scientific content, conclusions or interpretations of results in place of the author’s own intellectual work;
  • using fabricated data, references, citations or statistics produced by AI;
  • presenting AI-generated material as the author’s own work without full and accurate disclosure.

Authors bear full responsibility for the reliability, accuracy and legality of all data submitted, regardless of whether AI was used in their preparation.

Transparency and disclosure

In accordance with COPE recommendations, any use of AI must be clearly disclosed in the article, preferably in the “Materials and Methods” or “Acknowledgements” section. The statement should specify:

  • the name of the AI tool;
  • its version or configuration;
  • the tasks for which it was used.

Non-transparent or concealed use of AI is considered a breach of publication ethics.


Use of AI by reviewers and editors

COPE emphasises that reviewers and editors work with confidential materials; therefore:

Reviewers are permitted to:

  • use AI solely for language editing of their own review text;
  • use technical tools for checking reference formatting or basic bibliographic information.

Reviewers are prohibited from:

  • uploading the manuscript (in whole or in part) to any external AI services;
  • generating reviews, recommendations, or conclusions using generative AI models;
  • using AI to analyse manuscript content, compare submissions, or produce expert assessments.

Editors are responsible for ensuring compliance and may:

  • request clarification regarding the use of AI;
  • initiate additional checks if undeclared AI use is suspected;
  • reject a manuscript if the policy is violated or if the reliability of the content is in doubt.


Content created or modified using AI

If an author includes text, images, tables, or any other elements created or modified with AI, then:

  • this must be clearly stated;
  • the name, developer, and version of the tool must be specified;
  • the author must confirm the accuracy and reliability of all AI-produced elements.

Fabricated, artificial, or unverifiable data constitute grounds for rejection or post-publication retraction.


Compliance with academic integrity

All materials must comply with:

  • the Law of Ukraine “On Education” (provisions on academic integrity);
  • the requirements of the Ministry of Education and Science of Ukraine regarding plagiarism and falsification;
  • the COPE Core Practices;
  • WAME and Elsevier editorial policies.

Authors must uphold copyright and confidentiality and must not use personal data without lawful justification.