Policy on the use of artificial intelligence technologies
The journal's policy on the use of artificial intelligence (AI) tools is based on the position statements of COPE, WAME, and the JAMA Network, the ICMJE recommendations, the requirements of the EU Artificial Intelligence Act, the Concept of the Development of Artificial Intelligence in Ukraine, and the Law of Ukraine "On Academic Integrity".
Generative artificial intelligence and AI-assisted technologies are growing in popularity, and authors are expected to use them increasingly often. The journal's editorial board has therefore established this policy on the use of AI. These rules aim to ensure greater transparency and to improve the quality of publications for authors, reviewers, editors, and readers. The editorial board will monitor the development of AI technologies and will adjust or refine these rules as needed.
For Authors
If authors use generative AI and AI-enabled technologies in their writing, these technologies should be used only to improve readability and correct grammatical errors in the work. The use of AI technology should remain under human supervision and control, and authors should carefully review and edit the output, as AI can produce authoritative-sounding output that is incorrect, incomplete, or biased. Authors are ultimately responsible for the content of the work.
Authors should disclose the use of AI and AI-enabled technologies in their manuscripts, and a corresponding acknowledgement should appear in the published work. Disclosure of the use of these technologies supports transparency and trust between authors, readers, reviewers, and editors, and promotes compliance with the terms of use of the relevant tool or technology.
Authors should not list AI or AI-enabled technologies as authors or co-authors, nor cite AI as an author. Authorship entails duties and tasks that can only be assigned to and performed by humans. Each (co)author is responsible for ensuring that questions about the accuracy or integrity of any part of the work are properly investigated and resolved, and authorship requires the ability to approve the final version of the work and to agree to its submission. Authors are also responsible for ensuring that the work is original and does not infringe the rights of third parties. All authors should review our publication ethics policy before submitting.
Use of Generative AI and AI Tools in Figures, Images, and Illustrations
We do not allow the use of generative AI or AI tools to create or modify images in submitted manuscripts. This includes enhancing, obscuring, moving, removing, or adding a specific feature within an image or figure. Adjusting brightness, contrast, or color balance is acceptable as long as it does not obscure or eliminate any information present in the original.
The only exception is when the use of AI or AI tools is part of the research design or research methods (for example, AI-based visualization approaches used to generate or interpret the underlying research data, such as in biomedical imaging). In such cases, the use of AI should be acknowledged and described appropriately in the text of the manuscript. This description should explain how AI or AI tools were used in generating or modifying the image and should state the model or tool name, version and extension number, and manufacturer. Authors should follow the specific terms of use of the AI-based software and ensure proper attribution of content. To enable editorial review, authors may be asked to provide the pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions.
The use of generative AI or AI-powered tools in the creation of artwork, such as graphical abstracts, is prohibited. The use of generative AI in the creation of cover art may be permitted in some cases if the author obtains prior permission from the editor and publisher of the journal, can demonstrate that all necessary rights to use the relevant material have been obtained, and ensures proper attribution of the content.
For Reviewers
When a researcher is invited to review another researcher’s article, the manuscript should be treated as a confidential document. Reviewers should not upload the submitted manuscript or any part of it to a generative AI tool, as this may violate the confidentiality and proprietary rights of the authors, and if the article contains personal information, may violate data privacy rights.
This confidentiality requirement extends to the reviewer's report (review), as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their review to an AI tool, even if only to correct grammatical errors or improve readability.
Reviewing is the foundation of the scientific ecosystem and the Editorial Board adheres to the highest standards of integrity in this process. Reviewing a scientific manuscript involves a responsibility that can only be assigned to humans. Reviewers should not use generative AI or AI-enabled technologies to assist in the scientific review of a paper, as the critical thinking and original evaluation required for review are beyond the scope of this technology, and there is a risk that this technology will lead to incorrect, incomplete, or biased conclusions about the manuscript. The reviewer is responsible for the content of the review.
The Editorial Board's AI policy for authors states that authors are permitted to use generative AI and AI-enabled technologies in the writing process prior to submission, but only to improve the readability and correct grammatical errors of their paper, and with appropriate disclosure.
For Editors
The submitted manuscript should be treated as a confidential document. Editors should not upload a submitted manuscript or any part of it into a generative artificial intelligence tool, as this may violate the confidentiality and proprietary rights of the authors and, if the article contains personal information, may violate data privacy rights.
This confidentiality requirement applies to all communications about the manuscript, including any decision messages or letters, as they may contain confidential information about the manuscript and/or the authors. For this reason, editors should not upload their letters into an artificial intelligence tool, even if it is only for the purpose of improving language and readability.
Managing the editorial review of a scientific manuscript involves a responsibility that can only be assigned to humans. Generative AI or AI-powered technologies should not be used by editors to assist in the process of evaluating or making decisions about a manuscript, as the critical thinking and original judgment required for this work are beyond the scope of this technology, and there is a risk that the technology will lead to incorrect, incomplete, or biased conclusions about the manuscript. The editor is responsible for the editorial process, the final decision, and the communication of it to the authors.
The Editorial Board's AI policy for authors states that authors are permitted to use generative AI and AI-powered technologies in the writing process prior to submission, but only to improve the readability and correct grammatical errors of their article, and with appropriate disclosure. If an editor suspects that an author or reviewer has violated the AI policy, they should report this to the Editorial Board.