General purpose and scope
This policy sets out the principles for the ethical and responsible use of generative artificial intelligence (AI) by all participants in the publishing process – authors, reviewers, editors, and members of the journal’s staff. Its aim is to ensure academic integrity, transparency, and the reliability of research results in accordance with international standards and the provisions of the Law of Ukraine “On Education”.
Use of AI by authors
Authors may employ generative AI tools (such as ChatGPT, Gemini, Copilot, DALL·E, or Midjourney) only as supportive aids, not as substitutes for original scholarly contribution. Permitted use includes:
The following practices are prohibited:
Transparency of AI use
Any use of AI must be clearly disclosed within the article, in the “Acknowledgements” or “Materials and Methods” section as appropriate.
The author bears full responsibility for the accuracy of all information, regardless of whether AI contributed to its creation.
Use of AI by reviewers and editors
Reviewers may use AI only for minor linguistic or technical tasks (e.g., checking bibliographic formatting).
Using AI to produce the review itself is strictly prohibited.
Editors monitor compliance with this policy and may request clarification or reject a submission if violations are identified.
Content created or modified by AI
If an article includes any text, images, tables, or other elements created or edited with AI, the authors must specify:
Submissions containing fabricated or unreliable data will be rejected or retracted.
Academic integrity
AI use must comply with:
Authors must respect copyright and must not disclose confidential or personal data without permission.
Sanctions
Violation of this policy may result in:
Policy updates
This policy is reviewed annually and updated in line with technological developments and evolving international publishing standards.