Generative AI Policy

These policies were originally introduced in response to the emergence of generative AI and AI-assisted technologies, whose use among researchers has grown rapidly. The updated version reflects current best practices and aims to strengthen transparency and provide clear guidance for authors, reviewers, editors, readers, and other contributors. The JADS team will continue to monitor developments in this field and refine these policies as needed.

For Authors

Use of Generative AI and AI-Assisted Technologies in Manuscript Preparation

JADS acknowledges the value of generative AI and AI-assisted technologies (“AI Tools”) when applied responsibly. These tools can help researchers improve productivity, gain insights faster, and achieve more efficient outcomes. Increasingly, AI systems such as AI agents and research assistants support scholars in synthesizing complex literature, mapping research landscapes, identifying gaps, generating new ideas, and improving writing quality and structure.

Authors submitting manuscripts to this journal may use AI Tools as supportive aids, but such tools must never replace human critical judgment, expertise, or analytical reasoning. AI should always be employed under direct human supervision and evaluation. Ultimately, authors retain full responsibility for their manuscripts. This includes:

  1. Ensuring the accuracy, completeness, and objectivity of all AI-generated content, and verifying cited sources, since AI-generated references may be fabricated or incorrect.

  2. Editing, refining, and contextualizing AI-assisted text so that the manuscript authentically reflects the author’s own intellectual contributions, interpretations, and perspectives.

  3. Transparently disclosing the use of any AI Tools during manuscript preparation through an AI declaration upon submission.

  4. Safeguarding data privacy, intellectual property, and proprietary rights by adhering to the terms of service of any AI Tools used.

Responsible Use of AI Tools

Authors must review the terms and conditions of any AI Tool before use to ensure the protection and confidentiality of all submitted materials, including unpublished manuscripts and sensitive data. Particular attention must be paid to personally identifiable information. AI-generated images that reproduce copyrighted materials, depict real individuals, or contain identifiable products or brands are strictly prohibited, as are synthetic representations of voices. Authors must check all AI outputs for factual accuracy and potential bias.

Authors should ensure that any tool they use does not claim ownership or secondary rights (such as the right to train on input data) beyond what is necessary to provide the service. Similarly, tools should not impose restrictions that could prevent subsequent publication of the article.

Disclosure Requirements

Authors are required to clearly disclose the use of AI Tools during manuscript preparation in a dedicated AI declaration section upon submission. The disclosure must include the tool’s name, purpose of use, and the extent of human oversight involved. This transparency fosters trust among authors, reviewers, editors, and readers and ensures compliance with tool-specific usage terms.

Minor stylistic assistance, such as grammar or punctuation correction, does not require disclosure. However, if AI Tools are used within the research process itself, their role must be fully detailed within the Methods section of the paper.

Authorship Policy

AI systems must not be listed as authors or co-authors. Authorship entails human accountability and responsibility that AI cannot fulfill. Each author must ensure the scientific integrity and originality of the work, approve the final version before submission, and take responsibility for addressing questions related to data accuracy or validity.

Authors must also confirm that the work is original, that all contributors qualify for authorship, and that the manuscript respects third-party rights. All authors should review JADS Ethics in Publishing guidelines prior to submission.

Use of Generative AI in Figures, Images, and Artwork

The use of generative AI or AI-assisted tools to create or manipulate images in submitted manuscripts is strictly prohibited. This includes actions such as enhancing, removing, obscuring, or altering specific image elements. Adjustments to brightness, contrast, or color are acceptable only when they do not obscure or distort the original data. To ensure integrity, forensic analysis may be applied to detect image manipulation.

An exception applies only if AI-assisted methods are part of the research design or data analysis, for instance AI-based image generation or interpretation in biomedical imaging. In such cases, authors must describe the tools used in detail (including model name, version, and manufacturer) within the Methods section, ensuring reproducibility and appropriate attribution. Editors may request original or unaltered image files for verification.

AI-generated artwork is not permitted for graphical abstracts. AI-generated cover art may be permitted only with prior approval from the journal editor and publisher, provided all rights are cleared and proper attribution is given.

For Reviewers

Use of Generative AI and AI-Assisted Technologies in Peer Review

When invited to review a manuscript, reviewers must treat all submitted materials as strictly confidential. Under no circumstances should a reviewer upload the manuscript or any portion of it into a generative AI system, as doing so could compromise author confidentiality, intellectual property, or, where applicable, data privacy, particularly if personally identifiable information is involved.

This confidentiality obligation extends to the peer review report itself. Reviewers are not permitted to use generative AI tools to draft, edit, or enhance their review reports, even for stylistic or linguistic purposes. Both the manuscript and the review report must remain under the reviewer’s sole and secure management throughout the review process.

Peer review is a cornerstone of scholarly publishing, and JADS upholds the highest standards of integrity in this process. Evaluating a scientific manuscript requires human expertise, independent reasoning, and critical judgment, responsibilities that cannot be delegated to AI tools. Generative AI technologies lack the contextual understanding necessary to evaluate originality, methodological soundness, or scientific contribution, and their use may result in inaccurate or biased assessments. Reviewers therefore bear full accountability for the substance, accuracy, and fairness of their reports.

In line with JADS AI policy for authors, reviewers may encounter a disclosure statement within the manuscript indicating whether the authors used generative AI or AI-assisted tools during preparation. This disclosure, located before the reference section, ensures transparency and informs reviewers of any AI involvement that was properly declared.

JADS also employs secure, identity-protected AI-assisted technologies consistent with the RELX Responsible AI Principles. These systems may be used internally to support administrative processes such as plagiarism screening, manuscript completeness checks, and reviewer matching. Such tools operate within strict confidentiality protocols, undergo continuous evaluation for bias, and comply with all relevant data privacy and security regulations.

JADS remains committed to responsibly integrating AI-driven systems that assist reviewers and editors in managing workflows, while maintaining the ethical principles of confidentiality, data protection, and human oversight across all stages of the publication process.

For Editors

Use of Generative AI and AI-Assisted Technologies in the Editorial Process

All submitted manuscripts must be handled as strictly confidential documents. Editors are prohibited from uploading any part of a manuscript into generative AI tools, as doing so may compromise the confidentiality, proprietary rights, or intellectual property of the authors. If the manuscript contains personally identifiable information, such actions could also breach data protection and privacy regulations.

This confidentiality rule applies equally to all editorial correspondence, including decision letters, review invitations, and communication with authors and reviewers. Editors must not use generative AI tools to draft or refine such communications, even for stylistic or language improvement purposes, as these materials often contain confidential information about the manuscript and its contributors.

The peer review and editorial evaluation processes form the core of scientific publishing integrity. Managing these processes requires human expertise, discernment, and accountability, qualities beyond the scope of generative AI systems. Editors must not rely on AI tools to assist in evaluating submissions or making editorial decisions, as AI-generated analyses may introduce factual errors, incomplete judgments, or biased recommendations. The handling editor bears full responsibility for ensuring a fair, transparent, and ethical editorial process, as well as for the accuracy and appropriateness of the final decision communicated to the authors.

According to JADS AI author policy, authors may use generative AI or AI-assisted technologies during manuscript preparation, provided that such use is transparent, responsibly managed, and disclosed in accordance with JADS Guide for Authors. Editors can review this disclosure statement at the end of the manuscript, prior to the reference section. Should an editor suspect any violation of these AI use policies by authors or reviewers, they are required to notify the publisher for further investigation.

JADS employs identity-protected, in-house, and licensed AI-assisted systems that comply with the RELX Responsible AI Principles. These tools are used in limited administrative capacities, such as verifying manuscript completeness, performing plagiarism checks, and identifying qualified reviewers, while ensuring author and reviewer confidentiality. All AI tools used by JADS are rigorously tested for bias and adhere to international standards of data privacy and security.

JADS remains committed to integrating AI-driven innovations that responsibly support the editorial and peer review process. Any AI technology adopted within this framework must always preserve the confidentiality, ethical responsibility, and human oversight that define the integrity of scientific publishing.

Publication Process

JADS Use of AI and AI-Assisted Technologies in the Publication Workflow

As part of its continuous effort to enhance the publishing experience for authors, reviewers, and editors, JADS actively adopts innovative technologies that streamline the publication process. Our primary objective is to utilize advanced AI-driven systems to assist experts in maintaining rigorous publication standards, ensuring research integrity, and upholding the trustworthiness of all published content.

Current Applications of AI in the Publication Process

AI technologies are selectively integrated throughout the editorial and production workflow to support, not replace, human expertise. These applications include:

Reviewer Identification and Manuscript Matching: Editors are provided with AI-assisted tools that help identify suitable reviewers, assess submission relevance to journal scope, and detect potential duplicate submissions across platforms.

Author Support Tools: Authors have access to AI-based utilities such as Journal Finder, which recommends appropriate journals for their research. When a submission is not accepted by a chosen journal, JADS Article Transfer Service may use expert-driven algorithms to suggest alternative publication venues that align with the manuscript’s focus.

Editorial and Technical Checks: AI systems are used to perform preliminary technical validations, ensuring compliance with submission requirements, format consistency, and manuscript completeness before review.

Research Integrity Assessment: Automated tools assist in detecting ethical concerns such as plagiarism, image manipulation, or data inconsistencies, thereby reinforcing adherence to JADS publishing ethics and integrity policies.

Post-Acceptance Support: After acceptance, AI tools aid in proofreading, copy editing, and layout verification, helping to identify stylistic inconsistencies, typographical errors, or factual discrepancies prior to final publication.

Commitment to Human Oversight and Ethical Standards

While AI technologies enhance efficiency and precision in publication workflows, JADS emphasizes that all editorial and publication decisions remain under direct human supervision. AI tools serve as assistive systems that empower editors, reviewers, and production teams to make informed, evidence-based decisions while preserving the critical human judgment that defines academic publishing.

JADS remains committed to the ethical and transparent use of AI, ensuring that all implemented systems respect confidentiality, mitigate algorithmic bias, and comply with global standards for data security and responsible technology deployment.