Generative AI Policy

1. Purpose

The growing integration of Generative Artificial Intelligence (AI) into academic writing and research processes necessitates clear ethical guidelines. This policy aims to promote transparency, uphold scholarly integrity, and maintain responsible publication practices in alignment with Scopus requirements, the standards of the Committee on Publication Ethics (COPE), and other internationally recognized research ethics frameworks.

2. Definition of Generative AI Tools

Generative Artificial Intelligence (AI) tools are defined as software or computational systems capable of automatically producing text, numerical output, code, data simulations, visualizations, or analytic results through machine learning or related algorithms. These tools may include, but are not limited to:

  • Natural language generation platforms such as ChatGPT, Gemini, Claude, and Copilot;
  • AI-assisted statistical computing and modeling tools, including MATLAB AI modules, Python code autogeneration systems, and automated statistical modeling features in R;
  • Generative systems for producing statistical figures, diagrams, or other visual representations, particularly when used to generate non-empirical illustrations.

3. Authorship and Accountability

  • AI tools cannot be listed as authors and do not qualify for authorship according to Scopus and COPE guidelines.
  • Authors must maintain full responsibility for the research’s content, methodology, interpretation, and conclusions.
  • Any ideas, outputs, or text produced with AI must be critically evaluated, edited, and verified by the authors.

4. Permitted Uses of Generative AI

Generative AI may be used strictly as a supportive tool in the preparation and refinement of scholarly work submitted to Variance: Journal of Statistics and Its Applications. Acceptable uses include language-related assistance, such as improving grammar, spelling, clarity, and structure, as well as providing translation support without altering the substance or academic meaning of the content.

The use of AI for concept exploration, summarization of non-protected or publicly accessible information, or general idea generation is permitted only as an ancillary aid and shall not replace the authors’ original scholarly contribution. Generative AI may also be employed to create non-empirical illustrations, such as conceptual figures or graphical explanations that do not represent real or simulated data, provided that such visuals are clearly labeled and do not misrepresent empirical findings. Under all circumstances, AI usage must remain transparent, ethically compliant, and supportive rather than substitutive of the authors’ intellectual effort and responsibility.

5. Prohibited Uses of Generative AI

Authors are strictly prohibited from:

  • Generating research results, data, experiments, simulations, or statistical findings solely with AI, without full human verification.
  • Using AI to create fabricated references, citations, or invented data.
  • Using AI to produce a manuscript or large portions of text without disclosure.
  • Using AI-generated content that violates copyright or constitutes plagiarism.

6. Disclosure and Transparency

If generative AI tools are used in preparing the manuscript, authors must disclose this usage in a dedicated statement at the end of the manuscript, for example:

AI Usage Disclosure Statement:

Language editing, idea generation, or code assistance for portions of this manuscript was supported by [name and version of AI tool], used solely for [e.g., grammar editing, coding assistance]. All analytical results, interpretations, and conclusions were produced and verified by the authors.

Failure to disclose AI use may result in rejection, retraction, or reporting to Scopus/COPE.

7. Data Integrity and Verification

If AI-assisted code or statistical tools are used:

  • Authors must provide complete reproducible code, dataset access (if applicable), and model descriptions.
  • Generated models must be transparent, reproducible, and scientifically valid.
  • Synthetic data produced by AI must be explicitly labeled as synthetic and justified.

8. Ethical Compliance and Plagiarism

  • AI-generated content is subject to the journal’s plagiarism screening procedures.
  • Manuscripts exhibiting high similarity due to AI-generated content, deceptive paraphrasing, or fabricated citations will be treated as cases of ethical misconduct.

9. Editorial Use of AI

  • The journal editorial board may use AI tools only to improve workflow efficiency (grammar suggestions, text screening, metadata generation).
  • Editorial decisions, peer reviews, and acceptance/rejection outcomes will not be made by AI systems.

10. Policy Enforcement and Sanctions

Violation of this AI policy may result in:

  • Immediate rejection of the manuscript
  • Retraction of published articles
  • Notification to authors’ institutions and/or funding bodies
  • Reporting misconduct to Scopus and COPE

11. Effective Date and Review

This policy becomes effective in January 2026 and will be reviewed annually in light of updates to Scopus requirements and international publication ethics standards.