Abstract
Artificial intelligence (AI), mostly in the form of chatbots based on large language models (LLMs), now permeates society at large, whether for positive, negative, trivial or even toxic purposes. AI is also increasingly used in various aspects of scientific research, from large-scale data analysis to manuscript preparation. At Tektonika, the Executive Editor's group and the Core members engaged in discussions while drafting our guidelines on the use of AI. These exchanges revealed a divide between those who see AI as bringing significant positive contributions and those who are more skeptical, fearing a loss of expertise and a decline in cognitive skills. Some of the key questions that emerged about the benefits of AI are: Do we accept or even encourage the use of AI? At what stages of the research process? What is acceptable or not for manuscript preparation or reviewing? Conversely, several challenges were also identified, prompting questions such as: Do we simply see the use of AI as inevitable, placing us in a damage-limitation exercise? Are we concerned about laziness in research and writing, or about false information? Are we able to discern whether a manuscript is AI-generated, partially or totally?
| Original language | English |
|---|---|
| Pages (from-to) | I-V |
| Number of pages | 5 |
| Journal | τeκτoniκa |
| Volume | 3 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 25 Jan 2026 |
Bibliographical note
We thank Dave Whipp for reviewing this editorial and providing constructive feedback.
Keywords
- AI
- Scientific publishing
- Tektonika