How artificial intelligence (AI) fuels scientific fraud
Researchers and cybersecurity specialists are raising the alarm: a wave of scientific articles produced using artificial intelligence (AI) is flooding academic publishing.
Translated from The Epoch Times; original article by Autumn Spredemann.
According to these researchers and cybersecurity specialists, the phenomenon threatens the very credibility of research by boosting a long-standing scourge: "article factories".
These commercial entities, known as paper mills, mass-produce fake research papers and, for a fee, add researchers' names to the author lists. AI now acts as an accelerator of these practices.
For many experts, simply improving detection software is no longer enough: the entire academic system needs to be rethought.
A silent epidemic
The scale of the phenomenon is staggering.
According to Nature Portfolio, more than 10,000 scientific articles were retracted worldwide in 2023.
The University of Borås in Sweden has observed a proliferation of manuscripts written with large language models (LLMs) across many disciplines and on platforms such as Google Scholar.
A study published by Nature Portfolio shows that tools like ChatGPT, Gemini, or Claude can produce articles that look credible and easily pass plagiarism filters.
In May, Diomidis Spinellis, a professor of computer science at the Athens University of Economics and Business, published an investigation after discovering that his name appeared, without his knowledge, on a falsified article in the Global International Journal of Innovative Research.
Of the 53 texts examined, only five showed traces of human writing; the other 48 had a high probability of having been generated by AI.
The risk of manipulation
Swedish researchers have identified over a hundred suspicious articles on Google Scholar. Google declined to comment.
For the team, the danger goes beyond simple fraud: the dissemination of fabricated work opens the way to strategic manipulation of knowledge.
"The risk of what we call evidence hacking increases sharply when these studies circulate on search engines," explains Björn Ekström, co-author of the study.
"Erroneous results can spread, influence public policy, and distort other areas of research."
Even when retracted, these articles add to the burden on the peer-review system, which is already under constant strain.
"The flood of artificial studies will have particularly serious consequences in areas that affect humans," warns Nishanshi Shukla, an AI ethics researcher at Western Governors University.
She points out that AI does not replace critical analysis:
"When research is entirely produced by machines, the risk is a homogenization of knowledge. In the short term, all studies tend to resemble each other, reproducing the same biases and blind spots. In the long term, knowledge production becomes a closed circle, devoid of genuine human thought."
Science drowned in noise
Michal Prywata, co-founder of the AI company Vertus, compares the current situation to a denial-of-service attack:
"Real researchers are drowning in the noise, reviewers are overwhelmed, and citations are being filled with fabricated references. Genuine scientific progress is becoming harder to distinguish."
He points out that language models do not "think":
"These are pattern recognition systems, extremely effective at producing text that appears coherent—exactly what's needed to give the illusion of a credible article."
Nathan Wenzler, director of IT security at Optiv Security, is also concerned about the erosion of trust:
"As bogus studies appear in respected journals, the credibility of research erodes."
According to him, universities face an additional threat: intellectual property theft.
"Cyberattacks are now directly targeting academic work, which some states then reuse as if they were the authors."
The pressure to publish: the driving force behind the fraud
For Nishanshi Shukla, the root cause lies in the race to publish:
"When a researcher's career depends on the number of articles and citations, the temptation to use AI becomes strong."
The International Science Council agrees: the "relentless pressure" to publish fuels fraud and lowers standards.
"If nothing changes, research risks losing rigor, particularly in crucial areas such as medicine, technology, and climate," the organization warns.
Michal Prywata adds that this falsified data could then be fed into the training of the AI models themselves, creating a misinformation loop.
"There need to be real consequences."
Changing the incentives
According to him, the problem will not be solved with better AI detectors:
"It's a losing battle. Tools have already been designed to fool the detectors."
He calls for an overhaul of the incentive system:
"Stop rewarding the quantity of articles. Fund based on the quality of citations and real impact; and hold institutions accountable for the work published under their name."
Towards a renewed peer review system
Peer review remains the gold standard, but it is running out of steam: overload, lack of time, chronic fatigue… and now the intrusion of AI.
Michal Prywata argues for a profound reform:
"Make the proofreading process transparent, identify the proofreaders, and compensate them for their work."
"We must stop relying on volunteers to ensure the quality of research," he concluded.