
Artificial Intelligence

AI in Scientific Research: Productivity Gains vs Quality Risks


Securities.io maintains rigorous editorial standards and may receive compensation from reviewed links. We are not a registered investment adviser and this is not investment advice. Please view our affiliate disclosure.

AI As A Research Assistant

AI is a true revolution for many scientific fields, allowing the processing of data and modeling of real-life materials and situations in a way even the most powerful supercomputers could not achieve just a few years ago.

Recent examples include diverse forms of AI applied to tasks such as crystal-structure analysis and scientific image processing.

These applications usually rely on highly specialized AI models, finely trained to examine a specific class of crystal or process a unique set of images.

However, when we talk about AI, the broader public usually thinks of generalist LLMs (Large Language Models). These are currently used mostly for writing and improving text, as well as performing advanced, readable queries compared to traditional search engines.

In theory, this should apply not just to students’ essays, bad poetry, and PowerPoint presentations, but also to scientific research and published papers.

This, however, can be a double-edged sword, as explained in a recent analysis published in the prestigious scientific journal Science [1], titled “Scientific production in the era of large language models”.

In this analysis, researchers at the University of California and Cornell University observed the output of scientists utilizing LLMs compared to their previous work. They discovered that while using LLMs can improve the quality of scientific papers, it also creates a higher volume of lower-quality research, exacerbating existing problems in academia.

Summary

AI is rapidly reshaping scientific research by accelerating writing, discovery, and productivity. However, the same tools risk flooding academia with lower-quality research, challenging traditional evaluation metrics and peer review systems.

Detecting AI Use in Scientific Research Papers

The first challenge is determining how prevalent LLM usage is in scientific writing and who is using it.

Unsurprisingly, this is not data that researchers spontaneously admit to, as the tools are still new and can be error-prone, especially regarding technical data or niche topics.

The researchers compiled more than 2 million papers from large scientific databases like arXiv, bioRxiv, and the Social Science Research Network (SSRN), dated from January 2018 through June 2024.

They then compared papers posted before 2023—presumed to be written by humans—against AI-generated text.

Using this data, they developed a model to detect AI usage. With this tool, they determined with reasonable accuracy which scientists are using LLMs and when they began. They then tracked the publication volume of those scientists before and after adopting the tools, and whether those papers were subsequently accepted by scientific journals.
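The study's actual detector is not reproduced here, but the general idea — contrasting word frequencies in known-human text (pre-2023 papers) with LLM-generated text — can be sketched as a simple per-word log-odds classifier. Everything below (the toy corpora, the word-frequency feature choice, the score threshold) is an illustrative assumption, not the authors' method or data.

```python
from collections import Counter
import math

def train_log_odds(human_docs, ai_docs, smoothing=1.0):
    """Estimate per-word log-odds of appearing in AI vs. human text,
    with additive smoothing so unseen words do not blow up the ratio."""
    h, a = Counter(), Counter()
    for d in human_docs:
        h.update(d.lower().split())
    for d in ai_docs:
        a.update(d.lower().split())
    vocab = set(h) | set(a)
    h_total = sum(h.values()) + smoothing * len(vocab)
    a_total = sum(a.values()) + smoothing * len(vocab)
    return {
        w: math.log((a[w] + smoothing) / a_total)
         - math.log((h[w] + smoothing) / h_total)
        for w in vocab
    }

def ai_score(doc, log_odds):
    """Sum of per-word log-odds; positive suggests AI-styled text."""
    return sum(log_odds.get(w, 0.0) for w in doc.lower().split())

# Toy corpora: pre-2023 abstracts stand in for "human", LLM output for "AI".
human = ["we measured the decay rate of the sample",
         "the assay was repeated three times"]
ai = ["we delve into a comprehensive exploration of the sample",
      "this comprehensive study underscores key insights"]

lo = train_log_odds(human, ai)
print(ai_score("a comprehensive exploration of decay", lo) > 0)  # True
```

A production detector would use far richer features and calibration, but the same contrastive logic — learn what distinguishes the two corpora, then score new papers — underlies this sketch.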

AI Impact On Scientific Research


| AI Impact Area | Positive Effect | Risk |
|---|---|---|
| Paper Writing | Improved clarity and speed | Higher volume of low-quality output |
| Literature Discovery | Broader, newer research exposure | Bias toward recent or uncited work |
| Academic Careers | Higher productivity metrics | Metrics decouple from real skill |

Higher Productivity

The first conclusion is that using LLMs boosts scientists’ productivity, at least when measured by the number of publications.

On arXiv, scientists flagged as using LLMs posted roughly one-third more papers than those who did not appear to use AI. On bioRxiv and SSRN, the increase exceeded 50%.

Given that the “publish or perish” culture dictates the career paths of most scientists, this volume increase has a serious impact on career trajectories.

Another insight is that the boost was stronger for scientists assumed to be non-native English speakers.

For example, researchers affiliated with Asian institutions posted between 43.0% and 89.3% more papers after the detector suggested they began using LLMs.

This makes sense; many scientists are technically brilliant and capable of reading English (a requirement in modern academia) but may struggle to construct clear, elegant sentences in a second language.

Widespread use of LLMs could level the playing field for non-native speakers, helping high-quality research gain international recognition regardless of the author’s linguistic fluency.

Better Discovery Of Scientific Knowledge

LLMs can also be used to find papers relevant to a specific topic, utilizing specialized AIs like Elicit, ResearchRabbit, or Scite.

A significant portion of scientific research consists of finding and reading other papers to deduce information or identify experimental protocols that can be reused in new contexts.

AIs generally favor more recent papers and place less weight on citation counts compared to traditional search engines. As such, they provide an alternative for scientists looking for new ideas or less-discussed experiments.

“People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas.”

Keigo Kusumegi, a doctoral student at Cornell University

This hypothesis could be tested in the future by checking if papers written with AI assistance possess more diverse bibliographies or are more innovative and interdisciplinary.
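One way such a bibliography-diversity test could be run — purely a hypothetical illustration, not an analysis from the paper — is to compare the Shannon entropy of each paper's cited venues: a bibliography spread evenly across many venues scores higher than one concentrated in a single journal. The venue lists below are invented examples.

```python
import math
from collections import Counter

def venue_entropy(cited_venues):
    """Shannon entropy (bits) of a bibliography's venue distribution.
    Higher values mean citations are spread across more venues."""
    counts = Counter(cited_venues)
    n = len(cited_venues)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical bibliographies: one venue label per cited reference.
narrow = ["PhysRevB"] * 8 + ["PhysRevLett"] * 2
broad  = ["PhysRevB", "Nature", "Cell", "NeurIPS", "JACS",
          "PNAS", "Science", "arXiv", "ICML", "Bioinformatics"]

print(round(venue_entropy(narrow), 2))  # 0.72
print(round(venue_entropy(broad), 2))   # 3.32
```

Comparing this metric between detected LLM users and non-users, before and after adoption, would be one concrete way to test whether AI assistance really broadens the literature a paper draws on.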

AI As A New Issue In Science & Academia

In recent years, scientific research—especially within the social sciences—has experienced a crisis of replicability.

The results of many papers cannot be reproduced by other researchers, suggesting that otherwise serious-looking studies may be flawed or even fraudulent. This has been described as an “existential crisis for science.”

Historically, complex writing—including longer sentences and sophisticated vocabulary—has been a heuristic for higher-quality research. While not foolproof, it helped distinguish expertly written research from shoddy analysis.
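As a rough illustration of that heuristic (not a method from the study), writing complexity can be approximated from mean sentence length and vocabulary richness. The scoring formula and sample texts below are assumptions chosen for clarity.

```python
import re

def complexity_score(text):
    """Crude writing-complexity heuristic: mean sentence length in words,
    weighted by type-token ratio (share of distinct words). Illustrative only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    mean_len = len(words) / len(sentences)
    type_token = len(set(words)) / len(words)  # vocabulary richness
    return mean_len * (1 + type_token)

simple = "We did a test. It worked. We did it again."
dense = ("Replicate assays corroborated the hypothesized kinetics, "
         "notwithstanding systematic attenuation across heterogeneous cohorts.")

print(complexity_score(dense) > complexity_score(simple))  # True
```

The point of the heuristic's fragility is visible here: an LLM can produce the "dense" style on demand, so surface complexity no longer separates expert writing from generated text.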

For now, however, papers written with AI assistance remain less likely to be accepted by journals.

Overall, this threatens to further decouple the metric of “papers published” from the actual talent of a researcher. Editors and reviewers may struggle to identify the most valuable submissions, especially as AI becomes increasingly efficient and human-like.

Lastly, massive volumes of “slop”—bogus but plausible-looking research papers—could be generated via AI. This risk is not limited to social media; it is a significant problem for scientific research, where reviewers’ time was already a scarce commodity before the emergence of LLMs.

What AI Means for the Future of Scientific Research

Because AI is a tool, researchers must learn to use it effectively. It is nearly impossible to ban LLMs from research labs, and detection will only become harder.

Adaptation and the productive use of AI in scientific writing will be the defining topic moving forward.

“Already now, the question is not, ‘Have you used AI?’ The question is, ‘How exactly have you used AI, and was it helpful?'”

Hiring practices in science may benefit from a return to qualitative metrics, such as in-depth interviews and technical tests, rather than relying solely on publication volume.

Similarly, reviewers and scientific journals must adapt. Potentially, systems that verify if a submission originates from a legitimate research lab before analysis may be required to block the mass production of fake papers.

Ultimately, a deep understanding of the technical elements of a paper, rather than linguistic elegance, will become the foremost element in judging quality.

Investing in AI Innovation

Investor Takeaway

AI-driven research productivity may not translate directly into higher-quality outcomes. Long-term winners will be companies enabling compute, infrastructure, and validation—not just content generation. Nvidia remains central to this thesis.

Nvidia

Nvidia has evolved from a graphics card company targeting gamers to the world’s largest company, thanks to its central role in providing AI hardware to the entire tech industry.

As a pioneer in AI-dedicated hardware, Nvidia was the first to help researchers leverage these tools. “CUDA,” a general-purpose programming interface for Nvidia’s GPUs, opened the door for uses beyond gaming, paving the road for today’s AI applications.

“Researchers realized that by buying this gaming card called GeForce, you add it to your computer, you essentially have a personal supercomputer.

Molecular dynamics, seismic processing, CT reconstruction, image processing—a whole bunch of different things.”

Jensen Huang, in an interview with Sequoia

It is likely that Nvidia hardware, either directly or incorporated into the clouds of Microsoft, Google, Meta, and OpenAI, will remain the hardware of choice for researchers.

AI capex is expected to reach as much as $200B in 2025, on top of ever-growing cumulative spending by the largest tech companies. Other electronic components, such as high-performance RAM, are now in shortage as Nvidia chip production ramps up.

While scientific research may not represent the bulk of AI compute compared to consumer or B2B uses, it could become the most impactful long-term driver, promising new alloys, medicines, and scientific methodologies.

(You can read more about Nvidia’s history, current business, and future prospects in our dedicated investment report on the company.)

Study Referenced

1. Keigo Kusumegi, Xinyu Yang, Paul Ginsparg, Mathijs de Vaan, Toby Stuart, and Yian Yin. Scientific production in the era of large language models. Science. 18 Dec 2025. Vol 390, Issue 6779 pp. 1240-1243. DOI: 10.1126/science.adw3000

Jonathan is a former biochemistry researcher who worked in genetic analysis and clinical trials. He is now a stock analyst and finance writer focusing on innovation, market cycles, and geopolitics in his publication 'The Eurasian Century'.
