Tools like ChatGPT threaten transparent science; here are the ground rules for their use.


ChatGPT threatens the transparency of science-based methods. Credit: Tada Images/Shutterstock

It has become clear that artificial intelligence (AI) is gaining the ability to generate fluent language, producing sentences that are very difficult to distinguish from human-written text. Last year, Nature reported that some scientists were already using chatbots as research assistants, helping them to organize their thinking, provide feedback on their work, write code and summarize research articles (Nature 611, 192-193; 2022).

But the release of the AI chatbot ChatGPT in November brought the potential of tools known as large language models (LLMs) to a much wider audience. Its developers, OpenAI in San Francisco, California, have made the chatbot free to use and easily accessible to people without technical expertise. Millions are using it, and the result has been an explosion of fun and sometimes frightening writing experiments, fueling growing excitement and unease about these tools.

ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams, and generate helpful computer code. It has produced research abstracts good enough that scientists found it hard to tell a computer had written them. Worryingly for society, it could also make it easier to produce spam, ransomware and other malicious output. Although OpenAI has tried to put guardrails on what the chatbot will do, users are already finding ways around them.

A major concern in the research community is that students and scientists could pass off LLM-written text as their own, or use LLMs in a simplistic fashion (for example, to conduct an incomplete literature review) and produce unreliable work. Several preprints and published articles have already credited ChatGPT with formal authorship.

That is why it is high time that researchers and publishers laid down ground rules for the ethical use of LLMs. Nature, together with all Springer Nature journals, has formulated the following two principles, which have been added to our existing guide for authors (see go.nature.com/3j1jxsw). As Nature's news team reports, other scientific publishers are likely to adopt a similar stance.

First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

Pattern recognition

Can editors and publishers detect material produced by LLMs? Right now, the answer is 'possibly'. The raw output of ChatGPT can be discerned on careful inspection, particularly when more than a few paragraphs are involved and the subject relates to scientific work. This is because LLMs generate word patterns based on statistical associations in their training data and the prompts they are given, which means that their output can appear bland and generic, or contain simple errors. Moreover, they cannot yet cite sources to document their outputs.

But in the future, AI researchers may be able to get around these problems: there are already experiments connecting chatbots to source-citing tools, for instance, and others training chatbots on specialized scientific texts.

Some tools promise to spot LLM-generated output, and Nature's publisher, Springer Nature, is among those developing technologies to do this. But LLMs are improving, and fast. There are hopes that the creators of LLMs will be able to watermark their tools' outputs in some way, although even this might not be technically foolproof.

From its earliest days, science has operated by being open and transparent about methods and evidence, regardless of which technology is in vogue. Researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner.

That is why Nature is setting out these principles: ultimately, research must have transparency in methods, and integrity and truth from authors. This, after all, is the foundation on which science depends to advance.
