Artificial intelligence is changing science. Now researchers must tame it.


Machine-Learning vs. Generative AI Tools: What Do Researchers Think About the Rise of Artificial Intelligence in Science, and How Does It Help?

Nature asked researchers for their views on the rise of artificial intelligence in science, including machine-learning and generative AI tools.

Around the world, artificial intelligence is being used in science education. Students at schools and universities regularly use LLM tools to answer questions, and teachers are starting to recognize that curricula and methods of pedagogy will need to change to take this into account.

Some scientists were unimpressed by the output of LLMs. One researcher who uses LLMs to help copy-edit papers wrote that it felt as though the tools had copied all the bad writing habits of humans. Although some respondents were excited by the potential of LLMs for summarizing data into narratives, others had a negative reaction. Johannes Niskanen, a physicist at the University of Turku in Finland, wrote that if we use computers to read and write, science will move from ‘for humans by humans’ to ‘for machines by machines’.

Artificial intelligence itself has also been changing. Whereas the 2010s saw a boom in machine-learning algorithms that can help to discern patterns in huge, complex scientific data sets, the 2020s have ushered in a new age of generative AI tools pre-trained on vast data sets, which have much more transformative potential.

Researchers focused first on machine learning and picked out many ways in which the tools can help their work. More than 50% of respondents mentioned the benefits of artificial intelligence, including that it provides faster ways to process data and that it speeds up computations that were not previously feasible.

“AI has enabled me to make progress in answering biological questions where progress was previously infeasible,” said Irene Kaplow, a computational biologist at Duke University in Durham, North Carolina.

“The main problem is that AI is challenging our existing standards for proof and truth,” said Jeffrey Chuang, who studies image analysis of cancer at the Jackson Laboratory in Farmington, Connecticut.

Respondents said they were concerned about faked studies, false information and the perpetuation of bias if the tools are trained on historically biased data. A team in the US reported that a chatbot’s answers to questions about patients’ diagnoses varied depending on the patients’ race or gender (preprint at medRxiv https://doi.org/ktdz; 2023), probably reflecting the text that the chatbot was trained on.

According to Degen, large language models produce inaccurate and hollow but professional-sounding results, and they are clearly being misused. The problem is where the border between misuse and good use lies.

The clearest benefit, researchers thought, was that LLMs aid researchers whose first language is not English, by helping to improve the grammar and style of their research papers, or by summarizing or translating other work. The academic community can demonstrate how to use these tools for good, even if there are a few malicious players.

The most popular use among all groups, however, was for creative fun unrelated to research (one respondent used ChatGPT to suggest recipes); a smaller share used the tools to write code, brainstorm research ideas and help write research papers.

Machine-Learning Models for Drug Discovery and Bioinformatics: What Does the Future Hold?

The principles of LLMs can be usefully applied to build similar models in bioinformatics and cheminformatics, says Garrett Morris, a chemist at the University of Oxford, UK, who works on software for drug discovery, but it’s clear that the models must be extremely large. Only a small number of entities on the planet have the capacity to train such large models, which require vast numbers of compute units running for months, and the means to pay the electricity bill. “That constraint is limiting science’s ability to make these kinds of discoveries,” he says.

Scrutinizing such work can be difficult, according to one Japanese respondent who works in the earth sciences and didn’t want to be named. “As an editor, it’s very hard to find reviewers who are familiar both with machine-learning (ML) methods and with the science that ML is applied to,” he wrote.

Many researchers said that LLMs and artificial intelligence are here to stay. “We have to focus now on how to make sure it brings more benefit than issues,” said Yury Popov, a specialist at the Beth Israel Deaconess Medical Center in Boston, Massachusetts.