
ChatGPT: the artificial-intelligence system that was both a good thing and a bad thing for science

Nature: https://www.nature.com/articles/d41586-023-04014-1

A-Lab and Nature’s 10: Artificial Intelligence Meets Human Science, with an Application to Tracing Wine by Its Chemical Profile

Last month, scientists demonstrated the A-Lab, an autonomous chemistry system that can synthesize new materials without human intervention. Critics have questioned its ability to analyse results, as well as whether it actually made 41 new compounds. The chemist Andy Cooper says that this doesn’t invalidate the A-Lab idea, but it shows that there is work still to be done. Two of the critics’ teams are now collaborating to check the paper’s claims in more detail.

ChatGPT’s sole aim is to plausibly continue dialogues in the style of its training data. But in doing so, it and other generative artificial-intelligence (AI) programs are changing how scientists work (see go.nature.com/413hjnp). They have also revived debate about the limits of AI, the nature of human intelligence and how best to regulate the interaction between the two. That’s why this year’s Nature’s 10 has a non-human addition.

The technology is also dangerous. Automated conversational agents can aid cheats and plagiarists, and should not be left unchecked. Some scientists have admitted to using the chatbot to generate articles without declaring that they were made with AI.

The European Union has agreed on one of the first comprehensive attempts to regulate AI use. Under the plan, AI applications would be categorized on the basis of their potential to harm people and society. Some applications, such as creating a facial-recognition database by scraping images from the Internet, would be banned. The legislation won’t come into force until 2025 at the earliest, a long time in the fast-moving field of artificial intelligence.

Wine can be traced to its origin by a machine-learning tool that analyses the drink’s complex chemical profile. Wines from the Bordeaux region of France differ in their levels of various chemical compounds, and the tool, trained on these profiles, could identify the estate at which each bottle was produced with 99% accuracy. The approach could help the wine industry to authenticate products. “There’s a lot of wine fraud around with people making up some crap in their garage, printing off labels, and selling it for thousands of dollars,” says neuroscientist and study co-author Alexandre Pouget.
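The study’s gas-chromatography data are not reproduced here, so the sketch below is only a rough illustration of the general approach: it trains a random-forest classifier on scikit-learn’s built-in wine dataset (13 chemical measurements per bottle, three grape cultivars) and reports cross-validated accuracy. The dataset, the model choice and every name in the code are stand-ins, not the study’s actual method or data.

```python
# Minimal sketch: classify wines by their chemical profile.
# scikit-learn's toy wine dataset stands in for the study's
# gas-chromatography measurements of Bordeaux wines.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)          # chemical measurements, wine class labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy

print(f"mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

On this small toy dataset a simple classifier already scores well; the study’s harder problem was distinguishing estates within a single region from far richer chemical signatures.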

Artificial Intelligence and Robotics in the 21st Century: The Case of the Overlooked Women of Computer Science

Journalist Sharon Goldman imagined millions of eye rolls happening at once when The New York Times failed to include any women in its list of artificial-intelligence pioneers. Even the creator of the huge image data set that enabled key advances in computer vision isn’t on it. Goldman writes that this is just a glaring symptom of a bigger problem. “Aren’t we all tired of it?”

Cockroaches controlled by on-board computers could search for earthquake survivors; sensor-carrying jellyfish could monitor the ocean for the effects of climate change. With biohybrid robots such as these, engineers can harness organisms’ natural capabilities, and animals can usually stay on the move for much longer than battery-powered machines. Most biohybrids are handmade, however, and the biggest issue is scaling up production. “Arts and crafts is not engineering,” says biomedical engineer Kit Parker. “You’ve got to have design tools. These are just tricks that people have used before.”

This article is part of Nature Outlook: Robotics and artificial intelligence, an editorially independent supplement produced with financial support from FII Institute.

Computer scientist Joy Buolamwini recalls her first encounter with algorithmic bias as a graduate student: the facial-recognition software she was using for a class project failed to detect her dark skin, but worked when she put on a white Halloween mask. The full podcast episode is 37 minutes long.

Nature’s 10 and the Impact of LLMs on Science and Research Practices

Nature’s 10 is a list of key developments in science and the people who contributed to them.

It co-wrote scientific papers — sometimes surreptitiously. It drafted outlines for presentations, grant proposals and classes, churned out computer code, and served as a sounding board for research ideas. It also invented references, made up facts and regurgitated hate speech. Most of all, it captured people’s imaginations: by turns obedient, engaging, entertaining, even terrifying, ChatGPT took on whatever role its interlocutors desired — and some they didn’t.

ChatGPT is not a person, yet it appears in a list of people who have shaped science, because this computer program has had a profound and wide-ranging effect on science in the past year.

Some researchers use these apps to help summarize or write manuscripts, polish applications and write code. ChatGPT and related software can help to brainstorm ideas, enhance scientific search engines and identify research gaps in the literature, says Marinka Zitnik, who works on AI for medical research at Harvard Medical School in Boston, Massachusetts. Training models on scientific data might eventually produce systems that can help to guide research, perhaps by designing new molecules or simulating cell behavior.
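As a rough illustration of that kind of everyday use (not Zitnik’s own workflow, which is not described in that detail here), the sketch below asks a chat model to suggest research gaps. It assumes the openai Python package (version 1 or later) with an API key already set in the environment; the model name and prompt are placeholders chosen for the example.

```python
# Illustrative sketch: using a chat model as a brainstorming aid.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment;
# the model name and prompt are placeholders, not anything from the article.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a research assistant."},
        {"role": "user", "content": "Suggest three under-explored research questions "
                                    "about machine learning for single-cell biology."},
    ],
)

print(response.choices[0].message.content)
```

Anything such a model returns still needs the same scrutiny as any unverified suggestion, for the reasons discussed next.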

Errors and biases are baked into how this kind of generative AI works. LLMs build up a model of the world by mapping language’s interconnections, and then spit back plausible samplings of that distribution with no concept of evaluating truth or falsehood. As a result, the programs fabricate information, including non-existent scientific references.
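To make that concrete, the toy sketch below mimics the core loop of generation: repeatedly sampling a plausible next token from a probability table. The table and tokens are invented for illustration, and nothing in the loop checks whether the output is true, which is exactly why a real LLM can emit a confident but non-existent reference.

```python
# Toy illustration: an LLM-style generator samples plausible next tokens
# from a learned probability distribution; no step checks factual accuracy.
import random

# Made-up next-token probabilities conditioned on the previous word.
next_token_probs = {
    "the":   {"study": 0.5, "reference": 0.3, "results": 0.2},
    "study": {"shows": 0.6, "cites": 0.4},
    "shows": {"that": 1.0},
}

def sample_continuation(token, steps=3):
    out = [token]
    for _ in range(steps):
        probs = next_token_probs.get(out[-1])
        if not probs:
            break
        tokens, weights = zip(*probs.items())
        out.append(random.choices(tokens, weights=weights)[0])  # plausible, not verified
    return " ".join(out)

print(sample_continuation("the"))
```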

Computational linguist Emily Bender sees few appropriate ways to use what she calls synthetic text-extruding machines. ChatGPT has a large environmental impact and problematic biases, and can mislead users into thinking that its output comes from a person, she says. On top of that, OpenAI is being sued for stealing data and has been accused of exploitative labour practices (hiring freelancers at low wages).

The sheer size and complexity of LLMs means that they are intrinsically ‘black boxes’, but understanding why they produce what they do is harder still when their code and training materials aren’t public, as in ChatGPT’s case. The open-source LLM movement is growing, but so far these models are not as capable as the large proprietary programs.

Some countries are developing national AI-research resources to enable scientists outside large companies to build and study big generative AIs (see Nature 623, 229–230; 2023). But it remains unclear how far regulation will compel LLM developers to disclose proprietary information or build in safety features.
