A tech briefing: Artificial intelligence will help map methane pollution from space


The environmental impact of artificial intelligence: it is not just energy, the industry is thirsty for water too

It is not just energy: artificial-intelligence systems also need a lot of water, both for cooling and for generating electricity. GPT-4 is served by a giant data-centre cluster in West Des Moines, Iowa. In July, the month before OpenAI finished training the model, the cluster used 4% of the district’s water. As Google and Microsoft prepared their Bard and Bing large language models, both reported major spikes in water use: increases of 20% and 34%, respectively, in one year, according to the companies’ environmental reports. One preprint1 suggests that, by 2027, global demand for water for AI could be half that of the United Kingdom. OpenAI’s chief executive, Sam Altman, has called the effects of the industry’s pursuit of scale the “elephant in the room”, admitting that an energy breakthrough will be needed.

But at last, legislators are taking notice. On 1 February, US Democrats led by Senator Ed Markey of Massachusetts introduced the Artificial Intelligence Environmental Impacts Act of 2024. The bill directs the National Institute of Standards and Technology to help establish standards for assessing the environmental impact of AI and to create a voluntary reporting framework for operators. Whether the legislation will pass remains uncertain.

So what energy breakthrough is Altman banking on? Not the design or deployment of more sustainable artificial-intelligence systems, but nuclear fusion. In 2021, he started investing in Helion Energy, a fusion company.

There is no reason this can’t be done. The industry could prioritize using less energy, build more efficient models and rethink how it designs and uses data centres. As the BigScience project in France demonstrated with its BLOOM model3, it is possible to build a model of a similar size to OpenAI’s GPT-3 with a much lower carbon footprint. But that’s not what’s happening in the industry at large.

Source: Generative AI’s environmental costs are soaring — and mostly secret

How legislators can curb the environmental impact of AI

Voluntary measures rarely produce a lasting culture of accountability or consistent adoption, because they rely on goodwill. The situation is urgent, and more needs to be done.

Modifying neural-network architectures, for example to make models smaller and less computationally demanding, could help the industry to use resources more sustainably.
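To make the scale of the potential savings concrete, here is a back-of-the-envelope sketch. The ~6 FLOPs per parameter per training token rule is a widely used rough estimate of transformer training compute; the model and dataset sizes below are illustrative assumptions, not figures from any real system.

```python
# Rough comparison of training compute for two hypothetical model sizes,
# using the common ~6 FLOPs per parameter per training token estimate
# for transformers. All figures are illustrative assumptions.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs: ~6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

large = training_flops(175e9, 300e9)  # a GPT-3-scale assumption
small = training_flops(7e9, 300e9)    # a smaller model on the same data

print(f"large model: {large:.2e} FLOPs")
print(f"small model: {small:.2e} FLOPs")
print(f"compute ratio: {large / small:.0f}x")  # prints "compute ratio: 25x"
```

Under this estimate, training compute scales linearly with parameter count, which is one reason efficiency-focused architectures can cut energy, and therefore water, demand.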

Legislators should offer both carrots and sticks. For a start, they could set benchmarks for energy and water use, incentivize the adoption of renewable energy, and mandate comprehensive environmental reporting and impact assessments. The Artificial Intelligence Environmental Impacts Act will not solve everything, but it is a start.

A soon-to-be-launched satellite will help to create the most complete global map of methane pollution. Plus: OpenAI’s video generator Sora could open the deepfake floodgates, and what the EU’s tough AI law means for research.

A pattern-learning AI system can tell individual Eurasian beavers (Castor fiber) apart by the patterns of scales on their tails. After training on a set of tail photographs from previous hunts, the program identified individuals with 98% accuracy. The method could make tracking populations of the mammals easier and faster.
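The matching step in such a system can be illustrated with a toy nearest-neighbour sketch. The real program learns features from tail photographs; here each animal’s scale pattern is stood in for by an invented numeric feature vector, and identification is just a closest-match lookup.

```python
# Toy individual-identification sketch: match a query feature vector to
# the closest entry in a gallery of known animals. The feature vectors
# are invented stand-ins for learned tail-scale descriptors.
import math

def identify(query, gallery):
    """Return the ID of the gallery entry nearest to the query vector."""
    return min(gallery, key=lambda animal_id: math.dist(query, gallery[animal_id]))

gallery = {
    "beaver_A": [0.9, 0.1, 0.4],
    "beaver_B": [0.2, 0.8, 0.7],
}
print(identify([0.85, 0.15, 0.5], gallery))  # prints "beaver_A"
```

A real system would extract these vectors with a trained network and report a confidence score rather than only the single nearest match.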

AI in brief: deepfakes, robots and regulation

Sora is a system that generates realistic videos from text. Combined with voice cloning, the technology could open an entirely new front in creating deepfakes of people. For now, Sora’s output contains telltale glitches that make it possible to detect, but in the long run “we will need to find other ways to adapt as a society”, says computer scientist Arvind Narayanan. That could, for example, include watermarking AI-generated videos.
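The watermarking idea mentioned above can be illustrated with a deliberately simple sketch: embed a known bit string in the least-significant bits of pixel values, then test for it later. Production video-watermarking schemes are far more robust; the signature and example frame below are invented for illustration only.

```python
# Minimal watermark sketch: hide a known bit string in the
# least-significant bits (LSBs) of pixel values, then test for it.
# MARK and the example frame are invented for illustration.

MARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical watermark signature

def embed(pixels, mark=MARK):
    """Overwrite the LSB of the first len(mark) pixels with the mark bits."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels, mark=MARK):
    """True if the first len(mark) pixel LSBs spell out the mark."""
    return [p & 1 for p in pixels[:len(mark)]] == mark

frame = [200, 13, 77, 54, 91, 120, 33, 8, 250]
print(detect(embed(frame)), detect(frame))  # prints "True False"
```

An LSB mark like this is trivially destroyed by re-encoding or cropping, which is exactly why real schemes spread the signature redundantly across the signal.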

For robots, tidying a room is a complex task: this graphic shows some of the many things that can go wrong when a system called OK-Robot is instructed to move objects around a room it has never encountered before. OK-Robot combines a wheeled base, a tall pole and a retractable arm with a gripper, all run by off-the-shelf AI models. In less cluttered rooms, it succeeded 40% of the time when given tasks such as ‘move the soda can to the box’. (MIT Technology Review | 4 min read)

The European Union’s new AI Act will apply its toughest rules to the riskiest algorithms, but will exempt models developed purely for research. Observers doubt that this will cause problems for scientists, because regulators do not want to cut off innovation. Some scientists suggest that the act could bolster open-source models, while others worry that it could hinder the small companies that drive research. Powerful general-purpose models, such as the one behind ChatGPT, will be subject to separate, stringent checks. Critics argue that regulating models by their capabilities, rather than their use, has no scientific basis. “Smarter and more capable does not mean more harm,” says AI researcher Jenia Jitsev.

Scientific publishers have started to use tools such as ImageTwin, ImaCheck and Proofig to help them to detect questionable images. The tools make it faster and easier to find rotated, stretched, cropped, spliced or duplicated images, but they are less proficient at spotting sophisticated AI-generated fakery. “The existing tools are at best showing the tip of an iceberg that may grow dramatically, and current approaches will soon be largely obsolete,” says Bernd Pulverer, chief editor of EMBO Reports.
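One building block behind duplicate-image detection is a perceptual hash that changes little under mild edits, so near-duplicates hash alike. The sketch below implements a minimal “average hash” on tiny invented grayscale grids; the commercial tools named above use far more sophisticated methods.

```python
# Perceptual "average hash" sketch: 1 bit per pixel, set when the pixel
# is brighter than the image mean. Mild global edits (e.g. uniform
# brightening) leave the hash unchanged, so near-duplicates match.
# The 4x4 grids below are invented stand-ins for real images.

def average_hash(img):
    """Hash a grayscale grid: one bit per pixel, pixel > image mean."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

original = [[10, 200, 30, 220], [15, 190, 25, 210],
            [12, 205, 28, 215], [11, 198, 33, 222]]
# A slightly brightened copy: a "duplicate" after mild editing.
edited = [[min(p + 5, 255) for p in row] for row in original]

print(hamming(average_hash(original), average_hash(edited)))  # prints "0"
```

A small Hamming distance between hashes flags a likely duplicate; splices and AI-generated fabrications evade this kind of check, which is the gap Pulverer describes.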

An AI system built by the start-up Quantiphi tests drugs by analysing their impact on donated human cells. According to company co-founder Asif Hasan, the method achieves almost the same results as animal tests while cutting the time and cost of preclinical drug development by 45%. Another start-up, VeriSIM Life, creates simulations that predict how a drug will behave in the body. Reducing the pharmaceutical industry’s reliance on animal experiments could ease ethical concerns as well as risks to humans. “Animals do not resemble human physiology,” says Quantiphi scientist Rahul Ramchandra Ganar. “And hence, when you go for clinical trials, there are many new adverse events or toxicology that get discovered.”