Corporate lobbying is taking over artificial intelligence laws in the US


Is there enough talent and investment in responsible AI research? Evidence of a US industrial research imbalance and its impact on society

“Industry hiring is much higher than the overall growth of computer science research faculty,” says Ahmed. In 2004, just 21% of artificial-intelligence PhDs from North American universities went to work in industry; that share has since grown substantially. This growing imbalance worries some in academia. The concern is that companies are focused on profit rather than on fundamental research, which shapes the kinds of products they seek to develop. If developments of major consequence for society are driven mainly by short-term commercial interests, we have a problem.

Many scholars believe that the industry’s incentive structure leads firms to underinvest in research on the ethical and social consequences of artificial intelligence, important as that work is. An analysis by other researchers supports that suspicion: leading AI firms produce significantly less responsible-AI research than conventional AI research, and the responsible-AI work they do publish is narrower in scope and less diverse.

Companies that develop and deploy AI responsibly could face a lighter tax burden, Vallor suggests. “Those that don’t want to adopt responsible-AI standards should pay to compensate the public who they endanger and whose livelihoods are being sacrificed,” she says.

For that kind of scrutiny to happen, however, it is imperative that academics have open access to the technology and code that underpin commercial AI models. “Nobody, not even the best experts, can just look at a complex neural network and figure out exactly how it works,” says Hoos. It is essential to understand the capabilities and limitations of these systems so that we do not make them worse.

Many companies are moving towards open access for their AI models because they want more people to be able to work with them, says Fabian Theis, a computational biologist at Helmholtz Munich in Germany. “It’s a core interest for industry to have people trained on their tools,” he says. Meta, the parent company of Facebook, for example, has been pushing for more open models because it wants to compete better with the likes of OpenAI and Google. Giving people access to its models allows an inflow of new, creative ideas, says Daniel Acuña, a computer scientist at the University of Colorado Boulder.

Still, it is unrealistic to expect companies to give away their secret sauce, which is another reason why it is important for academia to keep up with industry developments.

The majority of that corporate work is not published in leading journals: in 2023, research by corporations accounted for only 3.84% of the United States’ total Nature Index output in AI. But data from other sources show the increasingly influential role that companies play in research. The share of AI research articles with one or more industry co-authors grew from 22% in 2000 to 32% in 2020, according to a study published in Science1 last year. Industry’s share of the biggest, and therefore most capable, AI models went from 11% in 2010 to 96% in 2021. And on a set of 20 benchmarks used to evaluate the performance of AI models — such as their capabilities in image recognition, sentiment analysis and machine translation — industry alone, or in collaboration with universities, had the leading model 62% of the time before 2017, a share that has grown to 91% since 2020. “Industry is increasingly dominating the field,” says Ahmed.

Academics will need support to make the most of their freedom, and funding is the most important part of that. Theis says that broader investment in basic research would help.

Europe is also home to several initiatives to boost academic research in AI. Theis is scientific director of Helmholtz AI, an initiative run by the Helmholtz Association of German Research Centres. The unit provides funding, computing access and consulting to help research labs apply AI tools to their work, such as finding new ways to use the large data sets they produce in areas such as drug discovery and climate modelling. Theis wants to democratize access to AI in order to empower researchers and accelerate the labs’ research.

An even more ambitious plan has been put forward by CLAIRE, the Confederation of Laboratories for Artificial Intelligence Research in Europe. The plan is inspired by the physical sciences’ approach of sharing large, expensive facilities across institutions and even countries. “Our friends the particle physicists have the right idea,” says Hoos. “They build big machines funded by public money.”

Companies also have access to much larger data sets with which to train those models, because their commercial platforms naturally produce such data as users interact with them. It is going to be hard for academics to keep up with advances in large language models for natural-language processing, says Theis.

US state laws aren’t all you need: risk-based AI regulation in the absence of a unified federal law

It is not clear whether it will make sense for US firms to comply with EU rules in order to do business there, but if it does, the United States itself will be left comparatively less regulated. Although Brussels faced its fair share of lobbying and compromises, the core of the AI Act remained intact. It remains to be seen whether US state laws will stay the course.

One explanation for the lack of a big-tech effect on state laws is that discussion of how to protect personal data is more advanced in the United States, including a policy roadmap from the Senate and active input from industry players and lobbyists. Even so, in the absence of a unified federal law, states are afraid of losing tech jobs to states with weaker regulations and less data-protection legislation; that hesitancy was embodied by the governor.

In contrast, the state bills are narrower. The Colorado legislation drew directly on the Connecticut bill, and both include a risk-based framework, albeit of more limited scope than the AI Act’s. The frameworks cover similar areas, such as education, employment and government services, but only systems that make consequential decisions affecting consumer access to those services are considered high risk. The Connecticut bill would ban the dissemination of political deepfakes, but not their creation. Definitions of AI also vary between the US bills and the AI Act.

A major difference between the state bills and the AI Act is their scope. The AI Act takes a sweeping approach aimed at protecting fundamental rights and establishes a risk-based system, where some uses of AI, such as the ‘social scoring’ of people based on factors such as their family ties or education, are prohibited. High-risk applications, such as those used in law enforcement, are subject to more stringent requirements, while lower-risk systems have fewer or no obligations.

In March, the United States introduced a resolution to the UN calling on member states to embrace the development of “safe, secure, and trustworthy AI.” In July, China put forward a resolution calling for international cooperation on AI development and for making the technology widely available. Both resolutions were adopted with the backing of all UN member states.

The remarkable abilities demonstrated by large language models and chatbots in recent years have sparked hopes of a revolution in economic productivity but have also prompted some experts to warn that AI may be developing too rapidly and could soon become difficult to control. Not long after ChatGPT appeared, many scientists and entrepreneurs signed a letter calling for a six-month pause on the technology’s development so that the risks could be assessed.

There are immediate concerns about the potential for AI to automate misinformation, generate deepfake video and audio, replace workers en masse and entrench societal bias on an industrial scale. “There is a sense of urgency, and people feel we need to work together,” Nelson says.

“AI is part of US-China competition, so there is only so much that they are going to agree on,” says Joshua Meltzer, an expert at the Brookings Institution, a think tank in Washington DC. Key differences, he says, include which norms and values AI should embody, and what protections should exist around privacy and personal data.