Open Access to Artificial Intelligence Models: A Growing Problem for Academic Research
Researchers with the skills to work at the cutting edge of artificial intelligence are in high demand, and many companies have been filling vacancies with graduates from universities.
Companies invested more heavily in responsible-AI research in the past, says Vallor, but that interest waned with the boom in generative AI, prompting a “race to the bottom” to capitalize on the market. “The knowledge about responsible AI is all there; the problem is that large AI companies don’t have incentives to apply it,” she says. Those incentives could be changed.
She suggests, for instance, that companies that develop and deploy artificial intelligence responsibly could face a lighter tax burden. “Those that don’t want to adopt responsible-AI standards should pay to compensate the public who they endanger and whose livelihoods are being sacrificed,” says Vallor.
It is also crucial that academics have open access to the technology and code that underpin commercial artificial-intelligence models. Even the best experts can’t simply look at a neural network and figure out how it works. “We know very little about the capabilities and limitations of these systems, so it is absolutely essential that we know as much as possible about how they are created.”
Many companies are moving towards open access for their artificial-intelligence models because they would like more people to work with them: the industry has an interest in people being trained on its tools, says one computer scientist. Meta, the parent company of Facebook, for example, has been pushing for more open models because it wants to compete better with the likes of OpenAI and Google. Giving people access to the models will also allow new ideas to come in, he says.
Still, it is not realistic to expect that companies will be willing to give away all of their secret sauce.
Corporate AI research output in the United States more than doubled in the past year in the Nature Index journals. But it still represents a tiny proportion of the country’s total AI Share in the index — just 3.8% last year — suggesting that companies are either publishing the bulk of their research elsewhere or keeping it under wraps. South Africa is the only African country whose artificial-intelligence output places it among the top 40 nations. Although Nature Index journals represent only a fraction of AI research, finding ways to redress these imbalances is essential to ensure that this revolution benefits everyone.
But academics need funding to make the most of that freedom. Theis says that stronger investment in basic research worldwide would help.
Recruitment programmes such as the Canada Excellence Research Chairs initiative, which offers up to Can$8 million over eight years to entice top researchers in various fields to move to, or remain in, Canada, and Germany’s Alexander von Humboldt Professorships in AI, worth €5 million over five years, have helped to shore up AI research in their respective countries. Hoos himself holds one of the Humboldt professorships.
The Confederation of Laboratories for Artificial Intelligence Research in Europe, co-founded by Hoos, has put forward an even bigger plan, inspired by the way some fields share large, expensive facilities across institutions and even countries. “Our friends the particle physicists have the right idea,” says Hoos. “They build big machines funded by public money.”
Companies also have access to much larger data sets with which to train those models because their commercial platforms naturally produce that data as users interact with them. “When it comes to training state-of-the-art large language models for natural-language processing, academia is going to be hard-pressed to keep up,” says Fabian Theis, a computational biologist at Helmholtz Munich, in Germany.
Policy and Innovation: How Science-Policy Advisers Help Infrastructure Withstand Climate Change
A few months in, my research brought me into close contact with the oyster-boat crews doing the harvesting. I wanted to understand what was going on in the fishing community.
I support the environmental and climate priorities of the president. One area I work in is infrastructure policy. Working with the US Congress, the administration is investing billions of dollars in infrastructure across the nation, a watershed moment for climate action. We want to ensure that these future investments in communities are themselves resilient to the impacts of climate change. So, if communities are going to build a new road or repair a bridge, we share the science of climate change with the builders and planners so they can take that information into consideration.
We are also working across the government on nature-based solutions, and I know the science behind them: a restored marsh, for example, acts as a buffer against storm surges and sea-level rise. Such initiatives both strengthen nature and provide protective benefits for people.
I became interested in policy when I was pursuing my master’s degree in astronomy and astrophysics at the University of Victoria, Canada. A good friend encouraged me to be vice-president of the graduate student association, through which I eventually helped to negotiate dental coverage for members. I realized during the negotiations that so many problems needed solving through policy change, and that I really enjoyed the work.
I work for the Natural Sciences and Engineering Research Council (NSERC), which distributes government funds to university researchers throughout Canada. My job is to collect data to see whether council policies are working.
The chief data officer’s main job is to make sure that all of NSERC’s data-related infrastructure is in order, so that data can be a key driver of public service.
Canada is investing in electric vehicles, their batteries and battery recycling. A member of parliament might say that electric vehicles are the future and ask what research the council is funding in that area. At the moment, we have to comb through reports to find the associated projects, which is time-consuming. We want to incorporate machine learning into our processes, so that relevant projects can be identified without anyone having to read every report, and so that we can draw on much larger data sets. Using those data, we can help officials to craft policy more efficiently.
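The council’s planned system is not described in any detail, so the following is only a toy sketch of the idea: a simple keyword scorer stands in for the trained machine-learning classifier, and the project summaries and search terms are invented for illustration.

```python
# Toy sketch: flag funded-project summaries relevant to a policy topic.
# A real system would use a trained text classifier; keyword scoring
# stands in for that step here. All data below are invented.

EV_TERMS = {"electric vehicle", "ev battery", "battery recycling", "charging"}

def relevance(summary: str) -> int:
    """Count how many topic terms appear in a project summary."""
    text = summary.lower()
    return sum(term in text for term in EV_TERMS)

projects = [
    "Novel cathode chemistry for EV battery recycling",
    "Pollinator diversity in boreal forests",
    "Fast-charging infrastructure for electric vehicle fleets",
]

# Surface relevant projects instead of reading every report.
relevant = [p for p in projects if relevance(p) > 0]
print(relevant)
```

The payoff is that an analyst answering a parliamentarian’s question starts from a short ranked list rather than from the full pile of annual reports.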
Source: Science-policy advisers shape programmes that solve real-world problems
The Diversity of Grant Applicants: An Analysis of NSERC Data
The area we are working on is equity, diversity and inclusion (EDI). Making EDI policies more effective is important to me because I’m mixed race (my father is Black Jamaican and my mother is white Canadian) and neurodiverse — I have attention deficit hyperactivity disorder, am on the autism spectrum and have mental-health challenges. Many things worked against me in my science education, and I am fortunate to have made it to where I am.
To better understand EDI, our office put forward the idea of collecting data on the people who apply for grants, to learn more about them. Data had been collected here and there, but in inconsistent ways. An NSERC colleague and I pulled those data together and analysed them.
We showed not only the areas in which the council is doing really well, but also those where it has considerable gaps that it needs to focus on. The diversity of applicants for funding is not representative of the Canadian population as a whole. The data showed how many applications would be needed from certain demographic groups for them to be represented equitably. If the student pool were representative of the population, we might expect 50 Black people to apply for a student fellowship; instead, we got 30 applications.
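The gap described above is simple arithmetic: a group’s share of the population multiplied by the size of the applicant pool gives the expected number of applications, to compare with the actual count. A minimal sketch reproducing the 50-versus-30 example; the population share and pool size below are assumptions chosen only to make the numbers come out, not figures from the analysis.

```python
# Expected vs. actual applications from a demographic group.
# The share and pool size are assumed for illustration: a 3.5%
# population share and ~1,429 applications yield "expect 50".
population_share = 0.035      # group's share of the population (assumed)
applicant_pool = 1429         # total fellowship applications (assumed)
actual_applications = 30      # applications received from the group

expected = round(population_share * applicant_pool)
gap = expected - actual_applications

print(f"expected {expected}, received {actual_applications}, gap {gap}")
```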
For example, NSERC can say to a university: “We see that you didn’t report having any Black applicants to your PhD programmes last year. Were you okay with that?” Or: “Why might that be?” We can see the data, so we know about the gaps.
What will it take to reach net-zero CO2 emissions? A case study of India’s Ministry of Environment, Forest and Climate Change
For instance, India’s Ministry of Environment, Forest and Climate Change reached out to academics and think tanks involved in modelling. Our group came up with scenarios discussing what it would take for the country to reach net-zero carbon dioxide emissions by 2060, 2070, 2080 and 2100. The models gave ministers potential paths that the country could take to reach net zero, and these informed India’s decision to commit to doing so in the Glasgow Climate Pact at COP26, the 2021 UN climate change conference.
It’s difficult to get an audience with policymakers, and we really need to follow up several times. It’s not as if I say something and they lap it up. That never happens. It takes many conversations.
To know what will have the biggest impact, you have to speak their language. If you’re talking to a legislative member who works directly with people, it could be social impacts. By contrast, for policymakers at the highest levels of government, it could be economic impacts and investments. These are the big-ticket items that resonate; they want to make a difference at the end of the day.
IUPAC: Machine-Readable Chemistry and the Use of Artificial Intelligence in the Discovery of New Chemical Substances
I was a forest ecologist at the Indian Institute of Science in Bengaluru for 25 years. I did long-term monitoring in a biodiversity hotspot, India’s Western Ghats mountain range.
I had assumed I’d become a tenure-track faculty member at a university. As a postdoc at Florida State University’s Coastal and Marine Laboratory in St Teresa, I was doing research that I thought would be useful to the community: investigating local drought conditions, which were leading to declines in oyster populations.
Careers in science policy are important and rewarding, according to the four science-policy specialists profiled here. But there are challenges: getting on the radar of policymakers, for one. Building trust with government officials at the early stages of policymaking is one way to do that.
I run a research group and volunteer at several organizations that help global leaders create more effective public policies.
Our reports have identified emerging technologies that will shape the future of science. Some of the predictions have come true. For example, our 2015 report identified the gene-editing tool CRISPR–Cas9 as a transformative technology. The scientists who discovered CRISPR won a Nobel prize five years later.
The report also highlighted gene-based vaccines. The COVID-19 vaccines were still to come, and we didn’t know about them; we simply thought that people weren’t paying enough attention to this technology, and that governments should put more resources into developing it.
The use of artificial intelligence in research is one area in which science policy can help. We need a way of representing chemistry that machines can read. At IUPAC, we are creating a textual identifier for chemical substances that will provide a standard way of encoding molecular information. This will accelerate the use of artificial intelligence in discovery.
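The piece doesn’t name the identifier being developed, but IUPAC’s existing International Chemical Identifier (InChI) shows what such a machine-readable textual encoding looks like: a single string whose slash-separated layers carry the molecular formula, atom connectivity and hydrogen positions. A minimal sketch of pulling those layers apart, using the standard InChI string for ethanol:

```python
# Split an IUPAC InChI string into its machine-readable layers.
# Example molecule: ethanol (CH3-CH2-OH).
inchi = "InChI=1S/C2H6O/c1-2-3/h3H,1-2H3"

version, formula, *layers = inchi.split("/")
# The connectivity layer starts with 'c'; strip the prefix letter.
connectivity = next(l[1:] for l in layers if l.startswith("c"))

print(version)       # format version prefix
print(formula)       # molecular formula layer
print(connectivity)  # which atoms bond to which
```

Because every tool that parses the string recovers exactly the same layers, identifiers like this let AI systems ingest chemical databases without the ambiguity of trivial names or drawn structures.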
Systems thinking is one of the approaches IUPAC uses to revise science education. This means teaching science in real-world contexts, connecting scattered pieces of information and helping students to build a deeper understanding of the subject.
We’re working with governments worldwide to provide guidelines, teaching tools and training workshops. In the past five years, we’ve run workshops in South Africa, the United States and Egypt, training secondary-school teachers to bring this approach to their classrooms.
This requires producing high-quality reports that stand up to the most rigorous scrutiny. But beyond the report, you need to build trust through active listening and empathy. That work starts the day you are asked to advise on public policy and continues until the day the legislation is implemented.
Providing the best knowledge in context is what scientific advice is about. Just as decision makers work with lawyers to ensure their decisions are constitutionally sound, they need to work closely with scientific advisers to incorporate the latest knowledge into policies.
US States and the European Union’s Artificial Intelligence (AI) Act: The Fight over How to Regulate the Tech Industry
There is growing evidence that US states are following the EU in drafting their own AI legislation, as two bills enacted in Colorado and Utah and two drafts in Oklahoma and Connecticut show. Tech-industry lobbyists, however, are pushing for less stringent legislation that minimizes compliance costs but does little to protect individuals.
Lobbying groups claim to prefer national, unified AI regulation over state-by-state fragmentation, a line that tech companies have parroted in public. But in private, some advocate light-touch, voluntary rules all round, revealing a dislike of both state and national AI legislation. If neither kind of regulation emerges, AI companies will have preserved the status quo: a bet that divergent regulatory environments in the EU and the United States, with a light-touch regime in the latter, favour them more than a harmonized but heavily regulated system would.
The tech industry also exerts power beyond this kind of passive influence. A section on generative artificial intelligence was included in the Connecticut draft bill, but it was removed because of lobbying from industry. And although the bill then received support from some big tech companies, it is still in limbo: the governor of Connecticut, arguing that it would stifle innovation, threatened to veto it. Its progress is frozen, as is that of many of the other, more comprehensive AI bills being considered by various states. The Colorado bill is expected to be altered in an effort not to hamper innovation.
A major difference between the state bills and the AI Act is their scope. The AI Act takes a sweeping approach aimed at protecting fundamental rights and establishes a risk-based system, where some uses of AI, such as the ‘social scoring’ of people based on factors such as their family ties or education, are prohibited. High-risk AI applications, such as those used in law enforcement, are subject to the most stringent requirements, and lower-risk systems have fewer or no obligations.