Artificial Intelligence Safety: An Executive Order or an Act of Parliament? What do governments and regulators need to do?
President Joe Biden has issued an executive order on artificial intelligence safety. Under it, the developers of the most powerful artificial intelligence systems have a responsibility to share safety data with the government, and the US National Institute of Standards and Technology will set safety standards.
Sunak has said that the United Kingdom is not in a rush to regulate. “This is a point of principle — we believe in innovation,” he said. But innovation and regulation shouldn’t be in opposition. The world needs both.
There is a rich literature on regulation, from banking to medicines, that governments can draw on. The International Atomic Energy Agency, established in 1957 to govern civil nuclear technology, is one model that has been suggested for artificial intelligence (see https://doi.org/k3h2; 2023).
Every technology is different, but some fundamental principles have emerged over the decades. One is the need for transparency: regulators must have access to complete data if they are to make good decisions. Another is the need for legally binding standards for monitoring, compliance and liability.
The 2008 financial crisis shows what can happen when regulators do not have close access to data. Banks and insurance companies had become dependent on risky credit products, and regulators did not detect this until it was too late. Relatively few people understood how these products had been created or what their systemic risks were, as Andrew Haldane and Robert May described in the aftermath (A. G. Haldane & R. M. May Nature 469, 351–355; 2011).
Other mainstays of regulation include registration, regular monitoring, reporting of incidents that could cause harm, and continuing education, for both users and regulators. There are lessons in road safety here. The car has transformed the lives of billions, but it also causes harm. To mitigate the risks, vehicles must be tested regularly and manufacturers must conform to product safety standards. Regulation can also encourage innovation: emissions standards, for example, spurred the development of cleaner cars.
When it comes to the safety of artificial intelligence, governments are reluctant to use the word regulation, in part because they want to attract flagship companies — Elon Musk, the owner of X (formerly Twitter), is reported to be attending the summit. But for a technology that can reinforce bias and discrimination, companies should not be marking their own homework.
The White House was a hive of activity on Monday, with toys for the kids. But the costumed children celebrating Halloween with President Biden weren’t there for the unveiling of a sweeping new executive order on artificial intelligence. As the US government digests its lengthy new to-do list and Vice President Kamala Harris heads to a UK summit on AI to sell the president’s vision, leaders in Congress and nations around the world may be asking themselves: trick or treat?
Biden wants oversight of private projects that use artificial intelligence. He will invoke the Defense Production Act, a Korean War-era law that allows government control of industries, to compel private US technology firms to report information about their most secretive artificial intelligence projects.
Biden said the executive order uses the government’s existing authority to require companies to prove that their most powerful systems are safe before they can be used.
Vice President Harris was at Biden’s side for the announcement, and is taking his AI vision on the road for the rest of the week: the UK summit on artificial intelligence safety, hosted by Prime Minister Rishi Sunak, will focus on far-off risks.