How risky are general-purpose AI systems? The EU’s press release on the AI Act
The idea is to mitigate the dangers of certain AI functions based on their level of risk. Lawmakers wanted to expand that model to include foundation models, like those underlying general-purpose AI services such as Google’s Bard.
According to the press release, negotiators established obligations for “high-impact” general-purpose AI (GPAI) systems that meet certain benchmarks, like risk assessments, adversarial testing, incident reports, and more. It also mandates transparency from those systems, including creating technical documentation and “detailed summaries about the content used for training”, something companies like ChatGPT maker OpenAI have refused to do so far.
Rights groups said they were concerned about exemptions and other big loopholes in the AI Act, including the lack of protection for AI systems used in migration and border control, and the option for developers to opt out of having their systems classified as high risk.
The press release didn’t go into detail about how all that would work or what the benchmarks are, but it did outline a framework for fines if companies break the rules. Depending on the violation and the size of the company, penalties range from 35 million euros or 7 percent of global revenue down to 7.5 million euros or 1.5 percent of global revenue.
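To make the scale concrete, here is a minimal Python sketch of how those fine caps could be computed, assuming the “whichever is higher” formula the EU typically uses in such penalty regimes (the press release doesn’t spell out which figure prevails); the function name and the revenue figure are illustrative:

```python
# Minimal sketch: upper bound of an AI Act fine band, assuming the
# EU's usual "whichever is higher" rule. Illustrative only.
def fine_cap(global_revenue_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the greater of a fixed amount and a share of global revenue."""
    return max(fixed_cap_eur, global_revenue_eur * pct)

# Most serious violations: up to 35 million euros or 7 percent of revenue.
print(fine_cap(2_000_000_000, 35_000_000, 0.07))   # 140000000.0
# Lightest band: up to 7.5 million euros or 1.5 percent of revenue.
print(fine_cap(2_000_000_000, 7_500_000, 0.015))   # 30000000.0
```

For a company with 2 billion euros in global revenue, the percentage term dominates both bands; for small firms, the fixed amounts would set the ceiling instead.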
Banned applications and law enforcement exemptions under the AI Act
There are a number of applications where the use of AI is banned, like scraping facial images from CCTV footage; categorization based on “sensitive characteristics” like race, sexual orientation, religion, or political beliefs; emotion recognition at work or school; and the creation of “social scoring” systems. Also banned are AI systems that “manipulate human behavior to circumvent their free will” or “exploit the vulnerabilities of people.” The rules also include a list of safeguards and exemptions for law enforcement use of biometric systems, either in real time or to search for evidence in recordings.
“It’s very, very good,” Brando Benifei, an Italian lawmaker co-leading the Parliament’s negotiating efforts, said by text message after being asked if it included everything he wanted. “Obviously we had to accept some compromises but overall very good.” The law won’t take effect until 2025 at the earliest, and it threatens stiff penalties for violations of up to 35 million euros or 7 percent of a company’s global turnover.
More formal steps remain even with the agreement reached, including votes by the European Parliament’s committees, such as the Civil Liberties committee.
Negotiations over rules regulating live biometrics, such as facial recognition, and foundation artificial intelligence models like the one behind OpenAI’s ChatGPT have been divisive. These were reportedly still being debated this week ahead of Friday’s announcement, causing the press conference announcing the agreement to be delayed.
After months of debate about how to regulate companies like OpenAI, lawmakers from the EU’s three branches of government (the Parliament, Council, and Commission) spent more than 36 hours in total thrashing out the new legislation between Wednesday afternoon and Friday evening. The European Parliament election campaign begins in the new year, so lawmakers were under pressure to strike a deal.
Companies that do not comply can be fined as much as 7 percent of their global turnover. The bans on prohibited AI will take effect in six months, the transparency requirements in 12 months, and the full set of rules in around two years.
Measures designed to make it easier to protect copyright holders from generative AI, and to require general-purpose AI systems to be more transparent about their energy use, were also included.
Europe reaches a deal on the world’s first comprehensive AI rules: the EU’s role in regulating generative AI
Europe positioned itself as a pioneer, understanding the importance of its role as a global standard setter, a European Commissioner said at a Friday night press conference.
“Today’s political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing,” said Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group.
The European Parliament will still need to vote on the act early next year, but with the deal done that’s a formality, Benifei told The Associated Press late Friday.
Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.
Strong and comprehensive rules from the EU “can set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who’s an expert on EU law and digital regulation. Other countries “may not copy every provision but will likely emulate many aspects of it.”
Bradford said AI companies subject to the EU’s rules will probably extend some of those obligations to markets outside the continent, since it’s not efficient to retrain separate models for different markets.
Foundation models were one of the biggest sticking points for Europe. Negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies compete with big U.S. rivals, including OpenAI’s backer Microsoft.
Foundation models, biometric identification, and lingering privacy concerns
Large language models are trained on vast troves of written works and images uploaded to the internet. Generative AI systems use them to create something new, whereas traditional AI follows preset rules to complete tasks.
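The distinction can be illustrated with a toy Python sketch: a rule-based lookup that can only return preset answers, next to a tiny bigram sampler that learns transitions from training text and recombines them into new sequences. This is only a loose analogue of how large language models behave; the corpus and all names are invented for illustration:

```python
import random

# Traditional AI: preset rules map known inputs to fixed outputs.
def rule_based_reply(message: str) -> str:
    rules = {"hello": "Hi there!", "bye": "Goodbye!"}
    return rules.get(message.lower(), "Sorry, I don't understand.")

# Toy "generative" model: learn word-to-word transitions from training
# text, then sample sequences that need not appear verbatim in it.
def train_bigrams(text: str) -> dict:
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model writes text and the model learns from text it reads"
model = train_bigrams(corpus)
print(rule_based_reply("hello"))   # fixed, rule-bound answer
print(generate(model, "the"))      # novel recombination of training data
```

The rule-based function can never say anything outside its table, while the sampler produces word orderings its training text never contained, which is the property driving both the excitement and the copyright and transparency concerns around generative AI.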
The companies building the foundation models will be required to draw up technical documentation and provide details of their training content. The most advanced foundation models will face extra scrutiny in light of their “systemic risks”, including obligations to assess and mitigate those risks.
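As a rough illustration of what such technical documentation might cover, here is a hypothetical Python data structure; the field names are assumptions made for illustration, not a schema taken from the Act:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of foundation-model documentation fields suggested
# by the Act's obligations; not the Act's actual schema.
@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    training_data_summary: str          # the "detailed summary" of training content
    capabilities: list = field(default_factory=list)
    systemic_risk_assessment: str = ""  # extra scrutiny for the most advanced models
    mitigations: list = field(default_factory=list)

doc = ModelDocumentation(
    model_name="example-foundation-model",
    provider="Example AI Co.",
    training_data_summary="Public web text and licensed corpora (illustrative).",
    capabilities=["text generation", "summarization"],
    systemic_risk_assessment="Misuse and bias risks assessed (illustrative).",
    mitigations=["adversarial testing", "incident reporting"],
)
print(doc.model_name, "documented by", doc.provider)
```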
What data was used to train these models is not transparent, and that could pose risks to daily life, because foundation models act as the basic structures on which software developers build AI-powered services.
European lawmakers wanted a full ban on public use of face scanning and other “remote biometric identification” systems because of privacy concerns. But governments of member countries succeeded in negotiating exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks.
“Despite all the victories that may have been, the fact is that the final text still contains huge flaws,” said Daniel Leufer, senior policy analyst at Access Now.