
OpenAI could cease operating in the EU if it can’t comply with future regulation

The Verge: https://www.theverge.com/2023/5/25/23737116/openai-ai-regulation-eu-ai-act-cease-operating

Open and Closed: Why Openness Matters and Why It Sometimes Doesn’t: Ethical Considerations at Hugging Face

I have long researched and led releases of generative artificial intelligence systems, and I focus on ethical considerations at Hugging Face. Doing this work, I’ve come to think of open source and closed source as the two ends of a gradient of options for releasing generative AI systems, rather than as a simple either/or question.

In the middle of the gradient are the systems casual users are most familiar with. Both ChatGPT and Midjourney, for instance, are publicly accessible hosted systems where the developer organization, OpenAI and Midjourney respectively, shares the model through a platform so the public can prompt it and generate outputs. With their broad reach and no-code interfaces, these systems have proved both useful and risky. While they allow for more feedback than a closed system, because people outside the host organization can interact with the model, those outsiders have limited information about the system and cannot robustly research it by, for example, evaluating the training data or the model itself.

At the most closed end of the gradient are systems so closed that the public does not even know they exist. For obvious reasons, it is difficult to cite concrete examples of these. But closed systems that have been publicly announced are becoming increasingly common for newer capabilities, such as video generation. Because video generation is relatively new, there is little research or information about its risks and how to mitigate them. When Meta announced its Make-A-Video model in September 2022, it cited concerns like the ease with which anyone could make realistic, misleading content as reasons for not sharing the model. Instead, Meta said it would gradually allow access to researchers.

Many people think of release as a binary question: systems can be either open source or closed source. Proponents of open development, such as the BigScience project, believe that openness leads to more collective work on artificial intelligence systems and helps ensure those systems reflect people’s needs and values. Openness allows more people to contribute to research and development, but it also exposes systems to harm and misuse by malicious actors. Closed-source systems, like Google’s original LaMDA release, are protected from actors outside the developer organization but cannot be audited or evaluated by external researchers.

Altman’s recent comments help fill out a more detailed picture of what the company wants. He has told US politicians that regulation should mostly apply to future, more powerful AI systems; the EU AI Act, by contrast, focuses on the current capabilities of AI software.

In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained on large amounts of data scraped from the web, much of it protected by copyright. Companies that reveal their data sources leave themselves open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image company Getty Images for using copyrighted data.

OpenAI used to share this sort of information but has stopped as its tools have become increasingly commercially valuable. In March, OpenAI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work from being copied by rivals.

In comments reported by Time, Altman said the concern was that the system would be designated as high risk under the EU legislation, which would require OpenAI to meet certain safety and transparency requirements. “Either we’ll be able to solve those requirements or not,” said Altman, adding that there are technical limits to what is possible.


OpenAI CEO Sam Altman has warned that the company might pull its services from the European market in response to AI regulation being developed by the EU. According to a Financial Times report, Altman said the details of the legislation really matter: “We will try to comply, but if we can’t comply we will cease operating.”
