newsweekshowcase.com

Sam Altman hopes that the tour will help allay fears about the existential threat of artificial intelligence

Wired: https://www.wired.com/story/sam-altman-world-tour-ai-doomers/

Protesting Superintelligence: OpenAI CEO Sam Altman Talks Guardrails with a Protestor

Does Altman have a chance of getting this group to agree? Stewart says the OpenAI CEO came out to talk to the protestors after his time onstage but wasn’t able to change Stewart’s mind. He says they chatted for a minute or so about OpenAI’s approach to safety, which involves simultaneously developing the capabilities of AI systems along with guardrails.

Stewart said he left that conversation slightly more worried than before. “I don’t know what information he has that makes him think that will work.”

“We have published a lot of ideas on how alignment of superintelligent systems works and I believe that the problem is a technically solvable one. I feel more confident in that answer now than I did a few years ago. I hope that we avoid the paths that I don’t think are very good. But honestly, I’m pretty happy about the trajectory things are currently on.”

“My basic model of the world is that the cost of intelligence and the cost of energy are the two limited inputs, sort of the two limiting reagents of the world. And if you can make those dramatically cheaper, dramatically more accessible, that does more to help poor people than rich people, frankly,” he said. He argues that this technology will lift up the entire world.

Still, Altman said he is optimistic, even very hopeful, about the future. He believes that current artificial intelligence tools will reduce inequality in the world and that there will be more jobs on the other side of the technological revolution.

According to OpenAI’s critics, this talk of regulating superintelligence, otherwise known as artificial general intelligence, or AGI, is a rhetorical feint — a way for Altman to pull attention away from the current harms of AI systems and keep lawmakers and the public distracted with sci-fi scenarios.

He thinks most people agree that there should be some global rules on how to build a superintelligence if someone cracks the code. “I’d like to make sure we treat this at least as seriously as we treat, say, nuclear material, for the megascale systems that could give birth to superintelligence.”

“Look, maybe he’s selling a grift. I sure as hell hope he is,” one of the protestors, Gideon Futerman, a student at Oxford University studying solar geoengineering and existential risk, said of Altman. “But in that case, he’s hyping up systems with enough known harms that we probably should be putting a stop to them anyway. And if he’s right and he’s building systems which are generally intelligent, then the dangers are far, far, far bigger.”

He also said that it will be harder to mitigate harmful impacts once the company releases open-source models to the public, as it announced it would several weeks ago, since the techniques OpenAI applies to its own systems will not work the same way on models run by others.

It’s important that artificial intelligence isn’t over-regulated while the technology is still evolving, according to Altman. The European Parliament is debating legislation that could affect how companies develop models, and possibly establish an artificial intelligence office to oversee compliance. The UK, however, has decided to spread responsibility for AI between existing regulators, including those covering human rights, health and safety, and competition, instead of creating a dedicated oversight body.
