OpenAI: A For-Profit, Not a Nonprofit
Weeks after releasing a new model it claims can “reason,” OpenAI is barreling toward dropping its nonprofit status, some of its most senior employees are leaving, and CEO Sam Altman — who was once briefly ousted over apparent trust concerns — is solidifying his position as one of the most powerful people in tech.
OpenAI's structure puts it at a fundraising disadvantage, and Altman told employees earlier this month that the company plans to restructure into a for-profit next year. It is considering becoming a public benefit corporation, similar to Anthropic, in order to raise funds, and Altman is reportedly set to receive a 7 percent equity stake. (He almost immediately denied this in a staff meeting, calling it "ludicrous.")
There's already evidence that OpenAI prioritizes fast launches over cautious ones: a source told The Washington Post in July that the company threw a launch party for GPT-4o "prior to knowing if it was safe to launch." The Wall Street Journal reported on Friday that safety staffers worked 20-hour days and didn't have time to double-check their work. GPT-4o shipped even though initial test results indicated it wasn't safe to deploy.
OpenAI, a Nonprofit Organization Formed to Advance Artificial Intelligence and Protect Safety in the 21st Century
Research labs operate on longer time horizons than companies. They can delay product releases when necessary and face less pressure to launch quickly and scale up. Perhaps most importantly, they can be more conservative about safety.
Since last year, OpenAI's leadership has seen almost total turnover. Besides Altman himself, the last remaining member of the team featured on a September 2023 Wired cover is president and cofounder Greg Brockman, who backed Altman during the coup. He has been on a personal leave of absence since August and isn't expected to return until next year. The same month he took leave, another cofounder and key leader, John Schulman, left to work for Anthropic.
OpenAI started as a nonprofit lab and later grew a for-profit subsidiary, OpenAI LP. The for-profit arm can raise funds to build artificial general intelligence (AGI), but the nonprofit’s mission is to ensure AGI benefits humanity.
Investor profits are capped at 100x, and excess returns flow to the nonprofit, a structure meant to help the organization prioritize societal benefit over financial gain. And if the for-profit side strays from that mission, the nonprofit side can intervene.
A Realist Perspective on What We've Learned From the Past, and What We Haven't
I'm also wary of the supposed bonanza that will come when all of our big problems are solved. Let's concede that AI may actually be able to solve some of humanity's biggest problems. We humans would still have to implement those solutions, and that's where we've failed time and again. We don't need a large language model to tell us that war is hell and that we shouldn't kill each other. Wars keep happening anyway.
Altman is a big fan of universal basic income, which he seems to think will cushion the blow of lost wages. AI may be able to come up with a plan, but there isn't any evidence that people with lots of money will be comfortable funding it. Right now, some of those kind souls of the Playa are up in arms about a tax proposal that would affect only people worth over $100 million. It's a dubious premise that such people, or others who become super rich working at AI companies, will crack open their coffers to fund leisure time for the masses. One of the US's major political parties can't even stand Medicaid; imagine how its populist demagogues would regard UBI.
Life will not be the same when jobs go the way of 18th-century lamplighters. We did get a hint of his vision this week on a podcast that asks tech luminaries and celebrities to share their Spotify playlists. Explaining why he chose the tune "Underwater" by Rüfüs du Sol, Altman said it was a tribute to Burning Man, which he has attended several times. The festival, he says, "is part of what the post-AGI can look like, where people are just focused on doing stuff for each other, caring for each other and making incredible gifts for each other."
The march of technology has brought what used to be luxuries to everyday people, including some that were unavailable to pharaohs and lords. Charlemagne never enjoyed air-conditioning. Working-class people, and even some on public assistance, have dishwashers, giant-screen TVs, iPhones, and delivery services that bring pumpkin lattes and pet food to their doors. But Altman isn't telling the whole story. Despite massive wealth, not everyone is thriving, and many are homeless or severely impoverished. Paradise isn't evenly distributed, to paraphrase William Gibson. That doesn't mean technology has failed, but its benefits clearly haven't reached everyone, and I suspect the same will be true if AGI arrives and automates away so many jobs.
Source: No, Sam Altman, AI Won’t Solve All of Humanity’s Problems
Sam Altman vs. the Skeptics: Is Artificial Intelligence Just Predicting the Next Token?
No matter what you think of Sam Altman, it’s indisputable that this is his truth: Artificial general intelligence–AI that matches and then exceeds human capabilities–is going to obliterate the problems plaguing humanity and usher in a golden age. I suggest we dub this deus ex machina concept The Strawberry Shortcut, in honor of the codename for OpenAI’s recent breakthrough in artificial reasoning. Like the shortcake, the premise looks appetizing but is less substantial in the eating.
He may have published this vision partly to challenge the idea that large language models are nothing more than an illusion. He says nuh-uh: we're getting this big AI bonus because "deep learning works," as he said in an interview later in the week, mocking those who dismiss programs like OpenAI's GPT-4o as stupid engines simply delivering the next token in a queue. "Once it can start to prove unproven mathematical theorems, do we really still want to debate: 'Oh, but it's just predicting the next token?'" he said.