
There is a wild claim at the heart of the OpenAI lawsuit

Wired: https://www.wired.com/story/wild-claim-at-the-heart-of-elon-musks-openai-lawsuit/

OpenAI, Musk, and the First Cause of Action: Was There Ever a Contract?

There is no actual agreement there. Maybe it is true that OpenAI’s byzantine corporate structure, in which a nonprofit owns a for-profit corporation, subverts the ideals laid out in the founding documents described in the complaint, but Musk cannot sue over that, because none of it is a contract.

A big part of the case turns on a technical claim: that OpenAI has developed machines that can outsmart humans, a capability often referred to as artificial general intelligence.

Someone ought to press OpenAI on that claim in court. Sadly, Musk is not that person, and his lawyers have figured out that letting the world’s richest man rack up billable hours filing nonsensical lawsuits is more lucrative than fitting the “facts” to the “law,” or whatever it is regular lawyers do.

Let’s just take the very first cause of action of the lawsuit, for example. It is a claim for breach of contract — a very, very simple claim that almost any first-year law student can evaluate, because step one is asking if there is a contract, and step two is figuring out what the contract says. To have a valid contract, you need an offer, acceptance, and an exchange of value — what lawyers are trained to call “consideration,” in an enduring effort to make simple concepts sound confusing and increase fees.

Most importantly, contracts should be written down: proving that an unwritten contract exists, what its terms are, and whether those terms are enforceable is extraordinarily difficult, and courts do not like doing it, especially for ultra-sophisticated parties with a long history of dealing with each other.

The Founding Agreement is documented in many places, among them OpenAI, Inc.’s founding certificate of incorporation and numerous written communications between the two parties over a period of years.

The specific purpose of this corporation is to provide funding for research, development and distribution of technology related to artificial intelligence. The resulting technology will benefit the public, and the corporation will seek to open-source technology for the public benefit. The corporation is not organized for the private gain of any person.

Musk’s Case Against OpenAI’s For-Profit Turn

I asked a few lawyer friends if any of it was a contract, and most of them made faces. This tracks with Musk’s increasingly fuzzy understanding of how contracts work; just yesterday a judge told lawyers for X that its breach of contract case against the Center for Countering Digital Hate involved “one of the most vapid extensions of law I’ve ever heard.”

This entire complaint is more like a 1L exam question than a real lawsuit — to the extent that the second cause of action is something called “promissory estoppel,” a concept that sets the hearts of law professors aflame and which comes up in the real world approximately never. The important thing to know is that the richest person in the world is now trying to tell a court that he somehow detrimentally relied upon the promises of a nonprofit when he donated millions of dollars to it with no written contract. This is, at the very least, extremely funny.

This case will be very good for law schools across the country, because OpenAI is almost certain to respond with another 1L classic: a motion to dismiss for failure to state a claim.

That structure, in which the nonprofit’s board oversees the for-profit arm, is a built-in source of tension. The dueling sides were on display last year when the board ousted Altman and then brought him back.

According to the suit, Altman approached Musk in 2015 over shared concerns about the risks of artificial intelligence, and specifically about DeepMind, a research lab owned by Google.

The complaint argues that the company went astray in recent years after deciding to create a for-profit subsidiary, give Microsoft an exclusive license to some of its technology, and keep the internal design of ChatGPT’s latest version secret.

Musk has been openly part of the backlash. Last year, he told then-Fox News host Tucker Carlson that ChatGPT has a liberal bias and that he planned to offer an alternative.

Musk’s AI company, xAI, offers a limited number of users in the U.S. the opportunity to try its chatbot prototype, Grok, and provide feedback, though early access requires a paid subscription to another Musk company, X, formerly known as Twitter.

Openness in Large Language Models: GPT-4 Versus Meta’s Llama 2

There are different levels of openness in artificial intelligence, depending on how much of a model’s inner workings is shared with researchers and the public. Proponents of open source argue that the approach allows greater transparency and more room for innovation; opponents counter that it makes powerful AI models readily accessible to criminals or adversaries. GPT-4 is not free to download, modify, or deploy, while Meta’s Llama 2 is.

“I have the sense that most of us researchers on the ground think that large language models [like GPT-4] are a very significant tool for allowing humans to do much more but that they are limited in ways that make them far from stand-alone intelligences,” says Michael Jordan, a professor at UC Berkeley and an influential figure in the field of machine learning.
