OpenAI wants GPT-4 to solve the content moderation dilemma


AI Companies Must Prove Their AI Is Safe, Says Nonprofit Group

As lawmakers continue to meet with AI companies, fueling fears of regulatory capture, Accountable Tech and its partners suggested several bright-line rules: policies that are clearly defined and leave no room for subjectivity.

The group sent the framework to politicians and government agencies in the US this month, asking them to consider it while crafting new laws and regulations around artificial intelligence.

The Zero Trust AI Governance framework rests on three principles: enforce existing laws; create bold, easily implemented bright-line rules; and put the burden on companies to prove their AI is not harmful at every phase of its lifecycle. The group's definition of artificial intelligence covers both generative AI and the foundation models that enable it.

Accountable Tech co-founder Jesse Lehrich says the group wanted to get the framework out now because the technology is evolving quickly while new laws cannot move at that speed.

Lehrich pointed to the Federal Trade Commission’s investigation into OpenAI as an example of existing rules being used to discover potential consumer harm. Other government agencies have also warned AI companies that they will be closely monitoring the use of AI in their specific sectors.

Discrimination and bias in AI are problems researchers have warned about for years. A Rolling Stone article detailed how experts such as Timnit Gebru sounded the alarm for years, only to be ignored by the companies that employed them.

Source: AI companies must prove their AI is safe, says nonprofit group

AI Companies Must Prove Their AI Is Safe, Says Nonprofit Group

“The idea behind Section 230 makes sense in broad strokes, but there is a difference between a bad review on Yelp because someone hates the restaurant and GPT making up defamatory things,” Lehrich says. (Section 230 was passed in part precisely to shield online services from liability over defamatory content, but there’s little established precedent for whether platforms like ChatGPT can be held liable for generating false and damaging statements.)

The proposed bright-line rules include prohibiting AI use for emotion recognition, predictive policing, facial recognition used for mass surveillance in public places, social scoring, and fully automated hiring, firing, and HR management. The group also asks for a ban on collecting or processing unnecessary amounts of sensitive data for a given service, on collecting biometric data in fields like education and hiring, and on “surveillance advertising.”

Accountable Tech urged lawmakers to prevent large cloud providers from owning or having a beneficial interest in large commercial artificial intelligence services, in order to limit Big Tech's influence over the AI ecosystem. Cloud providers such as Microsoft and Google already have an outsize influence on generative AI: OpenAI, the best-known generative AI developer, works with Microsoft, which has also invested in the company, while Google released its large language model Bard and is developing other AI models for commercial use.

The group wants an approach similar to the one used in the pharmaceutical industry, where companies submit to regulation before bringing a product to market; AI developers would likewise have to prove their models are safe before deploying them to the public.

The nonprofits do not call for a single new regulatory body. Splitting the rules across agencies may or may not make regulation more flexible, according to Lehrich.

Lehrich says it's understandable that smaller companies might balk at the amount of regulation the group seeks, but he believes there is room to tailor policies to company size. He adds that it is important to differentiate between the different stages of the AI supply chain.

Source: AI companies must prove their AI is safe, says nonprofit group

Is OpenAI ready to solve the content moderation dilemma? A comment on an article by Xiong Guo and Jiao Weiss

OpenAI may be able to help tackle a problem that its own technology has helped create. Generative AI such as ChatGPT or the company's image generator, DALL-E, makes it much easier to create misinformation at scale and spread it on social media. Although OpenAI has promised to make ChatGPT more truthful, GPT-4 still willingly produces news-related falsehoods and misinformation.

GPT-4 can help develop a new content policy within hours, a process that usually takes weeks or months of drafting, labeling examples, and gathering feedback. OpenAI also points to the well-being of the workers who are continually exposed to harmful content, such as videos of child abuse or torture.
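
As a rough illustration of this kind of workflow, here is a minimal sketch in Python, assuming the official openai SDK (v1+) and a hypothetical draft policy with made-up example posts; GPT-4 labels each post against the policy, and the labels are compared with a human reviewer's decisions:

```python
# Minimal sketch of policy-based labeling with GPT-4 (assumes the openai
# Python SDK v1+ and an OPENAI_API_KEY in the environment).
# The policy text, example posts, and human labels are hypothetical.
from openai import OpenAI

client = OpenAI()

DRAFT_POLICY = """Label a post ALLOW or REMOVE.
REMOVE if it contains threats of violence or targeted harassment; otherwise ALLOW."""

examples = [
    {"post": "I disagree with this restaurant review.", "human_label": "ALLOW"},
    {"post": "Someone should hurt this journalist.", "human_label": "REMOVE"},
]

def model_label(post: str) -> str:
    """Ask GPT-4 to apply the draft policy to a single post."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": DRAFT_POLICY},
            {"role": "user", "content": f"Post: {post}\nAnswer with ALLOW or REMOVE only."},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Compare model labels with human labels; disagreements point at wording in
# the draft policy that needs to be clarified in the next iteration.
for ex in examples:
    label = model_label(ex["post"])
    if label != ex["human_label"]:
        print(f"Disagreement on {ex['post']!r}: model={label}, human={ex['human_label']}")
```

Disagreements between the model and the human reviewer highlight ambiguous wording in the policy, which is what allows a draft-label-refine loop to run in hours rather than weeks.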

After nearly two decades of modern social media and even more years of online communities, content moderation remains one of the most difficult challenges for online platforms. Meta, TikTok, and many other platforms rely on armies of reviewers to screen traumatizing content. Most of them are located in low-income countries and struggle with the mental health toll while receiving only minimal mental health care.

Source: OpenAI wants GPT-4 to solve the content moderation dilemma

OpenAI's Human Clickworkers: Where Humans and Machines Still Get It Wrong

OpenAI relies heavily on human clickworkers, many of them in African countries such as Nigeria and Uganda. The work is poorly paid, and the texts they have to review can be disturbing.

The gray area of misleading, wrong, and aggressive content is a major challenge for automated systems. Humans and machines often cannot agree on the meaning of a post, and machines frequently get it wrong. The same applies to satire and to images and videos that document crimes or police brutality.
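
One way to make that disagreement concrete is to measure inter-rater agreement between human moderators and an automated classifier. The following minimal sketch, assuming scikit-learn and made-up labels, computes Cohen's kappa for a handful of gray-area posts:

```python
# Hypothetical example: quantifying how often an automated classifier and a
# human moderator agree on gray-area posts, using Cohen's kappa
# (assumes scikit-learn is installed; the labels below are made up).
from sklearn.metrics import cohen_kappa_score

human_labels   = ["allow", "remove", "allow", "remove", "allow", "allow"]
machine_labels = ["allow", "allow",  "allow", "remove", "remove", "allow"]

kappa = cohen_kappa_score(human_labels, machine_labels)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```

A low kappa on satire, documentation of violence, or borderline harassment is exactly the kind of signal that shows where automated moderation still needs human judgment.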