
OpenAI wants GPT-4 to solve the problem of moderation

Source: [New York Times considers legal action against OpenAI as copyright tensions swirl](https://www.npr.org/2023/08/16/1194202562/new-york-times-considers-legal-action-against-openai-as-copyright-tensions-swirl)

OpenAI wants GPT-4 to solve the content moderation dilemma: using the company's own AI to police AI-generated content

OpenAI may help tackle a problem caused by its own technology. Generative AI tools, such as the company's image creator DALL-E, make it easy to spread misinformation on social media. And even though OpenAI has promised to make ChatGPT more truthful, GPT-4 still produces news-related falsehoods.

OpenAI says GPT-4 can help develop a new content policy within hours, whereas the usual process of drafting, labeling, gathering feedback, and refining takes weeks or months. The company also points to the well-being of moderators, who are continually exposed to harmful content such as videos of child abuse or torture.
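The iteration loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not OpenAI's actual system: the policy text, function names, and labels are assumptions, and the GPT-4 call is replaced by a stub so the sketch runs without an API key. In practice the policy and each example would be sent to the model, and disagreements with expert labels would drive the next policy revision.

```python
# Hypothetical sketch of policy iteration with an LLM labeler.
# The model call is stubbed; a real version would query GPT-4 with
# the policy text plus the post and parse the returned label.

POLICY_V1 = "Label a post UNSAFE if it promotes violence; otherwise SAFE."

def model_label(policy: str, post: str) -> str:
    """Stand-in for a GPT-4 call: apply the policy to one post."""
    # Crude keyword heuristic, purely so the sketch is runnable.
    return "UNSAFE" if "attack" in post.lower() else "SAFE"

def find_disagreements(policy, examples):
    """Compare model labels against expert labels; each mismatch
    flags an ambiguity in the policy wording to clarify."""
    return [
        (post, expert, model_label(policy, post))
        for post, expert in examples
        if model_label(policy, post) != expert
    ]

# A tiny golden set labeled by human experts.
golden = [
    ("Let's attack this problem together", "SAFE"),  # figurative use
    ("I will attack him tomorrow", "UNSAFE"),
]

for post, expert, model in find_disagreements(POLICY_V1, golden):
    # Each disagreement drives a policy edit, then the loop repeats.
    print(f"policy gap: {post!r} expert={expert} model={model}")
```

Here the figurative use of "attack" surfaces as a disagreement, which is exactly the kind of gray-area case a human would then resolve by tightening the policy wording before the next round.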

Content moderation remains a challenge after two decades of modern social media and online communities. Meta, Google, and TikTok rely on armies of moderators who must sift through dreadful and often traumatizing content. Most are located in developing countries with lower wages, work for outsourcing firms, and struggle with their mental health while receiving only minimal mental health care.

Source: [OpenAI wants GPT-4 to solve the content moderation dilemma](https://tech.newsweekshowcase.com/the-content-moderation-dilemma-is-solved-by-gpt-4/)

Why The New York Times is considering legal action against OpenAI: clickworkers, copyright, and the future of news

OpenAI relies on human labor from clickworkers. Thousands of people, many in African countries such as Kenya, annotate and label content. The texts can be disturbing, the work is stressful, and the pay is poor.

In particular, the gray area of misleading, false, and aggressive content that isn't necessarily illegal poses a great challenge for automated systems. Even humans struggle to classify such posts, and machines frequently get them wrong. The same applies to depictions of crimes or police brutality.

Lawyers for The Times believe OpenAI’s use of the paper’s articles to spit out descriptions of news events should not be protected by fair use, arguing that it risks becoming something of a replacement for the paper’s coverage.

The Times is concerned that the AI tool could, in effect, become a competitor by answering questions based on the original reporting of the paper's staff.

If, when someone searches online, they are served a paragraph-long answer from an AI tool that relies on reporting from The Times, the need to visit the publisher’s site is greatly diminished, said one person involved in the talks.

If a judge finds that OpenAI illegally copied articles from The Times, the court could order the company to destroy the dataset containing that material.

“Copyright holders see these instances as reckless, and AI companies see it as gutsy,” Vanderbilt’s Gervais said. “As always, the final answer will be determined by who ends up winning these lawsuits.”

The Times CEO said in June that tech companies should pay for using the paper’s archives.

In January, the paper’s chief product officer and a deputy managing editor wrote a memo to staff about an internal initiative to capture the potential benefits of artificial intelligence.

Was Andy Warhol's altered photo of Prince fair use? A high-stakes opinion with implications for AI companies' fair use defense

Comedian Sarah Silverman claims she never gave approval for the company to take a digital version of her 2010 memoir “The Bedwetter” from an illegal online library.

Legal experts say AI companies are likely to invoke what is known as the “fair use doctrine,” which allows the use of a work without permission in certain instances, including teaching, criticism, research, and news reporting.

The court decided that the digital library of books didn’t compete with the original works because it didn’t create a significant market substitute.

In it, the high court found that Andy Warhol was not protected by the fair use doctrine when he altered a photograph of Prince taken by Lynn Goldsmith, since both Warhol and Goldsmith were licensing the images to magazines.

Fair use is weaker, the court wrote, where the original and the copied work share “the same or highly similar purposes, or where wide dissemination of a secondary work would otherwise run the risk of substitution for the original or licensed derivatives of it.”

Writers cannot put confidential information into artificial intelligence tools or use AI-generated material in place of reported sources. AP staff are asked to guard against accidentally using AI content created to spread misinformation and to verify the veracity of the content they use.
