
The biggest flaw of ChatGPT is its most charming trick

Wired: https://www.wired.com/story/openai-chatgpts-most-charming-trick-hides-its-biggest-flaw/

ChatGPT, OpenAI's chatbot combining text generation and artificial intelligence: how to avoid hallucination in GPT-4

ChatGPT is built on a new variant of GPT called GPT-3.5, which can take a naturally phrased question and answer it. The powerful AI model now has a compelling interface that just about anyone can use, after a tweak unlocked a new capacity to respond to all kinds of questions. The fact that OpenAI has thrown the service open for free, and that its glitches can be good fun, also helped fuel the chatbot's viral debut, similar to how some tools for creating images using AI have proven ideal for meme-making.

Reddy, CEO of Abacus.AI, a company that develops artificial-intelligence tools for programmers, was enamored with ChatGPT's ability to answer requests for definitions of love or creative new cocktail recipes. Her company is already exploring how to use ChatGPT to help write technical documents. She has tested it and says it works great.

OpenAI has not released full details on how it gave its text generation software a naturalistic new interface, but the company shared some information in a blog post. It says the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
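For readers who want a concrete picture of that two-stage idea, here is a minimal, purely illustrative sketch: first imitate human-written answers, then nudge the model further with a scalar reward. It is not OpenAI's implementation, and every name in it (ToyPolicy, reward_model and so on) is hypothetical.

```python
# Illustrative sketch only, not OpenAI's actual training pipeline:
# (1) imitate human-written demonstrations, (2) adjust further with a
# reward signal ("simulated reward and punishment").
import random

class ToyPolicy:
    """A toy 'policy': a preference score per candidate answer per prompt."""
    def __init__(self, prompts, candidates):
        # Start with uniform preference scores for every candidate answer.
        self.scores = {p: {c: 1.0 for c in candidates[p]} for p in prompts}

    def sample(self, prompt):
        # Sample an answer proportionally to its current score.
        answers = list(self.scores[prompt])
        weights = [self.scores[prompt][a] for a in answers]
        return random.choices(answers, weights=weights, k=1)[0]

    def reinforce(self, prompt, answer, reward, lr=0.5):
        # Raise or lower the sampled answer's score, never below a small floor.
        self.scores[prompt][answer] = max(
            1e-3, self.scores[prompt][answer] + lr * reward
        )

def supervised_finetune(policy, demonstrations):
    # Stage 1: boost the human-written answer for each prompt.
    for prompt, human_answer in demonstrations.items():
        policy.reinforce(prompt, human_answer, reward=2.0)

def rl_finetune(policy, prompts, reward_model, steps=200):
    # Stage 2: sample answers and reinforce them according to a reward model.
    for _ in range(steps):
        prompt = random.choice(prompts)
        answer = policy.sample(prompt)
        policy.reinforce(prompt, answer, reward_model(prompt, answer))

if __name__ == "__main__":
    prompts = ["define love"]
    candidates = {"define love": ["a strong feeling of affection",
                                  "a kind of sandwich"]}
    demos = {"define love": "a strong feeling of affection"}
    # Hypothetical reward model: +1 for the sensible answer, -1 otherwise.
    reward_model = lambda p, a: 1.0 if "affection" in a else -1.0

    policy = ToyPolicy(prompts, candidates)
    supervised_finetune(policy, demos)
    rl_finetune(policy, prompts, reward_model)
    print(policy.scores)
```

In OpenAI's description the "policy" is the language model itself and the reward signal is derived from human feedback; the toy above keeps only the shape of that loop.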

The system seems likely to widen the pool of people who can use such tools, according to Jacob Andreas, an assistant professor at MIT. He says it is presented in a familiar interface that causes you to apply the mental model you are used to applying to other agents, like humans.

And with those abilities come concerns. For instance, could GPT-4 allow dangerous chemicals to be made? OpenAI engineers fed problematic answers back into their model to discourage GPT-4 from creating dangerous, illegal or damaging content, according to White.

False information is another problem. Luccioni says that models like GPT-4, which exist to predict the next word in a sentence, cannot be cured of coming up with fake facts, a behavior known as hallucinating. "You can't rely on these kinds of models because there's so much hallucination," she says. And although OpenAI says it has improved safety in GPT-4, this remains a concern in the latest version, she says.
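To see why pure next-word prediction can produce confident nonsense, here is a toy bigram "language model". It is not how GPT-4 works internally; the tiny corpus and generation routine are illustrative assumptions only.

```python
# A toy bigram model: it only learns which word tends to follow which,
# so it can chain together fluent-sounding but unverified statements.
import random
from collections import defaultdict

corpus = [
    "the capital of france is paris",
    "the capital of peru is lima",
    "paris is a large city",
    "lima is a large city",
]

# Count word -> next-word transitions.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

def generate(start, length=6):
    # Repeatedly pick a plausible next word; no step checks factual truth.
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Can emit e.g. "the capital of france is lima": locally plausible, globally false.
print(generate("the"))
```

Each step is locally reasonable, yet nothing in the procedure checks whether the whole sentence is true; that gap is the hallucination problem in miniature.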

Luccioni was not satisfied by the assurances made about safety, because researchers have no access to the data used for training. "You don't know what the data is. So you can't make it better. I mean, it's just completely impossible to do science with a model like this," she says.

The mystery about how GPT-4 was trained is a concern for van Dis's colleague at the University of Amsterdam, psychologist Claudi Bockting. "It's very hard as a human being to be accountable for something that you cannot oversee," she says. "One of the concerns is they could be far more biased than, for instance, the bias that human beings have by themselves." Without being able to access the code behind GPT-4, it is impossible to see where the bias might have originated, or to remedy it, Luccioni explains.

Van Dis, Bockting and colleagues argued earlier this year that there is an urgent need to develop a set of 'living' guidelines to govern how AI and tools such as GPT-4 are used and developed. They are concerned that legislation won't keep up with the pace of development. The University of Amsterdam is to host an invitational summit with representatives from UNESCO, the World Economic Forum and other organizations on 11 April to discuss these concerns.
