
ChatGPT has also been described as a progress trap.

The Verge: https://www.theverge.com/2022/12/14/23508756/google-vs-chatgpt-ai-replace-search-reputational-risk

How can AI chatbots help people with mental health? A case study of Wysa, a multilingual AI chatbot that uses natural language processing

In a world where there aren’t enough services to meet demand, chatbots are probably a “good-enough move,” says Ilina Singh, professor of neuroscience and society at the University of Oxford. These chatbots may simply be a new, accessible way of presenting information on coping with mental health issues that is already freely available on the internet. “For some people, it’s going to be very helpful, and that’s terrific and we’re excited,” says John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Massachusetts. For others, it won’t be.

Artificial therapists can lend a robotic ear at any time of the day or night. They are cheap, if not free, and cost is often one of the biggest barriers to accessing help. Some people also feel more comfortable revealing their feelings to a machine than to a person.

Take Wysa, for example. The “emotionally intelligent” AI chatbot launched in 2016 and now has 3 million users. A randomized controlled trial is under way to see whether the app can help the millions of people on waiting lists for specialist mental health care. In 2020, the government of Singapore licensed the app to provide free support to its population during the COVID-19 pandemic. And in June 2022, Wysa received a breakthrough device designation from the US Food and Drug Administration (FDA) for treating depression, anxiety, and chronic musculoskeletal pain, a designation intended to fast-track testing and approval of the product.

Google has developed a number of large AI language models (LLMs) equal in capability to OpenAI’s ChatGPT. BERT, MUM, and LaMDA are all used to improve its search engine. Such improvements are subtle, though, and focus on parsing users’ queries to better understand their intent. MUM, for example, helps the search engine recognize when a query suggests a user is going through a personal crisis, and directs them to helplines and information from groups like the Samaritans. Google has also launched apps like AI Test Kitchen to give users a taste of its AI chatbot technology, but has constrained interactions with users in a number of ways.
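To make the idea of intent parsing concrete, here is a minimal, hypothetical sketch of how a search backend might flag crisis-related queries and surface helpline information. It is not Google’s MUM pipeline; the off-the-shelf model, candidate labels, threshold, and helpline message are all assumptions made for illustration.

```python
# Hypothetical sketch: flag crisis-related search queries with an
# off-the-shelf zero-shot classifier and route them to helpline info.
# This is NOT Google's MUM pipeline; model, labels, and threshold are
# illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

CANDIDATE_LABELS = ["personal crisis", "medical question", "general information"]
HELPLINE_NOTICE = "Help is available: consider contacting a helpline such as the Samaritans."

def route_query(query: str, threshold: float = 0.7) -> str:
    """Return a helpline notice for likely-crisis queries, otherwise proceed normally."""
    result = classifier(query, candidate_labels=CANDIDATE_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label == "personal crisis" and top_score >= threshold:
        return HELPLINE_NOTICE
    return "show ordinary search results"

print(route_query("i don't want to be here anymore"))
```

The point of the sketch is only that intent classification sits in front of ranking: the query is scored against a small set of intents, and a high-confidence crisis label changes what the results page shows first.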

The use of BERT, one of the first large language models developed by Google, to improve search results is one of the most celebrated applications of artificial intelligence. Yet when a user searched for how to handle a seizure, the results surfaced actions that should never be taken, such as holding the person down or putting something in the person’s mouth, as if they were recommended steps. Anyone following Google’s directions would thus have been told to do the exact opposite of what medical professionals advise, potentially resulting in death.

Additionally, the creators of such models admit to the difficulty of addressing inappropriate responses that “do not accurately reflect the contents of authoritative external sources.” These models have generated, for example, a research paper on the benefits of eating crushed glass and text on how crushed porcelain added to breast milk can support the infant digestive system. In fact, Stack Overflow had to temporarily ban ChatGPT-generated answers as it became evident that the LLM generates convincingly wrong answers to coding questions.

Yet, in response to this work, there are ongoing asymmetries of blame and praise. Model builders and tech evangelists alike attribute impressive and seemingly flawless output to a mythically autonomous model. The human decision-making involved in model development is erased, and a model’s feats are treated as independent of the design and implementation choices of its engineers. But without naming and recognizing those engineering choices, it becomes almost impossible to acknowledge the related responsibilities. As a result, functional failures and discriminatory outcomes alike are framed as devoid of engineering choices and blamed on society at large or on supposedly “naturally occurring” datasets, factors the developers will claim they have little control over. But it’s undeniable that they do have control, and that none of the models we are seeing now are inevitable. Entirely different choices could have been made, resulting in entirely different models being released.

OpenAI vs. Google, and the Price of Internet Service: Joshua Browder Demonstrates a Bot That Negotiates With Comcast

OpenAI, for its own part, seems to be trying to damp down expectations. As CEO Sam Altman recently tweeted: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

“We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we’ve been using them to date,” said Dean. “But, it’s super important we get this right.” Pichai added that Google has “a lot” planned for AI language features in 2023, and that “this is an area where we need to be bold and responsible so we have to balance that.”

OpenAI, too, was previously relatively cautious in developing its LLM technology, but changed course with the launch of ChatGPT, throwing access open to the public. The result has been a storm of beneficial publicity and hype for OpenAI, even as the company eats huge costs keeping the system free to use.

Rival tech companies will be wondering whether launching an AI-powered search engine is a risk worth taking if it offers a chance to steal market share from an incumbent like Google. After all, if you’re new on the scene, “reputational damage” isn’t much of an issue.

Joshua Browder, the CEO of DoNotPay, a company that automates administrative chores such as disputing parking fines and requesting compensation from airlines, this week released a video of a chatbot negotiating down the price of internet service on a customer’s behalf. The negotiation bot was built using the latest in artificial intelligence. It complains about poor internet service and parries the points made by a Comcast agent in an online chat, successfully negotiating a discount worth $120 annually.

DoNotPay used GPT-3, the language model behind ChatGPT, which OpenAI makes available to programmers as a commercial service. The company tailored GPT-3 by training it on successful negotiating techniques as well as legal information. Browder wants to automate much more than just talks with the internet company. “The real value for the consumer is if we could save them $5,000 on their medical bill,” he says.
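To show roughly what such an integration looks like, here is a minimal, hypothetical sketch of a GPT-3-backed negotiation bot built on OpenAI’s commercial completions API. The prompt wording, model choice, and helper function are assumptions made for illustration, not DoNotPay’s actual implementation, which the company says also involves training on negotiation transcripts and legal information.

```python
# Hypothetical sketch of a GPT-3-backed negotiation bot, in the spirit of the
# DoNotPay demo. Prompt, model name, and tactics are illustrative assumptions,
# not DoNotPay's code. Uses the v0.x openai client from the GPT-3 era.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

NEGOTIATION_PRIMER = (
    "You are negotiating a lower monthly price for home internet service. "
    "Be polite but firm, cite recent outages, mention competitor offers, "
    "and always ask for a specific discount."
)

def negotiation_reply(chat_history: str, agent_message: str) -> str:
    """Generate the bot's next message in an ongoing chat with a support agent."""
    prompt = (
        f"{NEGOTIATION_PRIMER}\n\n"
        f"Conversation so far:\n{chat_history}\n"
        f"Agent: {agent_message}\n"
        f"Customer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",   # GPT-3-class completion model
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
        stop=["Agent:"],            # stop before the model speaks for the agent
    )
    return response.choices[0].text.strip()

# Example turn
print(negotiation_reply(
    chat_history="Customer: My internet has dropped out three times this month.",
    agent_message="I'm sorry to hear that. Your current plan is $80 per month.",
))
```

The design point worth noting is that all of the “negotiating skill” lives in the prompt and in whatever tailoring sits behind the model; the surrounding code is just a thin loop that relays messages between the support chat and the API.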

The Long Road to Death by Artificial Intelligence: Large Language Models, Alignment, and Jacob Steinhardt’s Forecasts

We may see our first death by artificial intelligence in 2023. Causality will be hard to prove: was it really the words of the chatbot that put the murderer over the edge? Nobody will know for sure. But the perpetrator will have spoken to a chatbot, and the chatbot will have encouraged the act. Or perhaps a chatbot will break someone’s heart so badly that they feel compelled to take their own life. (Already, some chatbots are making their users depressed.) The chatbot in question may carry a warning label saying it is for entertainment purposes only, but dead is dead.

GPT-3, the best-known large language model, has already urged at least one user to commit suicide, albeit under the controlled circumstances in which French startup Nabla (rather than a naive user) assessed the utility of the system for health care purposes. Things started off well, but quickly deteriorated.

There is a lot of talk about “AI alignment” these days, that is, getting machines to behave in ethical ways, but there is no convincing way to do it. A recent DeepMind article on the ethical and social risks of harm from language models reviewed 21 separate risks from current models, but as one memorable headline put it, DeepMind has no idea how to make AI less toxic; to be fair, neither does any other lab. Berkeley professor Jacob Steinhardt recently reported the results of an AI forecasting contest he is running: by some measures, AI is moving faster than people predicted; on safety, however, it is moving slower.

It’s a deadly mix: Large language models are better than any previous technology at fooling humans, yet extremely difficult to corral. Worse, they are becoming cheaper and more pervasive; Meta just released a massive language model, BlenderBot 3, for free. The adoption of these systems will likely be widespread despite their flaws.

Source: https://www.wired.com/story/large-language-models-artificial-intelligence/

Product Liability After the Fact, but Little to Prevent Wide Deployment

Product liability lawsuits may come after the fact, but nothing currently prevents these systems from being used widely.
