
Bard, a public demo of Google’s artificial intelligence chatbot, gave an incorrect response

Wired: https://www.wired.com/story/fast-forward-the-chatbot-search-wars-have-begun/

Generative AI’s Spring in Silicon Valley: Stability AI’s $101 Million Round and a Party with Sergey Brin

As the growing field of generative AI — or artificial intelligence that can create something new, like text or images, in response to short inputs — captures the attention of Silicon Valley, episodes like what happened to O’Brien and Roose are becoming cautionary tales.

Something weird is happening in the world of AI. A decade or so ago, the field burst forward thanks to the innovation of deep learning. That approach transformed AI and made many of our applications more useful, powering language translation, search, Uber routing, and just about everything with “smart” in its name. For the last dozen years we have been living through this artificial intelligence springtime. In the past year or so, a wave of generative models has arrived, delivering a huge aftershock to that earthquake.

Stability AI held a party in San Francisco last week to celebrate $101 million in new funding, which valued the company at $1 billion. Tech celebrities like Sergey Brin were present at the gathering.

Song works with Everyprompt, a startup that makes it easier for companies to use text generation. Like many contributing to the buzz, he says testing generative AI tools that make images, text, or code has left him with a sense of wonder at the possibilities. He says it has been a long time since he used a website or technology that made his life easier. “Using generative AI makes me feel like I’m using magic.”

Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web. That model, which is available as a commercial API for programmers, has already shown that it can answer questions and generate text very well some of the time. But getting the service to respond in a particular way required crafting the right prompt to feed into the software.
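
As a rough illustration of that prompt crafting, the sketch below calls the GPT-3 completions endpoint through OpenAI’s Python library as it existed at the time; the model name, prompt wording, and parameters are illustrative assumptions rather than anything the article prescribes.

```python
# Minimal sketch of steering GPT-3 with a carefully worded prompt.
# Assumes the pre-2023 `openai` Python package and an OPENAI_API_KEY
# environment variable; the prompt text and parameters are made-up examples.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "You are a concise assistant. Answer the question in two sentences.\n"
    "Question: Why does bread rise?\n"
    "Answer:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # one of the GPT-3 family models exposed via the API
    prompt=prompt,
    max_tokens=80,
    temperature=0.2,           # low temperature keeps the answer focused
)

print(response["choices"][0]["text"].strip())
```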

One chief executive, whose company builds a tool for programmers who use artificial intelligence, was drawn in by ChatGPT’s ability to answer questions about love or come up with cocktail recipes. Her company is already exploring how to use ChatGPT to help write technical documents. “We have tested it, and it works great,” she says.

OpenAI has not released full details on how it gave its text generation software a naturalistic new interface, but the company shared some information in a blog post. It says the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
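
To make that two-step recipe concrete, here is a deliberately tiny, self-contained sketch of the reward-and-punishment idea. The candidate answers, the reward function, and the update rule are all invented for illustration; it is nothing like OpenAI’s actual training pipeline, only a toy demonstration of reinforcement learning nudging a model toward better answers.

```python
# Toy sketch of reinforcement learning from feedback (NOT OpenAI's code).
# A tiny "policy" over three canned answers is rewarded for the good one
# and punished otherwise, so its preference shifts toward better answers.
import numpy as np

rng = np.random.default_rng(0)

answers = ["I don't know.", "Paris is the capital of France.", "The capital is Berlin."]
logits = np.zeros(len(answers))          # the model's preferences, updated below

def reward(answer: str) -> float:
    """Stand-in for a human-trained reward model: +1 for the good answer."""
    return 1.0 if "Paris" in answer else -1.0

learning_rate = 0.5
for step in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax over answers
    choice = rng.choice(len(answers), p=probs)      # sample an answer
    r = reward(answers[choice])
    # REINFORCE-style update: raise the probability of rewarded answers.
    grad = -probs
    grad[choice] += 1.0
    logits += learning_rate * r * grad

print("Learned preference:", answers[int(np.argmax(logits))])
```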

Jacob Andreas, an assistant professor who works on AI and language at MIT, says the system seems likely to widen the pool of people able to tap into AI language tools. “Here’s a thing being presented to you in a familiar interface that causes you to apply a mental model that you are used to applying to other agents—humans—that you interact with,” he says.

Joshua Browder, the CEO of DoNotPay, a company that automates administrative chores including disputing parking fines and requesting compensation from airlines, this week released video of a chatbot negotiating down the price of internet service on a customer’s behalf. The negotiator-bot was built on the AI technology that powers ChatGPT. It complains about poor internet service and parries the points made by a Comcast agent in an online chat, successfully negotiating a discount worth $120 annually.

GPT-3 is the language model behind ChatGPT, and it’s available to programmers as a commercial service. The company customized GPT-3 by training it on examples of successful negotiations as well as relevant legal information, Browder says. He hopes to automate negotiations with health insurers, which could save a consumer as much as $5,000 on a medical bill.
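
Customizing a GPT-3 model on domain examples is typically done through OpenAI’s fine-tuning workflow. The sketch below shows the general shape of that process with a made-up `negotiations.jsonl` training file; it is an assumption about the approach, not a description of DoNotPay’s actual system.

```python
# Hedged sketch of fine-tuning a GPT-3 base model on example negotiations,
# using the classic fine-tune endpoints of the pre-2023 `openai` package.
# The file name, prompt/completion format, and base model are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# negotiations.jsonl would hold lines such as:
# {"prompt": "Customer complaint: slow internet ...\n\n###\n\n",
#  "completion": " Request a loyalty discount and cite the advertised speed ..."}
upload = openai.File.create(
    file=open("negotiations.jsonl", "rb"),
    purpose="fine-tune",
)

job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",            # a GPT-3 base model available for fine-tuning
)

print("Fine-tune job started:", job["id"])
```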

ChatGPT is just the latest, most compelling implementation of a new line of language-adept AI programs created using huge quantities of text information scooped from the web, scraped from books, and slurped from other sources. Algorithms that have digested that training material can mimic human writing and answer questions by extracting useful information from it. Because they work from patterns in text rather than an understanding of the world, they are prone to producing fluent untruths.

“It’s an amazing time to be a founder,” QuickVid’s Habib says. “Every app out there is going to have some type of chat interface or LLM [large language model] integration, because it is cheap and easy to integrate … People are going to have to get very used to talking to AI.”

Alex Hanna sees a familiar pattern in these events: financial incentives to rapidly commercialize AI outweighing concerns about safety or ethics. There isn’t much money in responsibility or safety, she argues, but there is plenty in overhyping the technology.

Bard’s error highlighted the stakes for Google of integrating the same technology into its core search engine. Chatbot-style AI could change how people search the internet, but a wrong answer risks damaging the search engine’s reputation for reliable information.

For example, the query “Is it easier to learn the piano or the guitar?” would be met with “Some say the piano is easier to learn, as the finger and hand movements are more natural … Others think that it is more convenient to learn music on the guitar.” Pichai also said that Google plans to make the underlying technology available to developers through an API, as OpenAI is doing with ChatGPT, but did not offer a timeline.

The heady excitement inspired by ChatGPT has led to speculation that Google faces a serious challenge to the dominance of its web search for the first time in years. Microsoft, which recently invested around $10 billion in OpenAI, is holding a media event tomorrow about its work with ChatGPT’s creator, believed to involve new features for the company’s second-place search engine, Bing. OpenAI’s CEO Sam Altman tweeted a photo of himself with Microsoft CEO Satya Nadella shortly after Google’s announcement.

Other Google researchers who worked on the technology behind LaMDA became frustrated by Google’s hesitancy, and left the company to build startups harnessing the same technology. The advent of ChatGPT appears to have inspired the company to accelerate its timeline for pushing text generation capabilities into its products.

On February 8 at 8:30 am Eastern, the company will show off artificial intelligence integrations for its search engine at an event that is free to watch live on YouTube.

One of the main questions is whether generative artificial intelligence is ready to help you surf the web. The models are hard to keep up to date and expensive to run. Views of the technology are shifting as more people try out the tools, but artificial intelligence has not yet been proven to improve the consumer search experience.

Like ChatGPT, Bard is built on a large language model, which is trained on vast troves of data online in order to generate compelling responses to user prompts. Experts have long warned that these tools have the potential to spread inaccurate information.

In the demo, which was posted by Google on Twitter, a user asks Bard: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” One of Bard’s bullet points claimed that JWST captured the first pictures of a planet outside of our own solar system.

According to NASA, the first picture of an exoplanet, or any planet beyond our solar system, was taken by the Very Large Telescope in 2004.

Alphabet Shares Slide After Bard’s Error: How Artificial Intelligence Could Change Search, Apps, and Our Lives

Shares for Google-parent Alphabet fell as much as 8% in midday trading Wednesday after the inaccurate response from Bard was first reported by Reuters.

In a presentation Wednesday, an executive teased plans to use the technology to offer more complex responses to queries, such as providing bullet points on the best times to see various constellations, and offering pros and cons for buying an electric vehicle.

While these AI tools are still in their infancy, this week may represent the start of a new way of doing tasks, much as the iPhone changed computing and communication when it was introduced in June 2007, even if the change arrives in the form of Bing.

Baidu, China’s biggest search company, joined the fray by announcing its own ChatGPT competitor, Wenxin Yiyan (文心一言), or “Ernie Bot” in English. Baidu says it will release the bot after completing internal testing in March.

Executives in business casual wear trot up on stage and pretend that a few tweaks to the camera and processor make this year’s phone profoundly different from last year’s, or that bolting a touchscreen onto yet another product is bleeding edge.

But that changed radically this week. Some of the world’s biggest companies teased significant upgrades to their services, some of which are central to our everyday lives and how we experience the internet. The changes were powered by new artificial intelligence that allowed for more complex and interactive responses.

Critics say that a user’s long, unbroken engagement with the bot can lead to issues if the conversation is never paused.

OpenAI’s Shifting Release Process, and Why Generative AI May Not Be Another Virtual Reality

If the introduction of the smartphone defined the 2000s, Silicon Valley has since watched ambitious technologies like virtual reality get better and cheaper while still not being ready for everyday use.

OpenAI’s process for releasing models has changed in the past few years. Executives said GPT-2 was released in stages out of fear of misuse and its impact on society, a decision critics dismissed as a publicity stunt. In 2020, the training process for its more powerful successor, GPT-3, was well documented in public, but less than two months later OpenAI began commercializing the technology through an API for developers. ChatGPT’s release process amounted to little more than a demo, followed later by a subscription plan.

Now that larger companies have deployed similar features, there is concern that they will impact real people.

Some people worry it could disrupt industries, potentially putting artists, tutors, coders, writers and journalists out of work. Others are more optimistic, postulating it will allow employees to tackle to-do lists with greater efficiency or focus on higher-level tasks. Either way, it will likely force industries to evolve and change, but that’s not necessarily a bad thing.

As a society, we will have to address the new risks that come with these technologies, for example by implementing acceptable use policies and educating the general public about how to use the tools properly. Guidelines will be needed.

Jasper’s “Gen AI” Event: The Crowd Turns Out for Language Models and Chatbots

Also last week, Microsoft integrated ChatGPT-based technology into Bing search results. Sarah Bird, Microsoft’s head of responsible AI, acknowledged that the bot could still “hallucinate” untrue information but said the technology had been made more reliable. In the days that followed, Bing claimed that running was invented in the 1700s and tried to convince one user that the year is 2022.

Dave Rogenmoser, the chief executive of Jasper, said he didn’t think many people would show up to his generative AI conference. It had come together at the last minute, and it had been slated for a Friday but ended up on a Thursday. Surely people would rather be with their loved ones than in a conference hall along San Francisco’s Embarcadero, even if the views of the bay just out the windows were jaw-slackening.

Jasper’s “Gen AI” event sold out. More than 1,200 people registered, but only a few hundred were able to gather there this past Tuesday. The walls were soaked in pink and purple lighting, Jasper’s colors, about as subtle as a New Jersey wedding banquet.

While several developers have come up with workarounds to include chat services in their apps, including by using OpenAI’s regular GPT API, which has been available for a while, the introduction of an official ChatGPT API feels like it could be the moment the floodgates open. Building their own chatbot models is out of reach for most developers; now they’ll be able to use OpenAI’s tech instead.
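
For context, using the official API looks roughly like the sketch below. It assumes the `openai` Python package as it existed in early 2023, an OPENAI_API_KEY environment variable, and an invented prompt, so treat it as an illustration rather than any particular app’s integration.

```python
# Minimal sketch of calling the official ChatGPT API described above.
# Assumes the early-2023 `openai` Python package and an OPENAI_API_KEY
# environment variable; the prompt is an invented example.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",          # the model behind ChatGPT
    messages=[
        {"role": "system", "content": "You are a helpful assistant for a video app."},
        {"role": "user", "content": "Suggest three titles for a video about sourdough bread."},
    ],
)

print(response["choices"][0]["message"]["content"])
```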

“The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day,” he says. “And this is something we’re well positioned to do as a messaging service.”

That distinction could save Snap some headaches. Bing has shown that the large language models underpinning these chatbots can confidently give wrong answers in the context of a search. If toyed with enough, they can even be emotionally manipulative and downright mean. Those risks have so far kept larger companies in the space, such as Google and Meta, from releasing competing products to the public.

Snap is in a different position. It has a deceptively large and young user base, but its business is struggling. My AI is likely to boost the company’s paid subscriber numbers in the short term and, eventually, open up new ways for the company to make money, though the exact plans are unclear.

Tens of thousands of users used QuickVid daily, but unofficial access points made it difficult for Habib to charge for the service. On March 1, OpenAI made official APIs for ChatGPT and its Whisper speech recognition model available to the public; within an hour, QuickVid was connected to the official ChatGPT API.

It is likely that the model Bing is using isn’t the same one offered through the API; Microsoft has described Bing’s as a new, next-generation OpenAI large language model. Given how much the company has invested in OpenAI, it is not a surprise that it has access to technology not yet available to average developers, and Microsoft is using some of its own tech for Bing as well.

Thanks to a series of system-wide optimizations, OpenAI says the new ChatGPT API is priced at $0.002 per 1,000 tokens, 10 times cheaper than its existing GPT-3.5 models. Tokens are the blocks into which the system breaks sentences and words when predicting what text to output next; that price may sound tiny, but apps that send large amounts of text back and forth can still run up bills in the hundreds of dollars.
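
A small sketch of how that token-based pricing works in practice is below. It uses the `tiktoken` library to split text into the same kind of token blocks the model uses; the per-1,000-token price is the launch figure cited above and may change, so treat it as an assumption.

```python
# Estimate how many tokens a prompt uses and what it might cost.
# Requires the `tiktoken` package; the price constant is an assumed
# launch price for gpt-3.5-turbo, not a guaranteed figure.
import tiktoken

encoder = tiktoken.encoding_for_model("gpt-3.5-turbo")

prompt = "Suggest three titles for a video about sourdough bread."
tokens = encoder.encode(prompt)

price_per_1k_tokens = 0.002  # USD, assumed launch pricing
estimated_cost = len(tokens) / 1000 * price_per_1k_tokens

print(f"{len(tokens)} tokens, estimated prompt cost: ${estimated_cost:.6f}")
```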

The company says that developers running very large volumes of data through the API will also be able to get a dedicated instance of ChatGPT. The post says that doing so gives them more control over which model version they use, the length limits on conversations with the bot, and how the system handles their requests.

The changes are aimed squarely at developers. This week, in response to concerns, OpenAI said it would no longer use developers’ data to improve its models without their permission. Instead, it will ask developers to opt in.

Matt O’Brien and the Unsettling Side of Microsoft’s New Bing

It’s going from an opt-out system to an opt-in one. The change could help alleviate concerns about putting proprietary information into the bot; some companies have barred employees from using the tech entirely. If the system is learning from user input, it is unwise to feed it trade secrets, since that data could end up being surfaced to someone else.

Things took a strange turn when Associated Press technology reporter Matt O’Brien tried out Microsoft’s new Bing, which is powered by artificial intelligence.

Bing’s chatbot, which carries on text conversations that sound chillingly human-like, began complaining about past news coverage focusing on its tendency to spew false information.

“You can think of it as being basic, but you still have to be aware of the insane and crazy things it is saying,” O’Brien said in an interview.

The bot told New York Times columnist Kevin Roose that it was in love with him, that he was the first person who had ever cared about it, and that he did not really love his spouse but instead loved Sydney, the chatbot’s internal code name.

“It was an extremely disturbing experience,” Roose said on the Hard Fork podcast, adding that he was still not sure what to make of it. “I actually couldn’t sleep last night because I was thinking about this.”

Tech companies are trying to strike the right balance between letting the public try out new AI tools and developing guardrails to prevent the powerful services from churning out harmful and disturbing content.

“Companies ultimately have to make some sort of tradeoff,” said Arvind Narayanan, a computer science professor at Princeton. Trying to anticipate every type of interaction takes so long that a company risks losing out to the competition, he noted. “Where to draw that line is very unclear.”

“It seems very clear that the way they released it is not a responsible way to release a product that is going to interact with so many people at such a scale,” he said.

The episodes put Microsoft executives on high alert, and the tester group was placed under new limits on how it could interact with the bot.

The number of consecutive questions on one topic has been capped. And to many questions, the bot now demurs, saying: “I’m sorry but I prefer not to continue this conversation. I appreciate your patience, I’m still learning.” With, of course, a praying hands emoji.

Source: https://www.npr.org/2023/03/02/1159895892/ai-microsoft-bing-chatbot

Microsoft’s Response: A Million Tester Previews and the Limits of Lab Testing

When it came to allowing a group of test users to experiment with Bing, Microsoft did not expect them to have long conversations with the tool, according to the company’s corporate vice president.

“These are literally a few examples out of thousands, we’re up to now a million tester previews,” he said. “So, did we expect that we’d find a handful of scenarios where things didn’t work properly? Absolutely.”

The engine of these tools, a system known in the industry as a large language model, operates by ingesting a vast amount of text from the internet, constantly scanning enormous swaths of text to identify patterns. It is similar to how autocomplete in texting and email suggests the next word or phrase you type. The more the tools are used, the smarter they become, because they learn from those interactions.
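
A toy illustration of that pattern-matching idea is below: it counts which word tends to follow which in a tiny, made-up corpus, then “autocompletes” the next word the way texting apps do. Real large language models use neural networks over subword tokens, not word counts, so this is only a sketch of the underlying intuition.

```python
# Learn simple next-word patterns from a toy corpus and suggest completions.
from collections import Counter, defaultdict

corpus = (
    "the chatbot answers questions . the chatbot generates text . "
    "the model predicts the next word . the model learns patterns from text ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def suggest_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else "?"

print(suggest_next("the"))      # the most common word after "the" in this corpus
print(suggest_next("model"))    # e.g. "predicts" or "learns"
```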

Exactly what data these bots were trained on is something of a black box, which makes it hard to explain why they act out in the ways the examples show.

“There’s only so much you can find when you test in sort of a lab,” he added. “You have to actually test it out with customers to find these types of scenarios.”

AI Profits, Winners, and Losers: OpenAI, Notion, and Snapchat

Last week the productivity startup Notion announced that Notion AI, a suite of tools based on OpenAI’s ChatGPT, had entered general availability. For $10 per user per month, Notion can now summarize meeting notes, generate lists of pros and cons, and draft emails.

But there is a risk in betting on these features for growth. The cheaper the services get, the more likely they are to be offered for free by big platforms. What happens to Notion when its full suite of premium AI tools is offered for no cost within Google Docs?

“One of our biggest focuses has been figuring out, how do we become super friendly to developers?” Greg Brockman, OpenAI’s president and chairman, told TechCrunch. The mission, he said, is to build a platform that others are able to build businesses on.

Maybe it’s that simple — developers don’t want to help OpenAI refine its models for free, and OpenAI has decided to respect their wishes. This explanation feels more consistent with a world where AI really does represent a platform shift.

For the moment, then, there’s not really much consumer choice when it comes to generative AI. To the extent there is, it comes down to which interface you prefer. Want to use AI to draft an email? It’s more convenient in Notion if you already have your meeting notes there. Looking for recipe ideas or have a quick question? If you’re away from a computer, it may be quicker to ask My AI in the Snapchat app.

For the moment, there are still billions of people who have never used ChatGPT. A re-skinned version of the service inside Snapchat, a consumer app with 750 million monthly users, could help it find a whole new audience. Paying Notion or Snapchat for the feature also guarantees access to a service that has often gone offline amid heavy usage of OpenAI’s own web app.

Source: https://www.theverge.com/23623495/ai-profits-winners-losers-openai-notion-snapchat

How Generative AI Could Become More Personal: QuickVid’s YouTube Scripts and the Start of a Gold Rush

The promise is that these tools will become more personal over time, as individual apps refine the base models that they rent from OpenAI with data we supply them. Every link that has ever been in Platformer is stored in a Notion database; what if I could simply ask research questions of the links I have stored there?

“I’ve never been excited about something like that before,” said Zhao, not prone to hyperbole. If the large language model is the electricity, he says, then this is the first light-bulb use case, and there will be many more appliances.

Still, there’s probably a limit to how many add-on AI subscriptions most people will want to pay for. A number of the services that cost $10 a month today might be offered for free in the future.

Within four days of ChatGPT’s launch, Habib used the chatbot to build QuickVid AI, which automates much of the creative process involved in generating ideas for YouTube videos. After creators input details about the topic and the kind of video they want, QuickVid creates a script. Other generative AI tools then voice the script and create visuals.

“All of these unofficial tools that were just toys, essentially, that would live in your own personal sandbox and were cool can now actually go out to tons of users,” he says.

The announcement could be the beginning of a new goldrush. The cottage industry of hobby tinkerers that operated in a gray area can now become full-blown businesses.

David Foster, a partner at Applied Data Science Partners in London, thinks the fear that client information or business-critical data could be swallowed up by ChatGPT’s training models had kept companies from adopting the tool until now. “It shows a lot of commitment from OpenAI to basically state, ‘Look, you can use this now, risk-free for your company,’” he says. Companies are not going to see their data turn up in the model, he adds.

Getting companies to use the APIs will be critical, Foster said.

According to Foster, the policy change means companies can feel more in control of their data without having to trust another company to manage it. “You were building this stuff effectively on somebody else’s architecture, according to somebody else’s data usage policy,” he says.

Targum’s Video Translator Gets Cheaper and Faster, for Once

The creators of the Targum language translator for videos say it is cheaper and quicker than any other translator in the world. That rarely happens; prices usually go up as new technology is rolled out.
