Stability AI’s new Stable Video Diffusion model and Anthropic’s Claude 2.1: longer contexts, fewer hallucinations, and still no room for War and Peace
While you can access Claude 2.1 today, the new Stable Video Diffusion from Stability AI is open only to researchers for now, ahead of a wider public release. In contrast to the AI tools released by OpenAI, Stability AI focuses on launching open-source software. The company’s new tool is comparable to text-to-video models from companies such as Runway.
Anthropic’s latest model, Claude 2.1, brings two key updates: you can upload more data to the chatbot at once, and it tells fewer lies. Claude’s context window is now 200,000 tokens, roughly the length of a 500-page book. Sorry, but you’ll have to wait for a future update to analyze all of War and Peace in a single prompt. For comparison, the context window of the GPT-4 Turbo model, announced by Altman shortly before his firing, is capped at 128,000 tokens.
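To see why the novel still doesn’t fit, here is a back-of-the-envelope estimate in Python. Both numbers are rough assumptions for illustration, not measured values: an approximate English word count for War and Peace, and the common rule of thumb of about 0.75 words per token.

```python
# Back-of-the-envelope sketch: does War and Peace fit in a 200K-token window?
# Both constants below are rough assumptions, not measured values.
WORDS_IN_WAR_AND_PEACE = 587_000   # approximate English word count
TOKENS_PER_WORD = 4 / 3            # rule of thumb: ~0.75 words per token

estimated_tokens = int(WORDS_IN_WAR_AND_PEACE * TOKENS_PER_WORD)
print(f"Estimated tokens: {estimated_tokens:,}")                      # ~783,000
print(f"Fits in Claude 2.1 (200K)? {estimated_tokens <= 200_000}")    # False
print(f"Fits in GPT-4 Turbo (128K)? {estimated_tokens <= 128_000}")   # False
```

By this estimate the novel needs several times Claude’s new window, which is why even a 500-page book is a better mental yardstick.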
Anthropic also claims that the new Claude is more likely to admit when it’s unsure of an answer, rather than making one up with the utmost confidence. “We tested Claude 2.1’s honesty by curating a large set of complex, factual questions that probe known weaknesses in current models,” reads the company’s blog post. Such confidently delivered falsehoods are often described as hallucinations.
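For those who want to try the longer context themselves, here is a minimal sketch using Anthropic’s Python SDK as it worked around Claude 2.1’s release. The file name and question are hypothetical, and an ANTHROPIC_API_KEY must be set in the environment.

```python
# Minimal sketch (assumes the Anthropic Python SDK of the Claude 2.1 era and
# an ANTHROPIC_API_KEY in the environment); file and question are illustrative.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()

# Hypothetical long document, up to ~200K tokens.
long_document = open("annual_report.txt").read()

completion = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} Here is a document:\n\n{long_document}\n\n"
           f"What are the key risks it mentions? If you are unsure, say so. {AI_PROMPT}",
)
print(completion.completion)
```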
The Launches of Stable Video Diffusion and Claude 2.1, and a Debacle at the Company That Built ChatGPT
Stable Video Diffusion transforms your images into videos by adding motion. Feed the model a prompt image and it spits out an animation, with results that can range from eerie to downright disturbing.
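For researchers with access, a run might look something like the following sketch. It assumes Hugging Face’s diffusers library, which added a StableVideoDiffusionPipeline around the model’s release, plus a CUDA GPU and the published model ID; parameters follow the library’s documentation and may change.

```python
# Sketch of image-to-video generation with Hugging Face diffusers
# (assumes a GPU and a diffusers version with StableVideoDiffusionPipeline).
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

image = load_image("input.jpg")    # the still image to animate (illustrative path)
image = image.resize((1024, 576))  # the model's expected resolution

frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```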
OpenAI, meanwhile, rolled out ChatGPT’s voice capabilities to everyone during the short period while Altman was out as CEO. This isn’t technically a new feature; it had previously been restricted to paying subscribers.
The developers at OpenAI took another step towards their goal of “multimodality” by giving the bot the ability to hold a spoken conversation. The idea is that a chatbot can be even more powerful if it can accept inputs and provide outputs in multiple mediums, such as voice, text, and images. When it will learn how to smell remains unknown.
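As a concrete illustration of multimodal input, here is a sketch using the OpenAI Python SDK of that period, sending text and an image in a single request. The model name reflects the vision-preview model available at the time, and the image URL is an example, not a real endpoint.

```python
# Sketch of a multimodal (text + image) request with the OpenAI Python SDK v1;
# model name and image URL are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```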
Every week, one of the major players launches something new. “My guess is that the launches of Stable Video Diffusion and Claude 2.1 were probably just a coincidence,” says Dharmesh Shah, who co-founded HubSpot and is an OpenAI shareholder.
OpenAI seems likely to shift further from its non-profit origins and restructure as a classic profit-driven Silicon Valley tech company, says Sadowski.
A debacle at the company that built ChatGPT highlights concern that commercial forces are acting against the responsible development of artificial-intelligence systems.
“The push to retain dominance is leading to toxic competition. It’s a race to the bottom,” says Sarah Myers West, managing director of the AI Now Institute, a research organization based in New York City.
Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman’s initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman’s return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.
OpenAI’s chief scientist, Ilya Sutskever, a member of the board that ousted Altman, had shifted his focus towards a four-year project designed to ensure that superintelligences work for the good of humanity.
It’s unclear whether Altman and Sutskever are at odds over the speed of development: after the board fired Altman, Sutskever expressed regret about the impact of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.
With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University’s Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.
The company’s releases have catapulted it to worldwide fame. ChatGPT is built on a large language model, GPT-3.5, which uses statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that has emerged from this technique (including what some see as glimmers of logical reasoning) has astounded and worried scientists and the general public alike.
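A toy bigram model, nothing like GPT-3.5 in scale but built on the same core idea, makes the “statistical correlations” point concrete: count which words tend to follow which, then sample continuations from those counts.

```python
# Toy illustration (not OpenAI's actual model): a bigram model captures the
# same core idea, statistical correlations between adjacent words.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    """Autoregressively sample statistically likely continuations."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug ."
```

Scale the corpus to billions of sentences and replace the bigram table with a deep neural network, and fluent responses emerge.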
The competitive landscape for conversational AI is heating up. Google, the search-engine giant, has said that more artificial-intelligence products are on the way. Amazon has an artificial-intelligence offering called Titan. Anthropic, founded in 2021, is one of the smaller companies aiming to compete with ChatGPT; Cohere is another often-cited rival.
West states that the only way for these start-ups to stand a chance is to rely heavily on the computing power of just three companies: Microsoft, Amazon and Google.
How fast can AI develop? A computer scientist’s perspective on the events at OpenAI and November’s AI Safety Summit
Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. “If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes,” he says. Hinton declined to speak to Nature about the OpenAI events.
OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI): a deep-learning system that’s trained not just to be good at one specific thing, but to be as generally smart as a person. Whether AGI is even possible remains unclear; the jury is very much out on that issue, says West. But some are starting to bet on it. Hinton says he used to think AGI could happen on a timescale of 50 to 100 years. “Right now I think we’ll probably get it in five to 20 years,” he says.
The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work on the problem.