OpenAI, the San Francisco tech company that grabbed worldwide attention when it released ChatGPT, said Tuesday it was introducing a new version of its artificial intelligence software.
Called GPT-4, the software “can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities,” OpenAI said in an announcement on its website.
In a demonstration video, Greg Brockman, OpenAI’s president, showed how the technology could be trained to quickly answer tax-related questions, such as calculating a married couple’s standard deduction and total tax liability.
“This model is so good at mental math,” he said. “It has these broad capabilities that are so flexible.”
And in a separate video the company posted online, it said GPT-4 had an array of capabilities the previous iteration of the technology did not have, including the ability to “reason” based on images users have uploaded.
“GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks,” OpenAI wrote on its website.
Andrej Karpathy, an OpenAI employee, tweeted that the feature meant the AI could “see.”
The new technology is not available for free, at least so far. OpenAI said people could try GPT-4 out on its subscription service, ChatGPT Plus, which costs $20 a month.
OpenAI and its ChatGPT chatbot have shaken up the tech world and alerted many people outside the industry to the possibilities of AI software, in part through the company’s partnership with Microsoft and its search engine, Bing.
But the pace of OpenAI’s releases has also caused concern, because the largely untested technology is forcing abrupt changes in fields from education to the arts. The rapid public development of ChatGPT and other generative AI programs has prompted some ethicists and industry leaders to call for guardrails on the technology.
Sam Altman, OpenAI’s CEO, tweeted Monday that “we definitely need more regulation on ai.”
The company elaborated on GPT-4’s capabilities in a series of examples on its website: solving problems, such as scheduling a meeting among three busy people; scoring highly on tests, such as the Uniform Bar Exam; and learning a user’s creative writing style.
But the company also acknowledged limitations, such as social biases and “hallucinations,” in which the model asserts that it knows more than it really does.
Google, concerned that AI technology could cut into the market share of its search engine and of its cloud-computing service, released its own software, known as Bard, in February.
OpenAI was launched in late 2015 with backing from Elon Musk, Peter Thiel, Reid Hoffman and other tech billionaires, and its name reflected its status as a nonprofit that would follow the principles of open-source software freely shared online. In 2019, it transitioned to a “capped” for-profit model.
Now, it is releasing GPT-4 with some measure of secrecy. In a 98-page paper accompanying the announcement, the company’s employees said they would keep many details close to the vest.
Most notably, the paper said the underlying data the model was trained on will not be discussed publicly.
“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar,” they wrote.
They added, “We plan to make further technical details available to additional third parties who can advise us on how to weigh the competitive and safety considerations above against the scientific value of further transparency.”
The release of GPT-4, the fourth iteration of OpenAI’s foundational system, has been rumored for months amid growing hype around the chatbot that is built on top of it.
In January, Altman tamped down expectations of what GPT-4 would be able to do, telling the podcast “StrictlyVC” that “people are begging to be disappointed, and they will be.”
On Tuesday, he solicited feedback.
“We have had the initial training of GPT-4 done for quite awhile, but it’s taken us a long time and a lot of work to feel ready to release it,” Altman said on Twitter. “We hope you enjoy it and we really appreciate the feedback on its drawbacks.”
Sarah Myers West, the managing director of the AI Now Institute, a nonprofit group that studies the effects of AI on society, said releasing such systems to the public without oversight “is essentially experimenting in the wild.”
“We have clear evidence that generative AI systems routinely produce error-prone, derogatory and discriminatory results,” she said in a text message. “We can’t just rely on company claims that they’ll find technical fixes for these complex problems.”