From what we can tell, GPT-4 will be released soon, likely in the first quarter of 2023, and will offer dramatically improved performance over GPT-3.5.
For weeks, wild claims about models with 100 trillion parameters have circulated on social media without any basis in reality. On the contrary, recent AI progress with models such as Chinchilla, Sparse Luminous Base, and OpenAI’s ChatGPT demonstrates that parameter count is not everything: architecture, data quantity, and data quality also play significant roles in training.
In September 2021, Sam Altman, co-founder of OpenAI, predicted that GPT-4 would not be as large as GPT-3. Instead, as GPT-3.5 and ChatGPT demonstrate, improvement can be achieved through higher-quality data, better algorithms, and finer-grained tuning. Improved handling of context is also expected to be a strength of GPT-4.
GPT-4 will arrive when it can be released safely
Speaking at a venture capital event, Altman reiterated his stance on the upcoming GPT-4, saying the new model shouldn’t debut until it can be done “safely and responsibly.” OpenAI has been saying this since GPT-2, so it’s not exactly breaking news.
But, as Altman pointed out, OpenAI will “release technology much more slowly than people would prefer.”
“We’re going to wait a long time,” Altman remarked. No, that doesn’t sound like a lightning-fast GPT-4 takeoff.
In addition, Altman revealed that OpenAI is developing a video-centric generative AI system. Google has presented three text-to-video systems: Imagen Video, Phenaki, and a hybrid of the two. Meta has also shown Make-A-Video, its own text-to-video solution.