However, that is no longer the case. People have been talking about GPT-4 for months, and according to some reports it may already be out.
Moreover, AI research aims to develop systems that can teach themselves new skills, not merely pick up abilities such as language translation automatically.
Emergence is the appearance of abilities a model was never explicitly trained for. It occurs when more data and scale are used during training, at which point previously unobservable skills suddenly show up.
However, a self-learning AI is an entirely different beast and one that is not limited by the size of its training material.
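The scale-threshold intuition behind emergence can be sketched with a toy curve. All numbers below are hypothetical and purely illustrative, not real benchmark data:

```python
# Toy illustration of emergence: below a scale threshold, performance on a
# task sits near chance level; past the threshold it improves sharply.
# Every number here is made up for illustration, not a real benchmark.

def toy_emergent_accuracy(params_billions: float, threshold: float = 10.0) -> float:
    """Hypothetical accuracy curve for an 'emergent' skill."""
    if params_billions <= threshold:
        return 0.02  # roughly chance-level performance
    # sharp improvement once the threshold is crossed, capped at 0.95
    return min(0.95, 0.02 + 0.3 * (params_billions - threshold) ** 0.5)

for scale in (1, 5, 10, 30, 100):
    print(f"{scale:>4}B params -> accuracy {toy_emergent_accuracy(scale):.2f}")
```

The point of the sketch is that the skill looks absent at small scale and then appears abruptly, which is why such abilities are hard to predict before a larger model is trained.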
Possible extra capabilities for GPT-4 over GPT-3
Robert Scoble tweeted on August 20, 2022, that OpenAI was giving GPT-4 beta access to a small group of people who were close to the AI company.
Since this is anecdotal evidence, a tester's view could be swayed by excitement, or by the lack of any rigorous way to evaluate the model.
Language models improve every year, so users should expect better performance. But if these impression-based claims hold, GPT-4 could represent a far bigger leap than the one from GPT-2 to GPT-3.
On the other hand, some users remained skeptical, which fueled further discussion about whether GPT-4 could render work built on GPT-3 obsolete.
This is almost impossible to plan for, because we don't know what GPT-4 will be able to do or how it will be trained. On top of that, more and more platforms will start to integrate LLMs directly. What happens then?
From Scoble's claim to remarks by the company's CEO about the Turing test, which asks whether machines can think, the discussion could head in an interesting direction.
The Turing test also has historical significance as a benchmark for machine intelligence. Researchers say that no artificial intelligence system has yet passed it, so a sophisticated system like GPT-4 would at least put up a fight.
That said, the Turing test is generally considered outdated: it is essentially a test of deception, so an AI could pass it without being intelligent in any human sense.
Igor Baikov wrote on Reddit that GPT-4 would be very big, and dense rather than sparse, since the company has always made dense models. But "very big" means little on its own until it is measured against other popular models such as LaMDA, GPT-3, and PaLM.
It is likely that GPT-4 will be multimodal, meaning that it will accept audio, text, image, and even video inputs. It is also thought that audio datasets transcribed with OpenAI's Whisper will be used to produce the textual data GPT-4 needs to learn.
In conclusion, only two things about GPT-4 can be said reliably:
- OpenAI has been secretive to the point that the public knows almost nothing
- OpenAI won’t disclose a product unless it is safe
It therefore remains challenging to predict with any accuracy what GPT-4 will look like or what it will be capable of doing.