
Commentary by Azeem: Holding out for GPT-4?

It’s been 29 months since OpenAI released GPT-2, the large language model that showed how powerful transformer-based neural networks can be. The quality of the natural-language text GPT-2 produced was very impressive. GPT-3, its larger and more complex successor, performed even better.

Since GPT-3 came out in 2020, a wave of new ideas and products has been built on it and models like it. The basic pattern is simple: give the model a text prompt and let it produce a large amount of copy that sounds plausible but may not be true. We are now seeing the same prompt-driven approach applied to images, video, and other media.
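To make that pattern concrete, here is a minimal sketch of the prompt-in, copy-out workflow, using the OpenAI Python library as it worked in the GPT-3 era (the legacy Completion endpoint); the engine name, prompt, and parameter values are illustrative assumptions, not details from the article.

```python
# Minimal sketch of the prompt-in, text-out pattern described above,
# using the GPT-3-era OpenAI Python library (legacy Completion API).
# Engine name, prompt, and settings are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    engine="text-davinci-002",   # a GPT-3-family engine of that period
    prompt="Write a short product description for a solar-powered lamp.",
    max_tokens=150,              # cap on how much text the model returns
    temperature=0.7,             # higher values give more varied copy
)

# The model returns plausible-sounding copy; as noted above,
# plausible does not mean true.
print(response.choices[0].text.strip())
```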

Now there are plenty of rumours going around about GPT-4, OpenAI’s next model, which is said to be ready within three months. Even The Sun, a popular tabloid newspaper in Britain, has written about it.

What can we hope for?

As the article in The Sun points out, GPT-4 may not follow the trend of ever-bigger models. Yann LeCun’s groundbreaking neural network, LeNet, had 60,000 parameters (a rough measure of a network’s capacity to do useful things) when it was built in 1998. Twenty years later, the first version of GPT from OpenAI had 117 million parameters. GPT-2 had 1.5bn, while GPT-3, which is two years old, has 175bn. Broadly, more parameters have meant better results. Modern multi-modal networks, which can move between text and images in either direction, are more complex still: the largest are approaching 10 trillion parameters.
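The back-of-envelope arithmetic below shows how steep that climb has been; the parameter counts are the ones cited above, and the script simply computes the growth factor from each model to the next.

```python
# Growth factors between the parameter counts cited above.
param_counts = {
    "LeNet (1998)": 60_000,
    "GPT-1 (2018)": 117_000_000,
    "GPT-2 (2019)": 1_500_000_000,
    "GPT-3 (2020)": 175_000_000_000,
}

names = list(param_counts)
for prev, curr in zip(names, names[1:]):
    factor = param_counts[curr] / param_counts[prev]
    print(f"{prev} -> {curr}: ~{factor:,.0f}x more parameters")
```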

Sam Altman, OpenAI’s chief executive, said earlier this year that GPT-4 wouldn’t be much bigger than GPT-3, and that it wouldn’t be multi-modal either: it would be text-only. That may disappoint anyone who expected the next GPT to be an all-singing, all-dancing affair; who expected OpenAI to show that neural networks can be scaled up until they rival the complexity of the brain’s connectome, and that such scale would bring logical reasoning, an understanding of time, real-world skills, and the ability to handle text, video, images and audio with ease.

I am going to take what Altman said earlier this year at face value: GPT-4 won’t be much bigger than GPT-3, and it will work only with text. (Though I’m not certain. Altman dropped a hint in a recent tweet, which you can see below.)

We’ll get a text generator that is better than the best we have now. If GPT-4 could only do what existing services already do, there would be little reason to release it. So what might “better” mean? Even more realistic output in a wider range of settings, from dialogue to long-form? Across a greater variety of styles?

On price, it would make sense for GPT-4 to cost less than GPT-3 and rival services. The price of machine-generated text has been falling fast: it already costs as little as half a cent for roughly 700 words of output.
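For a rough sense of that arithmetic, here is a small sketch; the per-token price and the words-per-token ratio are assumptions chosen to land near the figure above, not numbers from the article.

```python
# Rough cost-per-words arithmetic behind the "half a cent for ~700
# words" figure. Price and tokens-per-word ratio are assumptions,
# in the ballpark of 2022-era GPT-3 pricing tiers.
price_per_1k_tokens = 0.005   # dollars; assumed mid-tier rate
words_per_token = 0.75        # common rule of thumb for English text

words = 700
tokens = words / words_per_token           # ~933 tokens
cost = tokens / 1000 * price_per_1k_tokens

print(f"{words} words ~ {tokens:.0f} tokens ~ ${cost:.4f}")
# -> 700 words ~ 933 tokens ~ $0.0047, i.e. about half a cent
```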

