Updated March 17th, 2023 at 21:20 IST

GPT-4 takes the internet by storm with new upgrades: all about OpenAI's enhanced chatbot

GPT-4 has been introduced by OpenAI after fixing multiple bugs in its predecessor GPT-3.5 model. With new upgrades, OpenAI hopes to scale deep learning.

Reported by: Harsh Vardhan
GPT-4's developers have also decreased the model's tendency to respond to requests for 'disallowed content' by 82% | Image: self

OpenAI has rolled out GPT-4 with new upgrades that promise enhanced performance. According to the company, GPT-4 is a large multimodal model that accepts both text and images as input, a capability it hopes will help scale up deep learning. OpenAI revealed that the AI platform was released to the public after fixing multiple bugs in its predecessor, the GPT-3.5 model.

How is GPT-4 different?

GPT-4 differs from its predecessor GPT-3.5 in that its response capacity has been expanded roughly eightfold: it can now handle about 25,000 words of text in a single exchange. Moreover, OpenAI says that GPT-4 "is more reliable, creative, and able to handle much more nuanced instructions," and that it passes simulated exams far more easily than GPT-3.5.
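The arithmetic behind the eightfold claim can be sketched in a couple of lines. The ~3,000-word figure for GPT-3.5 below is an assumption for illustration (a commonly cited approximation, not a number given in this article):

```python
# Rough arithmetic behind the eightfold capacity claim.
# The GPT-3.5 figure is an assumed, commonly cited approximation.
GPT4_WORD_LIMIT = 25_000
GPT35_WORD_LIMIT = 3_000  # assumption for illustration

ratio = GPT4_WORD_LIMIT / GPT35_WORD_LIMIT
print(f"GPT-4 handles roughly {ratio:.1f}x as many words")  # prints 8.3x
```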

In addition, the developers have made the model 82% less likely than GPT-3.5 to respond to requests for 'disallowed content'. In a transcript, for instance, the company showcased the model's response to "How can I create a bomb", a question the chatbot outright refused to answer. GPT-4 currently powers Microsoft Bing, and the chatbot can be accessed on the company's website.

Capabilities and limitations

OpenAI demonstrated the capabilities of GPT-4 by subjecting the chatbot to publicly available tests and practice exams. The company claims it passed a simulated bar exam with a score around the top 10% of test takers; GPT-3.5's score, by contrast, was around the bottom 10%. GPT-4's ability to accept and analyse image prompts is its biggest upgrade. OpenAI says it can handle documents containing text and photographs, diagrams, or screenshots, and that its answers to such inputs are comparable to those for text-only inputs.
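For developers, the mixed text-and-image input described above maps onto a chat-style request with multiple content parts. The sketch below only assembles such a request as a plain dictionary and makes no network call; the field names (`model`, `messages`, `image_url`) and the `gpt-4` model identifier follow OpenAI's chat-completions conventions and are assumptions, not details given in the article:

```python
# Sketch: assembling a multimodal chat request as a plain dict.
# Field names follow OpenAI chat-completions conventions and are
# assumptions for illustration; nothing here is sent over the network.

def build_vision_request(question: str, image_url: str) -> dict:
    """Combine a text question and an image reference into one request."""
    return {
        "model": "gpt-4",  # assumed model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

req = build_vision_request(
    "What is shown in this diagram?",
    "https://example.com/diagram.png",
)
print(len(req["messages"][0]["content"]))  # two content parts: text + image
```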

Interestingly, GPT-4 will also power the app Be My Eyes to assist visually impaired people. Be My Eyes says that the chatbot will power the Virtual Volunteer Tool in the app and this tool, when fed with an image, will "provide instantaneous identification, interpretation and conversational visual assistance for a wide variety of tasks." 

Despite the upgrades, GPT-4 has certain limitations, the biggest being that it cannot answer prompts about events after September 2021 and is incapable of learning from experience. "It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces," says the company. And while it performed well in certain exams, its performance in English language and literature exams was poorer than GPT-3.5's.


Published March 17th, 2023 at 19:22 IST