A software program that ingests gigabytes of text can automatically generate whole paragraphs so natural that they sound as if a person wrote them. OpenAI’s GPT-3 is all the rage.
GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. Created by OpenAI, this third version of the GPT-n series has 175 billion machine-learning parameters. As a result, the text GPT-3 generates is often difficult to distinguish from text written by a human, which has both benefits and risks.
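To make “autoregressive” concrete, the sketch below shows how such a model generates text one token at a time: predict the next token, append it to the prompt, and feed the result back in. GPT-3’s weights are not publicly available, so this illustration uses the smaller, openly released GPT-2 from the Hugging Face transformers library as a stand-in; the decoding principle is the same.

```python
# Minimal sketch of autoregressive text generation, the decoding scheme GPT-3 uses.
# GPT-2 serves as a stand-in here because GPT-3's weights are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Such a breakthrough could be useful to companies because"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(40):  # generate 40 tokens, one at a time
        logits = model(input_ids).logits[:, -1, :]             # scores for the next token only
        next_id = torch.argmax(logits, dim=-1, keepdim=True)   # greedy pick; GPT-3 typically samples instead
        input_ids = torch.cat([input_ids, next_id], dim=-1)    # append the token and feed it back in

print(tokenizer.decode(input_ids[0]))
```

Greedy decoding is shown for simplicity; in practice GPT-3 samples from the predicted distribution (controlled by a temperature setting) to produce more varied text.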
Such a breakthrough could be useful to companies because it has great potential for automating tasks. GPT-3 can respond to any text a person types in with a new piece of text that is appropriate to the context. That means GPT-3 can conceivably amplify human effort in a wide variety of situations, from customer-service question answering to due-diligence document search to report generation.
But the OpenAI researchers and engineers who presented the original paper warned of GPT-3's potential dangers, which include "misinformation, spam, phishing, abuse of legal and governmental processes and social engineering pretexting," and called for research to mitigate the risks. As of today, Microsoft has acquired an exclusive license to GPT-3, although OpenAI has not shut off API access to the model, allowing people to test its remarkable text-generation abilities.
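For readers who want to try that API, a completion request looked roughly like the sketch below around the time of GPT-3's release. It assumes the openai Python package and a valid API key; parameter names and endpoints have changed across library versions, so treat this as illustrative rather than definitive.

```python
# Illustrative sketch of a GPT-3 completion request via the OpenAI API
# (early openai-python interface; newer library versions use different calls).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    engine="davinci",   # the original 175-billion-parameter GPT-3 engine
    prompt="Write a short, polite reply to a customer asking about a late delivery.",
    max_tokens=100,     # length cap for the generated continuation
    temperature=0.7,    # >0 samples from the distribution instead of always picking the top token
)

print(response["choices"][0]["text"])
```

The prompt is the only instruction the model receives: there is no task-specific training step, which is what makes the same API usable for customer-service replies, document search, and report drafting alike.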
It’s also no surprise that many have been quick to start talking about intelligence. But GPT-3’s human-like output and striking versatility are the result of excellent engineering, not genuine smarts, a distinction that too often gets buried in the hype whenever a new AI milestone comes along. Even Sam Altman, who co-founded OpenAI with Elon Musk, tried to tone things down: “The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”