GPT-3 Accelerates AI Progress, but the Path to AGI is Going to Be Bumpy

OpenAI recently released the third generation of its Generative Pretrained Transformer, GPT-3, the largest natural language processing (NLP) model ever built. It is fundamentally a language model: a machine learning model that looks at part of a sentence and predicts the next word. With 175 billion parameters, it was pre-trained on a massive corpus of text in an unsupervised manner and can be further fine-tuned to perform specific tasks. OpenAI is an AI research organization founded in 2015 by Elon Musk, Sam Altman, and other luminaries. It describes its mission as: to discover and enact the path to safe Artificial General Intelligence (AGI).
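
To make the idea of next-word prediction concrete, here is a toy sketch: a bigram model that predicts the next word purely from word-pair counts. This is only an illustration of the prediction objective; GPT-3 itself is a transformer conditioning on long stretches of context, not word pairs.

```python
# Toy bigram language model: predicts the next word from the current one.
# Illustrates "next-word prediction" only -- NOT GPT-3's actual architecture.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word observed in the corpus."""
    candidates = transitions[word]
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("sat"))  # -> 'on'
print(predict_next("the"))  # -> 'cat' (ties broken by insertion order)
```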

GPT-3 is breaking the internet

There’s been a lot of talk in the AI community about the power, capabilities, and potential use cases of GPT-3. As the largest language model developed to date, it has the potential to advance AI as a field. People have already put it to all sorts of uses – from mimicking Shakespeare, to writing prose, to designing web pages. It primarily stands out for:

  1. Foraying into AGI. The language model isn’t trained to perform one specific task such as sentence completion or translation, which is normally the case with artificial narrow intelligence (ANI), the most prevalent form of AI we have seen. Rather, GPT-3 can perform multiple tasks – answering trivia questions, translating common languages, and solving anagrams, to name a few – in a manner that is often indistinguishable from a human.
  2. Advancing zero-shot/few-shot learning in model training. In this setup, the model predicts the answer from only a task description in natural language and perhaps a few examples, meaning the algorithm can be accurate without being extensively trained for a particular task (see the sketch below). This capability opens the possibility of building lean AI models that aren’t data-intensive and don’t require humongous task-specific datasets for training.
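
To make the few-shot idea concrete, below is a minimal sketch of a few-shot prompt: a task description plus a handful of examples, with no gradient updates or task-specific training. The `openai.Completion.create` call reflects the Python client as offered around GPT-3’s release; the engine name, parameters, and API key are placeholders and may differ from current versions.

```python
# Few-shot prompting sketch (assumed OpenAI client interface, circa GPT-3's
# release; names and parameters may have changed since).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Task description followed by two worked examples; the model fills in
# the third answer without any task-specific training.
prompt = """Translate English to French.

English: Where is the library?
French: Où est la bibliothèque ?

English: I love machine learning.
French: J'adore l'apprentissage automatique.

English: The weather is nice today.
French:"""

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 model at the time
    prompt=prompt,
    max_tokens=30,
    temperature=0,      # keep the output close to deterministic
    stop="\n",          # stop at the end of the translated line
)
print(response.choices[0].text.strip())
```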


So, this seems nifty – what next?

In addition to drastically advancing the standard NLP use cases that have existed for a while, GPT-3 has the potential to extend into more technical and creative domains. That would democratize such skills by making these capabilities available to non-technical people and putting business users in control, primarily by:

  • Furthering no-code/low-code by making code generation possible from natural language input (see the sketch after this list). This is a step toward the eventual democratization of AI, making it accessible to a broader group of business users, and it has the potential to redefine job roles and the skill sets required to perform them.
  • Generating everything from simple layouts and web templates to full-blown UI designs from plain natural language input, potentially disrupting the design sphere.
  • Shortening AI timelines to market. Automated machine learning (AutoML) creates machine learning architectures with limited human input. The confluence of GPT-3 and AutoML could drastically cut the time and human intervention needed to train a system and build a solution, reducing how long it takes to deploy an AI solution in the market.
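
As a sketch of the no-code/low-code idea in the first bullet, here is what natural-language-to-code generation might look like with the same assumed Completion interface as above. The completion shown in the comment is hypothetical; real outputs vary with the model and sampling settings.

```python
# Natural-language-to-code sketch (same assumed OpenAI client interface;
# the example completion below is hypothetical).
import openai

prompt = (
    '"""Write a Python function that takes a list of prices and returns '
    'the total with 8% sales tax applied."""\n'
    "def "
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=80,
    temperature=0,
)

# A plausible (hypothetical) completion the model might return:
#   total_with_tax(prices):
#       return sum(prices) * 1.08
print("def " + response.choices[0].text)
```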

GPT-3 is great, but we’re not in Space Odyssey territory yet

The massive language model is not without pitfalls. Its principal shortcoming is that, while it’s good at natural language tasks, it has no semantic understanding of the text. By virtue of its training, it is just trying to complete a given sentence, no matter what the sentence means.

The second roadblock to mainstream adoption is that the model is riddled with societal biases around gender, race, and religion. This is because it is trained on text scraped from the internet, which brings its own set of challenges given the discourse around fake news and the post-truth world. Even OpenAI admits that its API models exhibit biases, which often surface in the generated text. These biases need to be corrected before the model can be deployed in any real-world scenario.

These challenges certainly must be addressed before GPT-3 can be deployed for actual, enterprise-grade use. That said, it may well follow the same trajectory computer vision did at the start of the last decade and eventually become ubiquitous in our lives.

What are your thoughts about GPT-3? Please share with us at [email protected] and [email protected].
