GPT-3?

Laibah Ahmed
students x students
9 min read · Mar 16, 2021


GPT-3. This three-lettered, one-numbered abbreviation sounds like a microbiology term. “I don’t even have an idea as to what that could mean,” my mom responds when I ask her what the word suggests to her. At first glance it looks like a medical term, or an abbreviation for some sort of code or “tech thing.” That last guess is close: GPT-3 stands for Generative Pre-Trained Transformer, and the 3 marks its generation, the third. It is a rather incredible invention, something that could rock our world. Still, the phrase doesn’t explain much on its own. What is it transforming? What even is a GPT-3?

GPT-3 is many things in one, but to capture its essence in a single sentence: it’s a language-prediction AI, at least according to Forbes.
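“Language prediction” just means guessing what text should come next. GPT-3 does this with a massive transformer network, but the core idea can be sketched with a toy model that merely counts which word tends to follow which. This is purely an illustration of the *task*, not of how GPT-3 works internally:

```python
from collections import Counter, defaultdict

# Toy "language prediction": count which word follows which, then
# predict the most frequent follower. GPT-3 performs the same job --
# predict the next token -- but with a giant neural network instead
# of a frequency table.
def train_bigram(text):
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    options = followers.get(word.lower())
    if not options:
        return None  # never saw this word during "training"
    return options.most_common(1)[0][0]  # the most likely next word

model = train_bigram("the cat sat on the mat and the cat ate")
print(predict_next(model, "the"))  # -> "cat" ("cat" follows "the" most often)
```

Repeating the prediction step word after word is, at heart, how a language model generates whole passages from a short prompt.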

What is it? In easy-to-digest words first.

As with all brand-new AI, the minutely crafted technical details are not easy to understand; if they were, everyone would try their hand at it. So it’s better to focus first on what the product actually does, and then dig deeper, crawling into the hole one layer at a time. As you crawl through layers of sediment rich with meaning and information, everything starts to make more sense. Let’s start from the base up.

OpenAI, the artificial intelligence lab co-founded by Elon Musk, created GPT-3: an AI that can mimic human language with comprehension and “humanly” features such as humor and creativity, and even some less desirable traits such as sexism and racism.

GPT-3 is an AI.

AIs are smart machines trained and built to mimic humans: they can talk, “think” (or process), and even act like us. So a Generative Pre-Trained Transformer would mean, at least by name, a smart machine that is pre-trained, generates something, and transforms something. This specific AI can take a small language input, such as a phrase, description, or command, and create a bigger output tailored to what the user needs: a short story, a matching answer, or even lines of working code. The list of things this AI can do keeps growing as more people tinker with it, which could mean near-infinite possibilities for where it might be useful. Here’s a scenario to help glimpse the base potential of this incredible creation:
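At the API level, that “small input, big output” shape is literal: a request to GPT-3 is just a short text prompt plus a few knobs. The sketch below only assembles such a request as JSON, without sending anything; the endpoint and field names follow OpenAI’s 2020-era completions API, and the prompt text is made up for illustration:

```python
import json

# Sketch of a GPT-3 request body: a small prompt in, a (potentially
# much longer) completion out. Nothing is sent over the network here.
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_request(prompt, max_tokens=256, temperature=0.7):
    """Assemble the JSON body for a text-completion request."""
    return json.dumps({
        "prompt": prompt,            # the small language input
        "max_tokens": max_tokens,    # cap on how big the output may grow
        "temperature": temperature,  # higher = more varied, creative output
    })

body = build_request("Write a short story about a desperate intern:")
print(body)
```

An actual call would POST this body to the API with an access key, which is exactly why access mattered so much: the model runs on OpenAI’s servers, not on your machine.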

Imagine you’re a desperate student trying to fill your resume with experiences and abilities, trying to showcase yourself. So you take an internship; the only issue is, you realize you completely forgot the skills it requires. Your new boss gives you the task of coding a watermelon button, but you don’t even know where to start. This is where GPT-3 gets useful. You somehow have access to this AI (it’s actually quite hard to get access to, but that’s not the point; your project is due in an hour). You simply tell it, “code a watermelon button,” and there you have it: a bit sketchy, but an adequately coded watermelon button. Now all you have to do is edit it a little, and you’ve passed day one of your internship and are on your way to a better resume! But that’s not it; you also quit and start your own little webpage where you help other students with their coding internships, because you used GPT-3 to literally create a whole JSX layout generator that gives you code based on what you input.

Maybe that’s what Sharif Shameem was doing when he built exactly that.

Well, we already know what it is and what it does, but how does it do what it does?

In other words, now that we have explored the outermost layers of this AI we can dig deeper with ease.

As we know, AI is made to mimic humans, but humans aren’t born with knowledge and comprehension of their world. To develop mentally and physically, we get exposed to increasingly complex concepts, and our knowledge and understanding grow; eventually we become self-aware (although AI has not, and maybe shouldn’t get there just yet). This concept of development applies to how GPT-3 works too. The AI was created and then trained, just like how our brains are “trained” at school. It was, and still is being, exposed to tons of data regarding anything and everything concerning language: what it means, how it works, when to say what, and so on. So this AI basically learns on its own; one could say, unsupervised.

That’s exactly what makes this AI so special.

Unsupervised learning.

Most machines and AIs were “taught,” or trained, using supervised learning. This is much like never letting a child explore the world and instead hand-feeding it information with the expectation of one specific outcome; quite troublesome, isn’t it? Machines were given carefully labeled data sets containing inputs and desired outputs, but that doesn’t seem very “humanly.” In fact, how could a machine ever be personified if it doesn’t learn like a human? Thus, unsupervised learning: your child is allowed to roam around the internet and absorb everything on there, good and bad, and eventually mature somewhere along the way.
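The contrast can be made concrete. Strictly speaking, GPT-3’s training is often called *self-supervised*: nobody labels anything by hand, because raw text manufactures its own labels — every next word is the “answer” for the words that came before it. A minimal illustration (the example sentences are made up):

```python
# Supervised learning needs hand-made labels, one per example:
labeled_data = [
    ("this movie was great", "positive"),  # a human wrote each label
    ("this movie was awful", "negative"),
]

# Self-supervised learning (GPT-3's style) gets labels for free from
# raw text: every prefix of a sentence is an input, and the word that
# actually came next is the target.
def make_training_pairs(text):
    words = text.split()
    return [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

pairs = make_training_pairs("the cat sat on the mat")
print(pairs[0])  # -> ('the', 'cat')
print(pairs[1])  # -> ('the cat', 'sat')
```

This is why the internet makes such good training material: every sentence ever written is, automatically, a stack of ready-made practice questions.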

One machine was fed all the material on the internet its researchers could find; forget scanning it for specific details, they wanted a super-smart, know-it-all machine: GPT-3! Because GPT-3 was pre-trained with so little supervision, when asked to create, say, a speech from scratch with only a few instructions, it will make mistakes. Just like a human. But the beauty of this specific AI is that it can pick up on examples and corrections given within a prompt and adjust, which sounds like a pretty smart thing to do, for a machine. This way GPT-3 keeps getting better and can become more useful in different fields and for different functions, much like how a person grows and demonstrates more skills and understanding of the world around them; one day they might know how to write a poem or prescribe medicine to a patient. OpenAI sure did an amazing job raising GPT-3 as their child. Pity for the previous two, though GPT-3 probably learned a lot from them too.


GPT-3's Potential

AIs that respond to inputs already exist, but what separates this one from all the others is that it can give different responses to many different types of input on its own. This means one AI can be useful across an array of scenarios and a myriad of settings. That is an incredible contrast to the rest, which, even when fully functional, can only answer dependably within one specific situation, and rarely with varied or creative answers. GPT-3 is not fully functional yet, as it is still growing, learning, and being worked on, but the answers it has produced so far are incredible; they could be considered “leaps and bounds ahead of what we have seen previously.”

Example 1: This AI can possibly make whole apps. A developer simply told GPT-3, “An app that has a navigation bar with a camera icon, “Photos” title, and a message icon. A feed of photos with each photo having a user icon, a photo, a heart icon, and a chat bubble icon.”

Figma and GPT-3 produced this simple app layout from just a few words.

Insane. Whole teams work at social media app companies, yet this AI produced a pretty solid layout from just a few words. Game changer.

Example 2: It can formulate intriguing stories with only a few parameters.

This story was completely written by GPT-3, yet it reads like something I would discuss with my philosophy club. With proper dialogue and rhetoric, this AI produced a whole, and rather interesting, piece about the nature of Twitter.

Example 3: Last but certainly not least, GPT-3 can help you understand your own code when sometimes, it’s just hard and frustrating. (Probably befitting for a kid who doesn't know what they're doing in a coding internship.)

Of course, these three examples don’t come close to capturing GPT-3’s potential; they just show off some of the amazing uses that already exist. And this AI is still just a child, still being worked on and still learning, so for now it can, and often will, make mistakes. That can prove problematic in situations where being wrong is simply not an option, such as medical settings or highly controversial, sensitive topics.

For instance, when it was tested as a therapeutic stand-in, it did quite the opposite of therapy,

from https://www.nabla.com/blog/gpt-3/

or when it was prompted with single words and produced biased, offensive tweets,

from https://twitter.com/an_open_mind/status/1284487376312709120

Unlike humans, who blend emotion with logical thinking in what they say and think, this AI has no real judgment of its own and still needs further development in areas where human intervention is simply necessary.

That’s it for now.

GPT-3 is clearly a groundbreaking evolution in the tech field; nothing has come close to this AI created by OpenAI. It grows and learns much like a human, whether picking up on its mistakes or absorbing more information, and this incredible piece of technology dominates all precedents. Yes, it still makes mistakes and lacks common sense, but it is, after all, an artificial intelligence; it will need some more work. Still, the thought of GPT-3 in its fully functional form, available for common use everywhere, sounds like something straight out of Omelas.

Sources:

@an_open_mind. “#gpt3 is surprising and creative but it’s also unsafe due to harmful biases. Prompted to write tweets from one word — Jews, black, women, holocaust — it came up with these (https://thoughts.sushant-kumar.com). We need more progress on #ResponsibleAI before putting NLG models in production.”. Twitter. July 18, 2020. March 15, 2021. <https://twitter.com/an_open_mind/status/1284487376312709120>

@sharifshameem. “This is mind blowing. With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you. W H A T”. Twitter. July 13, 2020. March 15, 2021. <https://twitter.com/sharifshameem/status/1282676454690451457?s=20>

Anand, Aman. “Deep Learning Trends: top 20 best uses of GPT-3 by OpenAI”. educative. Sep 28, 2020. March 15, 2021. <https://www.educative.io/blog/top-uses-gpt-3-deep-learning>

“Doctor GPT-3: hype or reality?”. Nabla. Oct 27, 2020. March 15, 2021. <https://www.nabla.com/blog/gpt-3/>

Heaven, Will Douglas. “OpenAI’s new language generator GPT-3 is shockingly good — and completely mindless”. MIT Technology Review. July 20, 2020. March 15, 2021. <https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/>

Kozubska, Dana. “OpenAI GPT-3: How It Works and Why It Matters”. DZone. Sep 23, 2020. March 15, 2021. <https://dzone.com/articles/openai-gpt-3-how-it-works-amp-why-it-matters#:~:text=OpenAI%20GPT%2D3%20also%20has,the%20most%20statistically%20expected%20output>

Marr, Bernard. “What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence?”. Forbes. Oct 15, 2020. March 15, 2021. <https://www.forbes.com/sites/bernardmarr/2020/10/05/what-is-gpt-3-and-why-is-it-revolutionizing-artificial-intelligence/?sh=5605f5d4481a>

Piper, Kelsey. “GPT-3, explained: This new language AI is uncanny, funny — and a big deal”. Vox. Aug 13, 2020. March 15, 2021. <https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language>

Wutts, Andrew. “This is insane: OpenAI’s GPT-3 Can Convert Verbal Prompts into Code”. Medium. July 14, 2020. March 15, 2021. <https://medium.com/@AndreWutts/this-is-insane-openais-gpt-3-can-convert-verbal-prompts-into-code-dbad888bc7c4>


