
ChatGPT By Francesco Gadaleta

Francesco Gadaleta PhD is a seasoned professional in the field of technology, AI and data science. He’s the founder of Amethix Technologies, a firm specialising in advanced data and robotics solutions. He hosts the popular Data Science at Home podcast, and over his illustrious career he’s held key roles in the healthcare, energy, and finance domains.

Francesco’s professional interests are diverse, spanning applied mathematics, advanced machine learning, computer programming, robotics, and the study of decentralised and distributed systems.

In this post, Francesco takes us on a deep dive into the GPT family of models. He explores what sets GPT apart from previous LLMs, and considers their limitations. While GPT models are powerful tools with huge potential, Francesco advises using them with caution:

As you know, ChatGPT is a large language model that’s been causing quite a buzz lately. But before we dive in, I want to make one thing clear: I’m not a fan of hype. I think it’s important to have a realistic view of what these models can and can’t do. So let’s take a closer look at ChatGPT and what we should expect from a model like this.

Personally, I wasn’t super excited about the GPT family of models, but I have to admit that I’ve been playing around with ChatGPT lately and it’s been a lot of fun. I’ve used it to create silly poems about my daily life and to joke around with friends and colleagues. But ChatGPT can also be used for more serious tasks, as long as you use it wisely.

That’s the key here: use ChatGPT sparingly and always double-check the answers it gives you. While it’s a powerful model, it’s not a silver bullet that can solve all your problems. In fact, as we’ll see later on, ChatGPT can sometimes start guessing or inventing things that aren’t true, so it’s important to be careful when relying on its responses.

 

Okay, so let’s talk more about the guessing game and how it relates to ChatGPT. The guessing game was introduced by Claude Shannon back in 1951 as a way to measure how predictable language is. The game consists of guessing the next letter in a sequence, and it showed that language can be modelled statistically, one symbol at a time.
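To get a feel for the letter-level game, here is a minimal sketch in Python (an illustration only, not Shannon’s original experiment, which relied on human subjects): a table of bigram counts built from a tiny corpus is already enough to make a plausible guess about the next letter.

```python
# A toy version of the guessing game: predict the next letter from simple
# bigram counts. Illustrative only; the corpus and helper below are made up.
from collections import Counter, defaultdict

corpus = "the quick brown fox jumps over the lazy dog. the dog sleeps."

# Count how often each character follows each other character.
bigrams = defaultdict(Counter)
for current_char, next_char in zip(corpus, corpus[1:]):
    bigrams[current_char][next_char] += 1

def guess_next(prefix: str) -> str:
    """Guess the character most frequently seen after the last character of the prefix."""
    last = prefix[-1]
    if last not in bigrams:
        return "?"
    return bigrams[last].most_common(1)[0][0]

print(guess_next("th"))    # most likely 'e', since 'h' is usually followed by 'e' here
print(guess_next("the "))  # whichever letter most often follows a space in this corpus
```

The same idea scales up: swap letters for tokens, frequency counts for a neural network, and a toy corpus for a large slice of the internet, and you arrive at the GPT family.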

Nowadays, the GPT family of models, including ChatGPT, play essentially the same game, but at the level of words (or, more precisely, tokens). They try to guess the next token given a certain context. And to do that, they need to model the context very well. This is why training these models is, at its core, an exercise in language understanding.
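To make the word-level version concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model (chosen purely for illustration; ChatGPT itself is only reachable through OpenAI’s API, but the underlying mechanism is the same): given a context, the model assigns a score to every token in its vocabulary as a possible continuation.

```python
# Next-token prediction with GPT-2 (illustrative; not ChatGPT itself).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The weather today is"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Scores for the token that would come right after the context.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)

for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

Picking one of the highest-scoring tokens, appending it to the context and repeating is, in essence, how these models generate whole paragraphs of text.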

To learn more about the real-world applications of GPT, head over to our magazine and read the article in full: https://issuu.com/datasciencetalent/docs/the_data_scientist_mag_issue_3_digital

 
