ChatGPT 4.0 is not as smart as it may seem
Like everyone else, I was initially really excited by the new release of GPT-4, until I played with it a little more and formed a less enthusiastic opinion of it.
Did you ever teach in a school or university? Most teachers will encounter a “parrot” student: a student who is really good at repeating what was taught, but who is not so smart and shows no critical thinking. This is the kind of thinking that ChatGPT provides: it is really good at repeating mainstream opinions, sometimes with mistakes, but nothing greater than that.
My son is a PhD student in applied maths at Melbourne University. He mentioned that at one of the conferences an excited mathematician claimed that “GPT can solve mathematical problems” and that “mathematicians’ days are numbered” – we will no longer need mathematicians because AI will replace them.
Well, this is nonsense, and you don’t need to play with ChatGPT to understand why; it is enough to know how ML (Machine Learning) works to see that it is not true. ML needs humans to learn from: its knowledge is based on past data, and it learns what we have already invented. By definition, it can’t invent by itself. My son then mentioned that the “problems” presented to ChatGPT had all been previously solved, and that it couldn’t even manage the thinking of a one-year-old – it just can’t think. In fact, even parrots can think – but ChatGPT can’t!
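The “parrot” point can be made concrete with a toy sketch. The snippet below is not how GPT-4 actually works (it is a deliberately crude bigram model, an assumed illustration), but it shows the underlying idea: a model that learns word-following statistics from past text can only ever emit words it has already seen in its training data.

```python
import random
from collections import defaultdict

# A toy "parrot": learns which word followed which in past text.
# (Illustrative sketch only -- real language models are far larger,
# but the same principle applies: they learn from existing human data.)
corpus = "the cat sat on the mat the dog sat on the rug".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Produce text by repeating patterns seen in the training data."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words):
        options = followers.get(words[-1])
        if not options:  # nothing was ever observed after this word
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Whatever it produces, every word in the output already exists in the corpus: the model rearranges what it was given, it never invents a new word.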
And there is more to it. ChatGPT not only fails to think; it also lies and makes basic mistakes. Not only can it not solve problems, it does not even know how humans solved them. Here is an example – I asked it if it can solve
featured image photo by cottonbro studio
parrot photo by kendra coupland