Apple engineers recently published a paper called “The Illusion of Thinking”. In it, the researchers observe phenomena like accuracy collapse on tasks that surpass a certain complexity threshold, among other things, such as regular models being more accurate than the latest “reasoning” LLMs on easy tasks.
We tend to humanize machines. I don’t know why. Maybe it’s the need to socialize, or Hollywood’s influence. But when we take a closer look at all this AI stuff, we can see it is not even close to a human thinking process. AI is good at taking human knowledge it has already learned and summarizing it or extracting ideas from it, but it is not capable of solving new problems that require information it doesn’t have. It lacks creativity, initiative, and intention. It is just a fancy photocopier.
I hope that new discoveries in this field will lead us to a real Artificial General Intelligence (AGI) someday soon, but with the technology we have today, that’s just a chimera.