Is AI failing? (Part 1)

The statistics are out there, and people say numbers don’t lie: about 80% of all AI projects fail.

Is AI a complete failure? What is the problem with it?

There is no problem at all. Not with AI anyway.

The problem lies in the high expectations that have been built since ChatGPT was launched.

Up until this point in human history, Artificial Intelligence was a subject for academics. Ordinary people had never had such direct contact with this topic.

Suddenly ChatGPT was there, talking to them. Giving coherent answers. Writing organized, rational texts. And ordinary people thought, “Hey, that’s real intelligence!”

Academics knew, of course, that it was just an LLM, a large language model trained to produce those answers and write those texts. But who listens to academics anyway? Fantasy was much more fascinating, much more romantic, than reality.

Within a week of ChatGPT’s appearance, YouTube was filled with videos about how to get rich with AI, how to write the perfect prompts, how to discover life, the universe, and everything. And the answer was no longer a simple “42”. Forget that, Douglas Adams! The answer now came in pages and pages of text written by that LLM.

Soon entrepreneurs (and who isn’t an entrepreneur these days?) started creating their own wonderful projects using Artificial Intelligence. Fanciful ideas, based only on the hype about ChatGPT. These are the projects that are failing now!

This is not the first time this has happened. I am 56 years old now and have been working in software since 1984. Exactly 40 years. And this means that I was in the market in the late 80s, when another shock wave of Artificial Intelligence hit the world: Expert Systems.

I still remember articles like “In five years we will no longer need doctors, because all medical knowledge will be condensed into computers and Expert Systems will perform consultations and diagnose illnesses.”

Almost 40 years later, we still need doctors. And no one talks about Expert Systems anymore.

In those pre-Internet days, we ran up against the limits of available computing power. We had the right idea at the wrong time. We knew it was possible to create elaborate knowledge bases, and we knew it was possible to fill them with information about a specific branch of knowledge. But we had to deal with old PC-XTs with 640 kilobytes of RAM, an 8088 processor, and 20-megabyte hard drives.

We are now in the age of distributed computing. Memory has become cheap. Hard drives have become cheap. And who cares about them when we have server farms and the cloud? Theoretical modeling has also evolved significantly. What on earth is going wrong?

Our expectations have grown too high! No one wants anything less than the solution to life, the universe and everything. And no one accepts “42” as an answer anymore!

It turns out that, with all due respect to the creators of ChatGPT, and to the imitators that spring up from the ground every day, true Artificial Intelligence is still a long way off. It is much more than Machine Learning and LLMs. The researchers who work on this know what I’m talking about. Those who don’t are the ordinary people who were excited by ChatGPT’s answers, because they discovered that ChatGPT could write better than they could. Which, by the way, wasn’t that hard.

Assuming that one day we can truly define what intelligence is and that, based on this definition, we can create something similar, we will still have to face the reality that this artificial intelligence will be like our own. And when I think that we are destroying our planet and reducing the chances of survival of our species day by day, I wonder if we are really that intelligent, and if it is really worth reproducing our intelligence in machines.

Yes, if we ever create a true artificial intelligence, it will be based on our own intelligence, with its strengths and weaknesses. Of course, it will have access to more data than an average person can gather in a lifetime, but it will also have to process it with the equivalent of a human mind.

I had the opportunity to meet one of the pioneers of Artificial Intelligence research, Professor Marvin Minsky.

Professor Minsky, in his later decades, was more concerned with defining intelligence and understanding its processes than with its computational modeling. For him, the limitation of research in this area was that we were not sure what we were trying to model. We needed to understand the human mind before we could replicate it.

With all due respect to Professor Minsky, a brilliant man and a truly worthy human being, I will go a little further.

I believe that we need to improve our intelligence before it becomes worthy of being replicated.

As long as we have hunger, wars, terrorism by groups and states, children dying of starvation, and other evils, are we really that intelligent? Is it really worthwhile to replicate something that is not working well, that is not solving the problems of our world?

Yes, I know. This has all gotten a little too philosophical. That’s how I feel today. But since this is the first article in a series I plan to write on the subject, I’ve allowed myself the freedom to address general questions before getting to the heart of the matter.

I hope this introduction has given you food for thought. I would be very happy to receive comments and criticism. As I said, I am 56 years old, and therefore I come from a generation in which criticism was not seen as a personal attack and we did not get depressed when we were criticized.

See you in the next article!

Ed de Almeida
edvaldoajunior@gmail.com
+55(41)99620-8429
