Is AI actually going to be as radical as most people claim?

I don’t actually address AI specifically that much in the book Stop Nobody Move, and the reason is that, given the time frame the book covers (5 to 10 years ahead), I don’t think there is a reason to. But here is an answer.

Well, maybe AI will be as radical as some (not “most”) people claim. But if it is, say in the sense that we get “general AI” (roughly speaking: AI that can fully substitute for people), I do think that lies further in the future than the scope this book covers.

You should, of course, form your own opinion on AI, and if you do, I suggest you try to get a grip on what AI, as a “technology”, actually does, what it can do, and how it actually functions.

The way I see it is this: we have been able to create digital programs for a long time. They help computers do what computers “can do”: go left or right if you tell them to (it is either 1 or 0). And to create that, we develop “rules”. What AI does is find those rules for us…with us just giving it the input, and the output, it needs in order to find them. We give it a cat picture and tell it “cat”. And we do that a gazillion times. Finally it can tell (not recognize) whether a new picture is a cat or not. That is not “rule-making”, that is “pattern recognition” (as I see it, the best way to understand AI is as “pattern recognition”).
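To make the contrast concrete, here is a minimal toy sketch of the two approaches. It is purely illustrative and not from the book: in the first case a human writes the rule by hand; in the second, the “rule” (a threshold) is found automatically from labeled input/output examples, which is the pattern-recognition idea in miniature. All names and numbers are made up.

```python
# 1) Classic programming: a human writes the rule explicitly.
def is_positive_by_rule(x):
    # The rule "x > 5.0" was chosen by the programmer.
    return x > 5.0

# 2) "AI" as pattern recognition: the rule (here, a threshold)
#    is found from labeled examples (input + desired output).
def learn_threshold(examples):
    # examples: list of (value, label) pairs, label True/False
    positives = [v for v, label in examples if label]
    negatives = [v for v, label in examples if not label]
    # Place the threshold midway between the two class averages.
    mean_pos = sum(positives) / len(positives)
    mean_neg = sum(negatives) / len(negatives)
    return (mean_pos + mean_neg) / 2

# The "gazillion cat pictures" step, shrunk to four examples.
training_data = [(1.0, False), (2.0, False), (8.0, True), (9.0, True)]
threshold = learn_threshold(training_data)

def is_positive_learned(x):
    return x > threshold

print(threshold)               # 5.0 for this data
print(is_positive_learned(7))  # True
```

Nobody ever typed the learned rule in; it fell out of the examples. That, scaled up enormously, is the core of what the cat-picture training does.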

How much of human endeavour can then be understood as “pattern recognition”? Quite a lot, actually (and maybe far more than can be understood as “rule-following”). But in total: do these two terms cover all that we, as humans, actually do? I personally don’t think so. Humans are far more complicated than that (for instance, we have memory). The word “intelligence” (the last term in “AI”) is, at least as I see it, far too complicated to be understood simply as “rules and patterns”. There is something “more” we humans do, so to speak (but we still don’t really seem to know what that is).

A good way to realize what I just said is to try the following: How many “cat pictures” does a computer need to “see” before it can “tell” us that a new cat picture actually is a cat? And how many (few, actually!) “cat pictures” does a human need to see before realizing that the next picture is a cat? For some puzzling reason, humans still tend to outperform computers at this challenge. How come? Well, the answer is: we still do not know. But it seems, at least to me, that something more than digitized pattern recognition, the way computers do it, is involved here.
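The sample-efficiency point can be shown in miniature. The sketch below (a simple nearest-mean classifier on made-up numbers, not anything from the book) is misled by an early outlier when it has only one example per class, and only becomes reliable once it has seen several, whereas a human would shrug off one odd example immediately.

```python
# A nearest-mean classifier: label a point by whichever class
# average it sits closest to. All data values are invented.
def nearest_mean_classifier(train_a, train_b):
    mean_a = sum(train_a) / len(train_a)
    mean_b = sum(train_b) / len(train_b)
    return lambda x: "A" if abs(x - mean_a) <= abs(x - mean_b) else "B"

# Class A is really centred near 2, class B near 8,
# but A's very first observed example is an outlier.
pool_a = [7.5, 2.0, 2.5, 1.5, 2.0]
pool_b = [8.0, 8.5, 7.9, 8.2, 8.4]
test_set = [(2.2, "A"), (8.3, "B"), (7.6, "B")]

def accuracy(k):
    # Train on the first k examples per class, score on the test set.
    clf = nearest_mean_classifier(pool_a[:k], pool_b[:k])
    hits = sum(1 for x, label in test_set if clf(x) == label)
    return hits / len(test_set)

print(accuracy(1))  # with one example per class: 2/3 correct
print(accuracy(5))  # with five examples per class: all correct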

Now, how to handle AI, then? My suggestion is simple: handle it the same way we once upon a time learned the internet. Start small. Do what you can with it right now, and learn. Because of course it already is, and will become, very important. Along the road you will learn more, and do more, and AI will become able to achieve more for you. Eventually (likely), though maybe not that soon, AI will start to disrupt industry the same way the internet has. But it will not happen today, so there is time. And…I don’t think AI will be enough to substitute for us all.

That said: AI will still be of enormous importance to industry in the coming years. So of course we should care about it. AI might, in the end, turn out to be just as radical for us as the “internet” has become – but not yet.