The Future of Humanity
Possible future trajectories of the human species

Should we be concerned about the future of humanity when it comes to AI?

We are concerned. I think I am.

But what should we do about it?

That is a very hard question. At the very least, we should try to slow down the process of building it, so that we have time to make it safe.

What an interesting time to be alive!

Ok. I saw this video the other day. Japanese engineers are presenting AI technology as a possible tool for video games and animation, to create creepy, non-humanlike movements. They show it to the animator Hayao Miyazaki, and he then shares his thoughts on artificial intelligence.

And what are his thoughts?

You should really watch the video, it is an experience.

Exciting!

What they show him makes him think of a friend of his who is disabled, and he just completely shuts them down: “I can’t watch this stuff and find it interesting. Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it.
I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.”*

That is intense, an insult to life itself.

To me, this video somehow says so much about artificial intelligence. It touches on the possibilities and the consequences, and it shows the opposing views. Once people start getting into this topic, the question that often comes to mind is: should we be concerned? The engineers are excited and proud of what they have made; they see the possibilities of having a machine learn and produce something that no human could imagine. Miyazaki relates it to human suffering, to how people feel about and react to this huge upcoming thing.

That makes sense. I think I relate a bit more to the engineers; at least I don’t think it is an insult to life itself. It is a cool thing that could do a lot of good.

Yes, Miyazaki also says
“I feel like we are nearing to the end of times. We humans are losing faith in ourselves.”*

Depressing. Maybe true.

Ok, now I want to give you a little taste of exciting things! A taste of what the future might hold! I have been finding out some interesting things.*

What kind of things?

The human brain has some capabilities that the brains of other animals lack, and that is why we kind of dominate the planet. No claws, no sharp or strong teeth, but we have cleverer brains! They give us an advantage in general intelligence, and that is how we have developed language, technology and complex social organisation.

Yes, yes, but what was it about AI?

Oh, I was getting to that. If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become super powerful! Nick Bostrom says “as the fate of the gorillas now depends more on us humans than the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence.”

That is not that exciting.

The good thing is, though, that we have an advantage! We get to build the stuff!! In principle, we could build a kind of superintelligence that would protect human values.

Then we will of course just do that! It’s just a computer; it can’t do anything it’s not programmed to do, that is the essence of a computer, it is difficult to get it to do something by chance. A computer follows its instructions blindly and is therefore completely predictable. A computer that doesn’t follow its instructions in this manner is broken, not sentient.

Yes, but there is a problem! The control problem: how to control what the superintelligence would do. It will be quite difficult, and it also looks like we will only get one chance. Once an unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.

Yes, if we make a superintelligence. What will we be to each other? I mean, us and them?

If we create machines that can learn, redesign and rebuild themselves, and make themselves better, they won’t need us anymore.

So that is the main problem then?

According to Nick Bostrom, yes! (And Elon Musk and other smart people.) Bostrom says “This is quite possibly the most important and most daunting challenge humanity has ever faced. And whether we succeed or fail it is probably the last challenge we will ever face.”

But we are not that close to getting there today; you already said that.

Making a superintelligent AI? No, what we have now is narrow intelligence: AIs that are very good at specific tasks.

Like what?

The famous ones are Deep Blue, the chess computer that defeated the world champion Garry Kasparov in 1997, and AlphaGo, a narrow AI developed to play the board game Go, which is said to be the most complex game humans have ever made; it has more possible positions than the total number of atoms in the visible universe. Compared to chess, Go has both a larger board with more scope for play and longer games, and, on average, many more alternatives to consider per move.

Again with the atoms in the universe.

Yes, and in 2016 AlphaGo managed to beat the professional Go player Lee Sedol.

I actually heard about that one! The presenters at one point said ‘it wasn’t a human move!’ when AlphaGo made a weird decision.

Then there is also Watson, a question-answering computer system capable of answering questions posed in natural language, initially trained to answer questions on the TV quiz show Jeopardy!. A lot of narrow AIs have been made to help with day-to-day tasks.

I like that. I would love a perfect question-answering AI, one that could answer more complex things than just: where is the closest supermarket?

Well, Google’s search engine is an AI; it learns from what we look for and finds useful information in specialised domains. Then there are a few chatbots that are very good as well! They are trained to have conversations. Self-driving cars are also AIs.

Sweet!

It’s funny: when you google artificial intelligence, you only get images of a ‘kind robot’: light blue, blue, grey, lights, a human hand shaking a robot hand. Because Google’s search engine is an AI, it is showing you itself. Being kind. The colour blue is the colour of trust, peace, order and loyalty (according to ‘colour psychology’). Also freedom, intuition, imagination, inspiration, sensitivity, wisdom, stability and, most importantly, intelligence!

But how long until we make a strong AI then? You know, one that could pose an existential risk to us humans?

Most specialists say that it could be around 20 years away (but every year they add another year, so it has been 20 years away for a while now!). Bostrom thinks it is very likely that it will happen this century.

Sweet.

*