Introduction to Artificial Intelligence

What type of thing is artificial intelligence?

It is a form of synthetic intelligence: man-made, yet real. It is actual, not fake and not simulated. It is also a type of technology, in that it is a computer system that performs some intellectual function. And it is a field, an academic discipline that is a branch of science and of computer science.

Actual and not fake, but is it ever real if it's only a computer?

It is still only a hypothesis. Since the invention of computers, their capability has grown exponentially: humans have increased the power of computer systems, reduced their size, broadened their working domains, and raised their speed.

But how did AI research start?

While exploiting the power of computer systems, curiosity led people to wonder, “Can a machine think and behave like humans do?” The development of AI started with the intention of creating in machines the kind of intelligence that we find, and regard so highly, in humans. It is a branch of computer science that pursues creating computers or machines as intelligent as human beings, taking computers a step further.

You seem smart.*

Stop it.*

Ok, but what exactly is it?

John McCarthy defines it as “the science and engineering of making intelligent machines, especially intelligent computer programs”. It is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think. AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software. A superintelligence, by contrast, is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds.

We have already made AI, but nothing close to a human-level intelligence.

No, and that is because there are two types of AI. The first is weak AI, which is non-sentient computer intelligence, typically focused on a narrow task, so the intelligence of weak AI is limited.

But it's still very smart? We just want to teach it reasoning, learning, and problem solving?

Yes, but only in the specific thing it has been trained for, chess for example. Artificial general intelligence, the strong AI, is a hypothetical artificial intelligence at least as smart as a human. Such an AI could improve itself, so that across successive intervals of increased intelligence it could theoretically achieve superintelligence in a relatively short period of time.

So that is the one we should be wary of?

One or more superintelligences could potentially change the world so profoundly, and at such a high rate, that it may result in a technological singularity. But it's ok! Strong AI does not yet exist.

But it will at some point?

That is highly likely. The prospect of its creation inspires expectations of both promise and peril, and has become the subject of an intense ongoing ethical debate.

What are the goals for this risky invention that people really seem to want?

Well, the basic answer is: to create expert systems, that is, systems which exhibit intelligent behaviour, learn, demonstrate, explain, and advise people; and to implement human intelligence in machines by creating systems that understand, think, learn, and behave like humans.

But I think it’s just to make more technology that can somehow ‘serve’ humans.

In a way, yes! But maybe we won't be able to control the AI; the control problem is a big concern for many researchers.

Like Nick Bostrom! I want to discuss him more. What is the control problem exactly?

It is the hypothetical puzzle of how to build a superintelligent agent that will aid its creators and avoid inadvertently building a superintelligence that will harm its creators. The human race will have to get the control problem right "the first time", as a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it after launch. Potential strategies include "capability control" (preventing an AI from being able to pursue harmful plans) and "motivational control" (building an AI that wants to be helpful).

What kind of harmful plans?

You know, take over the planet, treat humans like slaves to reach some weird goal, that sort of thing. There are also many different approaches to AI, like soft computing.

And soft computing techniques resemble biological processes more closely than traditional techniques. More in the direction of human and machine merging.

Yes, I think that might be the way to go, in terms of humanity getting the most out of it.

AI Safety is also about eliminating existential risk from advanced artificial intelligence.

It is a hypothetical threat: people think that dramatic progress in artificial intelligence could somehow result in human extinction.

The argument for the existence of the threat is that the human race currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control. Just as the fate of some animal species depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

The severity of the risk scenarios is a big debate; it rests on a number of unresolved questions about future progress in computer science.

Yes, a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise, and controlling a superintelligent machine (or even instilling it with human-compatible values) may be an even harder problem than naively supposed.

This guy Stuart Russell says in his book Artificial Intelligence: A Modern Approach (written with Peter Norvig) that the possibility that an AI system's learning function "may cause it to evolve into a system with unintended behaviour" is the most serious existential risk from AI technology.

A lot of leading AI researchers have signed a letter that states just this: that we should work toward AI that will not make us extinct (basically).

But then there is also the singularity. In 1965 I. J. Good originated the concept now known as an "intelligence explosion”: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

So the intelligence explosion and the technological singularity are the same thing, kind of.

The intelligence explosion is a possible outcome of humanity building artificial general intelligence. That kind of intelligence would be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence, the limits of which are unknown!*
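To make that compounding idea concrete, here is a minimal toy sketch in Python. It is purely illustrative and not taken from any of the sources quoted above: the function name, the 1.5x gain per generation, and the ten generations are all assumptions chosen only to show how modest per-step improvements add up.

    # Toy model of I. J. Good's "intelligence explosion": capability is a single
    # number, and each generation of machines designs a successor that is better
    # by a fixed factor. All numbers here are illustrative assumptions.
    def recursive_self_improvement(start=1.0, gain_per_generation=1.5, generations=10):
        """Return the capability level after each design generation."""
        capability = start
        history = [capability]
        for _ in range(generations):
            capability *= gain_per_generation  # each machine designs a better machine
            history.append(capability)
        return history

    levels = recursive_self_improvement()
    print(round(levels[-1], 1))  # ~57.7: ten modest 1.5x steps give roughly a 58x jump

Real self-improvement would not follow a fixed multiplier, of course; the sketch only shows why "a relatively short period of time" is plausible if gains compound.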

When will it happen?

Technology researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in AI will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification!

Maybe it will be possible and desirable to live forever as a machine!

*