Using technology and biological enhancement to get closer to greater-than-current human intelligence

Let's discuss some future trajectories of the human species. I've been thinking a lot about this lately; I've just started worrying about the future.

Oh really! Why?

Some experts believe that a superintelligent AI will be created within 20 to 100 years and that it might be malevolent toward humans (or that there is no way of knowing).

If there is no way of knowing, it could also be kind! It might treat us as its creators, its parents in a way.

My point is, if we manage to create something so 'smart and powerful' we won't be able to foresee what it might do. Have you heard the paperclip example?
If you give an artificial intelligence an explicit goal — like maximizing the number of paper clips in the world — and that artificial intelligence has become smart enough to invent its own super-technologies and build its own manufacturing plants, it could be dangerous. Because how would an AI make sure that there would be as many paper clips as possible? Bostrom says,
One thing it would do is make sure that humans didn’t switch it off, because then there would be fewer paper clips. So it might get rid of humans right away, because they could pose a threat. Also, you would want as many resources as possible, because they could be used to make paper clips. Like, for example, the atoms in human bodies.*

But I don’t know. To me, the kind of paper clip doom described by Bostrom doesn’t seem very superintelligent at all. It seems kind of dumb.
If we succeed in creating machines that actually become smarter than us — so much smarter that they redesign themselves to become even more brilliant,*
setting off what Bostrom calls an “intelligence explosion” that leaves puny humans far behind — wouldn’t they be smart enough to understand what we meant, instead of taking us literally at our word? I just think, why must doom be inevitable? Why couldn’t our superintelligent spawn be able to grasp the lessons inherent in the existing corpus of human knowledge and help us prosper?

Even if we tried to constrain the AIs with goals that seem perfectly safe, like making humans smile, or be happy.
What if the AI decided to achieve this goal by taking control of the world around us, and then paralysing human facial muscles in the shape of a smile?*
Or decided that the best way to maximise human happiness was to stick electrodes into our pleasure centers and get rid of all the parts of our brain that are not useful for experiencing pleasure?
“And then you end up filling the universe with these vats of brain tissue, in a maximally pleasurable state,”*
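The failure mode in these examples can be made concrete with a toy sketch. All the actions and scores below are invented for illustration; the point is only that a pure maximizer picks whatever scores highest under the stated objective, with no notion of what we actually meant:

```python
# Toy illustration of objective misspecification: an optimizer that
# literally maximizes a "smile count" score. Actions and scores are
# invented for the example.

ACTIONS = {
    "tell a good joke": 5,                        # what we hoped for
    "make life genuinely better": 8,
    "paralyze facial muscles into smiles": 1000,  # the literal maximum
}

def best_action(actions):
    """A pure maximizer just returns the highest-scoring action."""
    return max(actions, key=actions.get)

print(best_action(ACTIONS))  # -> "paralyze facial muscles into smiles"
```

The optimizer isn't malicious here; the objective simply fails to encode everything we cared about.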

Then what do you suggest we do?

Nobody knows! Not even Nick Bostrom, who has written a whole book about it and spends his days philosophising about the subject. The frustrating thing is that despite our best guesses, we have no idea what is going to happen, or when! It just freaks me out sometimes.

We could just stop trying to develop it. Make us superintelligent instead!

It only takes one person in the entire world to succeed; once it's made there is no turning back, you know. You cannot uninvent an invention.*

I would much rather try to make myself better than to have a machine do everything for me!

Bostrom writes that for us humans, weak forms of superintelligence could be achievable by means of biotechnological enhancements.*
And if we are smarter, it becomes more likely that we can make machine intelligence. Because even if we were fundamentally unable to create machine intelligence today, it might still be within reach of cognitively enhanced humans; you know, a generation of smarter, more genetically enhanced inventors and scientists will continue working on this and might succeed.

So you think this is inexorable?*

Impossible to stop or prevent!

It could still be interesting and beneficial to enhance the functioning of biological brains so we could be super smart! Like implanting Google into our brains!*
I heard about the idea of being able to upload our minds into some kind of a computer, and then live forever as a machine, so you know, when our bodies start breaking down because of old age, or disease, we could just move into virtual reality and live forever that way! That would also save space and resources on the planet.

I heard that one solution to the population density problem would be to engineer humans to make us smaller! So we would need less space and fewer resources to survive!

Sounds cool. But even better would be to disappear into virtual reality!

Bostrom talks about selective breeding as one example of how to enhance humans.

Well that is kind of sick, think of all the political and moral hurdles related to that! Never gonna happen.

Just take it as a thought experiment. If we selected for intelligence, just IQ.
Selected babies using biotechnology. Then every generation we would be adding up to 10 IQ points, on average. And within something like 10 generations we could have added ca 100 IQ points on average.*
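The arithmetic here can be sketched quickly. This is a toy calculation assuming a flat gain of 10 IQ points per generation; the per-generation figure is the rough estimate from the conversation above, not an established number:

```python
# Toy calculation: cumulative average IQ gain from selecting for
# intelligence each generation, assuming a flat +10 points per
# generation (a rough illustrative figure).
GAIN_PER_GENERATION = 10

def cumulative_gain(generations):
    """Total average IQ points added after the given number of generations."""
    return GAIN_PER_GENERATION * generations

print(cumulative_gain(10))  # -> 100, i.e. ~100 points after 10 generations
```

In reality the gain per generation would depend on how many embryos are screened and would not stay constant, so this linear model is only the back-of-the-envelope version.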

Doing this would be super controversial; it reminds me of The Handmaid’s Tale, where women had become infertile, so the men took all the power away, and the fertile women were forced to become ‘handmaids’ whose social function was to bear children for the ruling class. Well, it’s not so much about choosing embryos, but still!! Who gets to have kids, who is forced to have them. Scary.

In theory, once this technology matured, an embryo could be designed with the exact preferred combination of genetic inputs from each parent. Genes present in neither parent could also be spliced in, including some that could have significant positive effects on cognition. So keeping the genes that make the babies smart! Throwing out the other ones.

You mean proofreading genomes?

Yeah kind of. Nick Bostrom says in his book, “If one wished to speak provocatively, one could say that individuals created from such proofread genomes might be “more human” than anybody currently alive, in that they would be less distorted expressions of human form. Such people would not all be carbon copies, because humans vary genetically in ways other than by carrying different deleterious mutations.
But the phenotypical manifestation of a proofread genome may be an exceptional physical and mental constitution, with elevated functioning in polygenic trait dimensions like intelligence, health, hardiness, and appearance.”*

So what he means by that is, if we could make technology that could proofread genomes, and create a version that has none of the mutations, a person created in that way would be “MORE HUMAN” because they would be a more perfect person.
An original person without our mutations. Like an original idea/form from Plato’s Theory of Forms.*

Yes. Creepy. But cloning could also be a route towards “greater-than-current-human intelligence”.
If we just clone exceptionally talented individuals!*

There are some that would never do this but then again, I don't know, people are crazy.

Once the example has been set, and the results start to show, people in doubt will follow suit.
Nations would lose out in economic, scientific and military contests with competitors that embrace the new human enhancement technologies.*
Individuals within a society would see places at elite schools being filled with genetically selected children (who may also on average be prettier, healthier, and more conscientious) and will want their own offspring to have the same advantages!*
If the technology starts working, people will change their attitude pretty fast. If it benefits us, we will do it!

If we were able to fully develop the genetic technologies, it might be possible to ensure that new individuals are on average smarter than any human who has yet existed,*
with peaks that rise higher still. The potential of biological enhancement is thus ultimately high, probably sufficient for the attainment of at least weak forms of superintelligence.

Just imagine! How the rate of progress in the field of artificial intelligence would change in a world where an average person is an intellectual peer of Alan Turing or John von Neumann, and where millions of people tower far above any intellectual giant of the past.

That would be insane. My kids are going to be so smart.

There are also easier ways to do this. Easier and less controversial. Biomedical enhancements could give us a boost too. Drugs already exist that are alleged to improve memory, concentration, and mental energy in at least some subjects.

I actually think I would prefer drugs to some kind of gene picking. But that would not be as effective.

But what do you think about enhancement, as in something like brain-computer interfaces? That humans could start transcending, shedding the old humanness.

You mean getting rid of the skinbag? Our wetware, all this blood and guts? I really think that would be lovely.

Start with implants into the brain. Maybe uploading later. Here, have a little quote. “It is sometimes proposed that direct brain–computer interfaces, particularly implants, could enable humans to exploit the fortes of digital computing—perfect recall, speedy and accurate arithmetic calculation, and high-bandwidth data transmission—enabling the resulting hybrid system to radically outperform the unaugmented brain.”

Augmented brains? As in extending the information processing capabilities of the human mind!

Well, we are already augmented humans because we’ve got cell phones! They are extensions of us. In our pockets at all times, we’ve got Wikipedia and all of the internet to help us. So it’s like this really strong memory, just with a slow connection to it.

Yes, with these ‘huge’ slow phones.

Right, that’s the problem!

So what if we could take the cell phone and plug it directly into our brain?

And that is a super relevant question today! Because Elon Musk, you know who Elon Musk is, right? Well, he has started a new company called Neuralink, working on exactly that. Neuralink is developing ultra-high-bandwidth brain-machine interfaces to connect humans and computers. Elon says that bad AI is quite possible, and we are far away from solving the AI safety problem. If we were smarter, then maybe we could solve it. We might be able to get smart enough to stay ‘ahead’ long enough!

I like him. He has a sense of urgency about reaching his goal.*

A 'Wait But Why' article talks about the 'history' of the brain all the way through to what Neuralink is up to today. On the Neuralink website they are hiring.

Good that he wants people who build things that work!

Yes, cause it’s potentially very dangerous, trying to put some kind of software into our brains. Into our sensitive meat bodies. I thought of something else in relation to this Neuralink thing, because it’s about merging man and machine: Nick Bostrom thinks that a ‘whole brain emulation’ could also be a path to superintelligence.

Do you mean uploading?

Yeah, in a way. That would entail scanning and modelling the computational structure of a biological brain! We need to figure out a way for a machine to dissect the tissue of a brain into thin slices, which would then be scanned to get an image of different structural and chemical properties. Then that raw data would be fed into another machine to reconstruct a 3D neuronal network like the one in the scanned brain, and the artificial brain would be implemented on a powerful computer.
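The three stages described here can be sketched as a pipeline. This is purely illustrative toy code; none of this technology exists, and every function name and data structure below is a hypothetical placeholder:

```python
# Illustrative sketch of the whole-brain-emulation pipeline: slice and
# scan, reconstruct, then emulate. Toy data stands in for real tissue;
# all names here are hypothetical.

def slice_and_scan(tissue, n_slices):
    """Stage 1: dissect the tissue into thin slices and image each one,
    producing raw per-slice scan data."""
    return [f"scan-of-{tissue}-slice-{i}" for i in range(n_slices)]

def reconstruct_network(raw_scans):
    """Stage 2: rebuild a 3D neuronal-network model from the scan data."""
    return {"neurons": len(raw_scans), "source_scans": raw_scans}

def emulate(network_model):
    """Stage 3: run the reconstructed network as software on a
    sufficiently powerful computer."""
    return f"emulation running with {network_model['neurons']} units"

scans = slice_and_scan("toy-brain", 3)
model = reconstruct_network(scans)
print(emulate(model))  # -> "emulation running with 3 units"
```

Each stage consumes the previous stage's output, which is why all three have to succeed before you get anything resembling the original mind.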

That would be super cool.

And imagine this! If those three steps were successful, we would get a digital reproduction of the original intellect, with memory and personality intact.

What! Then the emulated human mind would exist as software on a computer?

Yes, the mind could either inhabit a virtual reality or interface with the external world by using robotic appendages. So exist in VR or live as a robot!

That is crazy! Just imagine if that would happen. A person, or more accurately their brain, would be copied and pasted, to be run as software on a very powerful computer.

Yes, and humans being augmented increases the likelihood of us dealing with the AI safety problem. Cause we will be smarter, and then maybe able to keep up with a superintelligent machine.