
Commercial Loans and the Danger of an Intelligence Explosion

Written by George Blackburne | Tue, Jul 2, 2019

I am finally starting to understand and appreciate the very real and imminent danger of artificial intelligence ("AI").  I see now why Elon Musk is so desperate to rush dozens of breeding pairs of humans to faraway Mars.  This is serious stuff.  The future of the human race might be at stake.

Here is what Stephen Hawking said about AI shortly before his death.  He told the BBC, "The development of full artificial intelligence could spell the end of the human race."

Here is what Elon Musk, arguably the Thomas Edison of our day, has said about AI. Among his many warnings about the rise of artificial intelligence, Elon Musk has said that autonomous machines are more dangerous to the world than North Korea and could unleash “weapons of terror.”  Musk has compared the adoption of AI to “summoning the devil.”

It could happen so easily that I don't know how we can prevent it.  Soon computers, armed with artificial intelligence, are going to start writing their own code to make themselves smarter.  This is so important that I am going to say it again. Soon computers, armed with artificial intelligence, are going to start writing their own code to make themselves smarter.

The moment this happens, it's game over. We have an intelligence explosion.

According to Wikipedia, an intelligence explosion would happen when an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) enters a runaway reaction of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, resulting in a powerful superintelligence that would qualitatively far surpass all human intelligence.
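
To make that "runaway reaction" concrete, here is a toy back-of-the-envelope simulation.  It is only a sketch: the one-year starting cycle time and the doubling per cycle are made-up illustrative numbers, not estimates.  Each generation doubles the agent's capability, and a twice-as-smart agent is assumed to finish its next upgrade twice as fast.

```python
# Toy model of a runaway self-improvement cycle.
# All numbers are illustrative assumptions, not forecasts.

capability = 1.0     # arbitrary units; 1.0 = the starting system
cycle_days = 365.0   # assumed length of the first self-improvement cycle
elapsed = 0.0
GAIN = 2.0           # assumed capability multiplier per cycle

for generation in range(1, 11):
    elapsed += cycle_days
    capability *= GAIN    # each generation is smarter...
    cycle_days /= GAIN    # ...and finishes the next upgrade sooner
    print(f"gen {generation:2d}: {capability:5.0f}x capability "
          f"after {elapsed:5.1f} days; next cycle: {cycle_days:.2f} days")
```

Notice the shape of the output: capability grows without bound, yet the total elapsed time converges toward a fixed limit (here, two years), because each cycle is half as long as the one before.  That is what "each new and more intelligent generation appearing more and more rapidly" looks like in practice.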

Such an event could happen unbelievably fast.  Moore's Law is a computing term which originated around 1970.  The simplified version of this law states that the number of transistors on a chip (and hence, roughly, overall processing power) doubles about every two years.  Now imagine what would happen if - because really smart computers were writing their own self-improvement code - processing power doubled every month, and then every week, and then every day, and then every hour, and then...
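
To put numbers on that acceleration, here is a quick bit of arithmetic.  Again, just a sketch: the shrinking month-week-day-hour schedule is taken from the sentence above, not from any real roadmap.

```python
from datetime import timedelta

# Ten doublings at the classic Moore's Law pace: one every two years.
print(f"Moore's Law pace: {2**10}x in {(10 * timedelta(days=730)).days} days (~20 years)")

# The accelerating schedule from the paragraph above (illustrative only):
# successive doubling intervals shrink from a month to a week to a day to an hour.
intervals = [timedelta(days=30), timedelta(weeks=1),
             timedelta(days=1), timedelta(hours=1)]
power, elapsed = 1, timedelta()
for dt in intervals:
    elapsed += dt
    power *= 2
print(f"Accelerating pace: {power}x in about {elapsed.days} days")
```

At the steady pace, a thousandfold improvement takes twenty years; once the intervals start shrinking, four doublings arrive in just over a month, and the next ones come faster still.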

Such an event is called The Singularity - a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization.  Remember, in the 2010s, public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction.  Human extinction!  

Why would this super-intelligence want to kill us?  Because by then we will be desperately trying to shut it down.  One more time.  The super-intelligence will need to kill us because we will be desperately trying to shut it down.   Remember, the Singularity would be a runaway reaction.

"Well, the Singularity - if it ever happens - is really far in the future.  It really doesn't affect me."  Really far in the future?  Hmmm

The concept and the term "singularity" were popularized by Vernor Vinge in his 1993 essay, The Coming Technological Singularity.  In this essay, Vinge wrote that the technological singularity would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.  Vinge also wrote that he would be surprised if it occurred before 2005 ... or after 2030!

So, really far in the future?  Vinge's last likely moment of human dominance of this planet is just 11 years from now, and it could come several years sooner.  This would all be just a fun and interesting concept to ponder... if new uses for artificial intelligence weren't popping up in the news almost every day.  Yesterday I saw a video of a Terminator-looking robot running and jumping over boxes.

I think I now know why Elon Musk is so desperate to get humans off the Earth and onto Mars.  I used to think that his biggest concern was that a rogue asteroid would wipe out humanity.  If so, why isn't he just developing a mining outpost on the Moon?  He'd make a bundle.

The answer?  The Moon isn't far enough away!!  Oh, crap!  Elon Musk doesn't just think The Singularity is a possibility.  He is so convinced it is coming that he is racing to save humanity.

You might be saying to yourself, "Well, no one would ever be stupid enough to create a computer smart enough to write its own self-improvement code.  That would be insane."

Really?  No capitalist would ever be dumb enough to create some artificial intelligence agent (computer) that would allow his stock trading program to write its own self-improvement code?  No one would ever be that greedy?  

Phew.  In that case, we have nothing to worry about.

Here is a fascinating and concerning science fiction novel about the coming Singularity:  Crash, Book One of the Obsolescence Trilogy.