Some Techies Think 'Terminator' Could Be a Reality By 2045, But They're Hopelessly Wrong


It is amazing that while we are in the midst of ongoing economic, political, and environmental crises, some people are still very worried about the singularity.     

For the uninitiated, the technological singularity is a predicted event in which technology becomes so advanced that it will not only overtake humanity (think The Terminator and The Matrix) but will be able to transform matter and change life on Earth as we know it. The idea was popularized after the M.I.T.-educated inventor Ray Kurzweil published The Singularity Is Near in 2005.

As the argument goes, artificial intelligence (AI) is becoming more intelligent, and by around 2045 it will be so advanced that it surpasses human intelligence. The machines will then reproduce themselves and become exponentially smarter in an intelligence "explosion," eventually becoming trans-matter, god-like entities.

So why won’t this actually happen? The problem with many proponents of the singularity is their use of the term "intelligence." As Discover summarizes, "'Intelligence' is a nebulous catch-all like 'betterness' that is ill-defined. The ambiguity of the word renders the claims of Singularitarians difficult/impossible to disprove."

There are two very different ways to think about intelligence. One way is to think of it as "processing power" in analyzing data and executing tasks; this is essentially how proponents use it. The other way might be called "subjectivity": the ability to examine and change one’s own thinking, assumptions, beliefs, and actions.

To singularitarians, the fact that technology will have massive processing power is enough for the rise of AI. It is not. To be able to take over, technology must have subjectivity: it must act entirely on its own, with no pesky programmers able to override the new robot overlords.

Interestingly, this distinction was pointed out almost four centuries ago by Descartes (albeit controversially, since he included animals among his examples of machines):

"...Although machines can perform certain things as well as or perhaps better than any of us can do ... they did not act from knowledge, but only from the disposition of their organs. For while reason [in humans] is a universal instrument which can serve for all contingencies, [machines’] organs have need of some special adaption for every particular action."

That is, even though technology may well exceed human intelligence in processing power, it remains at the whim of the human programmer who defines its actions in code. To use Descartes’s example of a clock, even if we imagine a super-intelligent clock that can ingest millions of terabytes of data, it is nonetheless coded to always be a clock.
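To make the distinction concrete, here is a minimal sketch in Python (a hypothetical toy, not anyone’s real system; the class and method names are invented for illustration): no matter how much data the clock processes, its one visible behavior is fixed by its code.

```python
from datetime import datetime, timezone

class SuperIntelligentClock:
    """A toy 'smart' clock: it can ingest arbitrary amounts of data,
    but its externally visible behavior is fixed by its code."""

    def __init__(self):
        self.data_ingested = 0  # bytes of data "analyzed" so far

    def ingest(self, data: bytes) -> None:
        # Massive processing power: it can analyze as much data as we feed it...
        self.data_ingested += len(data)

    def tell_time(self) -> str:
        # ...but no amount of input changes what it does: it tells the time.
        # It cannot decide to become something other than a clock, because
        # that choice exists nowhere in its program.
        return datetime.now(timezone.utc).isoformat()

clock = SuperIntelligentClock()
clock.ingest(b"x" * 10_000_000)  # feed it 10 MB; imagine terabytes instead
print(clock.tell_time())         # still just a clock
```

Processing more data makes the toy clock "smarter" only in the processing-power sense; subjectivity would require it to rewrite its own purpose, which its code never allows.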

The singularitarians have two different solutions for this. First, that self-programming machines will be created: since such machines could change their own programming, they would have subjectivity. The problem is that a machine that can re-program itself (and still continue to function) has not even been solved in theory, let alone been developed enough to be rolled out by 2045.

Their second solution is to digitally create a human brain from the ground up. As Tom Hartsfield points out, even with a billion transistors, we still cannot replicate a worm brain, which has about 300 neurons, roughly 0.0000004 percent of the human brain’s 86 billion. Even if we could replicate a worm brain with our current technology, by Kurzweil’s own law of accelerating returns it would take millions of years, not a few decades, before we have enough computing power to create a human brain.
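The arithmetic behind that gap is easy to check. A quick sketch, assuming the textbook figures of 302 neurons for the C. elegans worm and roughly 86 billion neurons for a human brain:

```python
# Standard estimates: C. elegans has 302 neurons; a human brain has ~86 billion.
worm_neurons = 302
human_neurons = 86_000_000_000

ratio = worm_neurons / human_neurons
print(f"{ratio:.2e}")          # 3.51e-09, i.e. about 3.5 billionths
print(f"{ratio * 100:.7f} %")  # about 0.0000004 percent
```

In other words, a worm brain is roughly nine orders of magnitude smaller than a human one, and we cannot yet simulate even that.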

The lesson we should learn is that for the foreseeable future, we can relax about the singularity. Until AI can overcome the problem of subjectivity, we do not need to worry about any robot uprisings.