SAN FRANCISCO – “We don’t die because the laws of physics require us to die – we die because we’re not currently smart enough not to die ... Why settle for predicting human behavior when we can re-engineer the human genome? ... An exponential function is a multiplicative derivative ... Solving the problem of friendly artificial intelligence is the key to saving the world.”
Humanity’s inevitable future, according to the speakers at the 7th Annual Singularity Summit held this past weekend in San Francisco, is one in which everyone lives forever, poverty and violence are relics of the past, and the intelligences of man and machine fuse to usher in a new “human-machine civilization.”
Several hundred futurists gathered for the summit, founded in 2006 by inventor Ray Kurzweil, PayPal billionaire Peter Thiel, and the Singularity Institute as a forum to promote discussion of the implications of the technological singularity – a hypothetical event when superhuman artificial intelligence (AI) comes into being and forever changes the human experience.
"We have a very good chance of making it through"
Kurzweil took the stage on Saturday afternoon to deliver the summit’s keynote address. “The singularity is near,” he began quietly, a grin slowly spreading across his face. “No, it isn’t here yet, but it’s getting nearer,” he said to laughs and applause. He spoke extemporaneously for over an hour, his presentation a mix of statistics, time series graphs, personal anecdotes, and predictions.
Computing ability and technological innovation have been increasing exponentially over the past few decades, he argued, alongside similar increases in life expectancy and income. “All progress stems from the law of accelerating returns,” he proclaimed. He discussed his latest project – an attempt to reverse-engineer the human brain. “Intelligence is at the root of our greatest innovations: genetics, nanotechnology, and robotics. Once we master artificial intelligence, unimaginable new frontiers will open up.”
After his talk, a man stood up and looked Kurzweil in the eye. “I’m in my 60s like you,” he said, his voice faltering. “Do you think we’ll make it?” It took me a few seconds to realize he was asking about immortality, and in that moment I felt a chill. “Life expectancy tables are based on what happened in the past,” responded Kurzweil without skipping a beat. “In 25 years, we’ll be able to add one year of life for every year that passes. We have a very good chance of making it through.”
The attendees skewed heavily male: the most common demographic was white men, aged 20-40, with some technical background. There was also a smaller contingent of “tech groupies” – bloggers, journalists, life coaches, and the like.
Friendly AI is the key to saving the world
Other speakers at the summit addressed various issues that fell broadly under the rubric of preparing for the singularity. Temple Grandin, the autistic professor of animal science popularized in Oliver Sacks’ book An Anthropologist on Mars, talked about the need to “tap into local resources to turn kids onto science from a young age.” In a sentiment characteristic of Silicon Valley’s pervasive libertarian bent, Grandin spoke of her disillusionment with government as a tool for change. “We need private industry to stimulate innovation,” she argued. “Silicon Valley needs to reach out to quirky, talented kids in Kansas who are going nowhere.”
Luke Muehlhauser, executive director of the Singularity Institute (SI), admitted that the likelihood of artificial general intelligence being conducive to the goals of humanity was low, and argued that SI’s goal was to increase the probability of a positive singularity as opposed to a negative singularity. “The key to saving the world is to develop new math to harness the power of friendly artificial intelligence.”
Harvard professor and psychologist Steven Pinker spoke about his new book, The Better Angels of Our Nature: Why Violence Has Declined, presenting his controversial thesis that the modern era is the least violent in human history. He offered archaeological evidence about the prevalence of violent deaths in prehistoric societies, data on decreasing homicide in Europe over the past millennium, and pointed to the absence of wars between major world powers since World War II.
Pinker posited that culture and modernity were the reason for the decrease in violence and concluded with a surprisingly moralistic message: “We should be grateful to the institutions of civilization and the Enlightenment.” Curiously, Pinker made no mention of the “singularity” in his presentation. (In fact, as of 2008, he is on record as stating that “there is not the slightest reason to believe in a coming singularity” and that the singularity will “never, ever” occur.)
Kurzweil, who spoke immediately after Pinker, presented Pinker’s findings as an additional data point for his own argument that health, wealth, and happiness have all been exponentially increasing as per the law of accelerating returns and will continue to do so after the singularity.
I found this to be the most striking irregularity in Kurzweil’s argument. If the singularity is by definition a discrete, punctuated event that drastically alters the course of human history, how do we know that the increases in health, wealth, and happiness will continue in the same direction on the other side? Doesn’t a position of epistemic uncertainty – and the very meaning of singularity – require that we say, “I don’t know” when asked what life will be like on the other side?
I caught up with Kurzweil at the afterparty on Saturday evening, and asked, “Will the trends continue to increase?” He grinned mischievously, pausing for a moment before responding, “They will.” “Is that a prediction?” I pressed. “Yes,” he responded.
An aura of exuberance
There was a striking aura of exuberance among the attendees at the Singularity Summit – a certainty that earth-shattering changes were on the horizon, that the future would be radically different from the present, almost unimaginably so, and that it would be better.
Muehlhauser, in his talk, cited Arthur C. Clarke’s famous dictum, “Any sufficiently advanced technology is indistinguishable from magic” – only he suggested substituting intelligence for technology, since the former is the necessary prerequisite for the latter. “Intelligence is like magic except it’s real,” he argued. “If you have enough intelligence, you can figure out how to give yourself all the other powers you might want, including more intelligence.”
Kurzweil noted that people often think the world is getting worse and blame technology for it, then proceeded to refute both assumptions. “This is the quiet before the storm for 3D printing,” he exclaimed. “The world of physical things will soon be an information technology. We’ll have greater abundance and less fighting over resources.”
After the singularity, Kurzweil predicts, there won’t be a divide with humans on one side and machines on the other. “Humans and AI will be fused together. Our destiny is to be a human-machine civilization.”
At times, I wondered whether the exuberance bordered on the irrational. For many attendees, the singularity seemed to promise a future free of inequality, where free markets and unprecedented technological innovation would lead to an incredible quality of life for everyone on the planet. Laura Deming, 18, a Thiel fellow interested in anti-aging research, was asked whether the rich stood to gain disproportionately from life-extending technologies, and whether this might only perpetuate existing inequalities. Seeming genuinely confused, Deming responded, “Malevolent rich people are not inherent in the system.”
Often, during the summit, I thought of the other meaning of singularity – the hypothesized point at the center of a black hole from which no light can escape to the outside world, that magical concept that had utterly enthralled me as a child when my dad first told me about it.
When confronted with something unknowable, we often end up projecting our deepest hopes and desires upon that unknown. Is it the same with the technological singularity and the movement it has inspired? Is it nothing but a foil for the greatest hopes and aspirations of the techies, an attempt to create meaning out of nothingness, a hope that not only does it get better, but that it promises to be one helluva ride?
I am reminded of something Kurzweil said: “We have a very good chance of making it through.” Making it through – to the other side of the singularity, to immortality, to happiness – to infinity, and beyond?
“I would be surprised if it doesn’t happen by 2030,” says Vernor Vinge, the mathematician and science fiction writer often credited with popularizing the concept of the singularity. Yet if the singularity is truly inevitable and impending – and is set to drastically alter our way of life – why are most of us unaware of (or impervious to) its implications?
“That the singularity will happen is a fact, independent of what anyone does or thinks,” responds Vinge. “It’s like watching an avalanche rolling down a hill. There’s a good chance it’ll be a mellow experience. We just want people to think about the consequences.”
Hamdan Azhar is a data scientist at a tech startup in Silicon Valley. He writes about public policy and the intersection of technology and society.