The artificial intelligence boom is here. Here's how it could change the world around us.


A future with highways full of self-driving cars or robot friends that can actually hold a decent conversation may not be far away. That's because we’re living in the middle of an "artificial intelligence boom" — a time when machines are becoming more and more like the human brain. 

That's partly because of an emerging subcategory of AI called "deep learning."

It's a process that loosely mimics the human brain's neocortex, the region that handles language processing, sensory perception and other functions.

Essentially, deep learning is how machines teach themselves to recognize objects. It's often used to help a self-driving car spot a nearby pedestrian, or to let Facebook know that there's a human face in a photo. And though it only caught on in recent years, researchers have already applied deep learning in ways that could change our world.
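To make that a little more concrete, here's a toy sketch of what "learning to recognize objects" looks like in code. It uses the PyTorch library and made-up placeholder images; it's an illustration of the general technique, not the code inside any actual self-driving car or Facebook system.

```python
# A toy deep learning model: a small convolutional neural network that
# learns to map raw image pixels to object labels. (Illustrative only;
# real systems are far larger and train on millions of images.)
import torch
import torch.nn as nn

class TinyObjectRecognizer(nn.Module):
    def __init__(self, num_classes=2):  # e.g. "pedestrian" vs. "no pedestrian"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learns edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learns part-like shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)                  # stack of learned feature detectors
        return self.classifier(x.flatten(1))  # a score for each object class

# One training step on a batch of 64x64 RGB images with known labels.
model = TinyObjectRecognizer()
images = torch.randn(8, 3, 64, 64)  # placeholder data standing in for photos
labels = torch.randint(0, 2, (8,))  # placeholder "what's really in the photo"
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()                     # the "learning" in deep learning
```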

Here are some examples of what it can do now:

Understanding the Earth's trees


NASA's Earth Exchange used a deep learning algorithm on satellite images to figure out how much land is covered by trees across the United States. That information could improve scientists' understanding of American ecosystems and help them model climate change's effects on the planet more accurately.

NASA essentially uses deep learning to solve a specific problem: As more and more data becomes available, it becomes harder (and more time-consuming) for scientists to interpret it. Files used in the project can be several petabytes in size. To put that in perspective, one petabyte is equivalent to 1,000 terabytes, and a one-terabyte hard drive can hold about 500 hours of movies or 17,000 hours of music.
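For the curious, that arithmetic checks out, and it's easy to verify in a few lines of Python. The three-petabyte project size below is a made-up stand-in for "several petabytes":

```python
# Back-of-the-envelope check of the storage figures above.
PETABYTE_IN_TB = 1_000        # 1 petabyte = 1,000 terabytes

MOVIE_HOURS_PER_TB = 500      # assumes roughly 2 GB per hour of video
MUSIC_HOURS_PER_TB = 17_000   # assumes roughly 60 MB per hour of audio

project_size_pb = 3           # hypothetical stand-in for "several petabytes"
project_size_tb = project_size_pb * PETABYTE_IN_TB

print(project_size_tb * MOVIE_HOURS_PER_TB)  # 1,500,000 hours of movies
print(project_size_tb * MUSIC_HOURS_PER_TB)  # 51,000,000 hours of music
```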

"We do large-scale science. We're talking about big data," said Sangram Ganguly, a senior research scientist at NASA Ames Research Center and the BAER Institute. "As the data is increasing, there's a need to merge some [conventional] physics-based models with machine learning models." 

Translating the world 

The Google Translate app's ability to translate text from photos comes from deep learning techniques. The app recognizes the individual letters or characters in an image, figures out the word those letters make, then looks up the translation.
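Here's a rough sketch of that three-step pipeline in Python. Every function name and the tiny stub dictionary are hypothetical stand-ins for illustration; Google hasn't published its system in this form, and the real models are trained neural networks, not hand-written rules.

```python
# A hypothetical three-step photo-translation pipeline (stand-in stubs,
# not Google's code): find characters, assemble a word, translate it.

def recognize_characters(photo):
    """In the real app, a trained network returns the letters it sees."""
    return ["g", "a", "t", "o"]  # stub: pretend the photo shows "gato"

def assemble_word(characters):
    """Group the recognized characters into a candidate word."""
    return "".join(characters)

def look_up_translation(word, target_language="en"):
    """Look up the recognized word in a (toy) Spanish-English dictionary."""
    dictionary = {"gato": "cat", "perro": "dog"}  # stub for illustration
    return dictionary.get(word, word)

photo = "street_sign.jpg"  # placeholder input
word = assemble_word(recognize_characters(photo))
print(look_up_translation(word))  # prints "cat"
```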

This process is more complicated than it seems. Words in the real world often appear under messier conditions than crisp fonts on a computer screen: They're frequently "marred by reflections, dirt, smudges and all kinds of weirdness," as Google put it on its research blog. Its software therefore also had to learn "dirty" depictions of text, a process that requires a robust database of photos to reference.
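One common way to build that kind of training set (an educated guess on our part, since Google's post doesn't spell out its exact recipe) is to take clean images of text and deliberately corrupt them, so the model sees plenty of "dirty" examples before it ever meets a real street sign. A toy version:

```python
# Toy data augmentation: corrupt a clean text image with the kinds of
# damage seen in the wild, then train on both the clean and dirty copies.
import torch

def make_dirty(clean_image, noise_level=0.2):
    """Add random speckle and an uneven-lighting smudge to a text image."""
    noise = torch.randn_like(clean_image) * noise_level  # dirt / sensor noise
    smudge = torch.rand(1) * 0.3                         # uneven lighting
    return (clean_image + noise + smudge).clamp(0.0, 1.0)

clean = torch.rand(1, 32, 128)  # placeholder: a grayscale image of one word
dirty = make_dirty(clean)       # a corrupted copy to add to the training set
```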

Teaching robots how to understand human life

Sanja Fidler, an assistant professor at the University of Toronto, says she's working on "cognitive agents" that "perceive the world like humans do" and "communicate with other humans in the environment." But instead of focusing on the humanoid hardware of a robot, she's interested in building the robot's perception of our world.

In the future, if robots and humans ever coexist, robots "need to learn about objects and simple actions," Fidler said. "You want the robot to understand how people are actually feeling and what they could be thinking."

Fidler applies deep learning techniques to these "robots" using data from pop culture. She trains the software-based mind of a "robot" on famous texts like the Harry Potter book series, then introduces clips of movies and entire movie scripts based on those books. According to Fidler, an average movie has about 200,000 frames of information, while the average book has about 150,000 words. Though this kind of research is still in its early stages, combining those two sources of information helps robots learn new concepts and process the real world.
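Here's a rough sketch of how that kind of book-to-movie matching can work. This is our reconstruction for illustration, not Fidler's published code: both book passages and movie clips are boiled down to vectors of numbers by trained "encoder" networks (random stubs below), and a clip is paired with whichever passage it lands closest to.

```python
# Toy book-to-movie matching: embed text and video in one vector space,
# then pick the passage most similar to the clip. The encoders here are
# random stand-ins for the trained networks a real system would use.
import torch
import torch.nn.functional as F

def embed_text(passage):
    """Stand-in for a trained text encoder; returns a feature vector."""
    torch.manual_seed(hash(passage) % 2**31)  # repeatable stub within a run
    return torch.randn(64)

def embed_clip(clip):
    """Stand-in for a trained video encoder; returns a feature vector."""
    torch.manual_seed(hash(clip) % 2**31)
    return torch.randn(64)

passages = [
    "jars of candy beside white linen sheets",  # hypothetical book excerpts
    "the train pulled away from the station",
]
clip_vec = embed_clip("hospital_scene.mp4")     # hypothetical movie clip

# Cosine similarity: how closely each passage's vector points the same way.
scores = [float(F.cosine_similarity(clip_vec, embed_text(p), dim=0))
          for p in passages]
print(passages[scores.index(max(scores))])      # the best-matching passage
```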


To prove her case, Fidler showed a movie clip that the robot automatically matched to text from the Harry Potter series. By picking up on phrases like "candy shop" and "white linen sheets," the robot recognized a scene in which Harry wakes up in a hospital and discovers jars of treats. In other words, thanks to the wonders of deep learning, the robot understands the human world well enough to match visuals to closely related words.

More is likely to come.