AI predicts the future by watching videos
An artificial intelligence is learning how to anticipate what is going to happen in the next few seconds.
When you see a person sitting down at a table with a plate of food, it stands to reason that the next thing they will do is begin eating. For a human this is a no-brainer, but for a computer, predicting what happens next represents a major technological challenge.
In a new bid to get around this problem, researchers at the Massachusetts Institute of Technology have developed a sophisticated deep learning algorithm that can look at a still image and generate a short movie based on what it anticipates is going to happen next.
Show it an image of a shoreline, for example, and it will produce a short video of moving waves.
To learn how to make such predictions, the AI was trained using more than two million Internet videos of everything from train stations and golf courses to shopping malls and hospitals.
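To give a flavour of the idea, here is a minimal toy sketch of an "image-to-video" generator: a still image is compressed into a latent code, which a decoder expands into several future frames. This is an illustrative assumption, not the MIT team's actual architecture, and the weights below are random rather than learned, so the point is the data flow, not the predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8    # tiny frame size, for illustration only
LATENT = 16  # size of the latent scene code
T = 4        # number of future frames to generate

# Stand-in "learned" parameters (assumption: one linear encoder and decoder).
# A real system would train these on millions of videos.
W_enc = rng.standard_normal((H * W, LATENT)) * 0.1
W_dec = rng.standard_normal((LATENT, T * H * W)) * 0.1

def predict_video(still_image):
    """Encode a still image, then decode a short clip of future frames."""
    z = np.tanh(still_image.reshape(-1) @ W_enc)  # latent scene code
    frames = np.tanh(z @ W_dec)                   # T flattened frames
    return frames.reshape(T, H, W)

still = rng.standard_normal((H, W))  # stand-in for, say, a shoreline photo
clip = predict_video(still)
print(clip.shape)  # four predicted 8x8 frames
```

With untrained weights the output is just structured noise; training on large video collections is what lets such a model associate a shoreline with moving waves rather than arbitrary pixels.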
"Any robot that operates in our world needs to have some basic ability to predict the future," said Carl Vondrick, one of the researchers who worked on the system. "For example, if you're about to sit down, you don't want a robot to pull the chair out from underneath you."
While it is still early days, teaching a computer to anticipate what happens next is an important step towards building an AI that can operate effectively in the real world.
"The laws of physics and the nature of objects mean that not just anything can happen," said John Daugman at the University of Cambridge Computer Laboratory.
"The authors have demonstrated that those constraints can be learned."