DeepMind Advances Artificial Intelligence with Memory-Preserving Algorithm

DeepMind says it has addressed a long-standing limitation of its machine learning software, the networks' tendency to forget old skills as they learn new ones, in a newly published academic paper.

The Alphabet-owned artificial intelligence company published a paper this week in the academic journal Proceedings of the National Academy of Sciences that potentially paves the way for AI systems that can handle many tasks in succession.

“We have shown it is possible to train a neural network sequentially, which was previously thought to be a fundamental limitation,” James Kirkpatrick, a DeepMind researcher who was the lead author on the paper, said in an interview with Bloomberg.

Neural networks, though considered the strongest machine learning approach for many tasks, have long suffered from a problem researchers call catastrophic forgetting: as a network trains on a new task, it overwrites the weights that encoded what it learned before.

The proposed fix is an algorithm called Elastic Weight Consolidation (EWC). According to the paper, EWC identifies which of the network's weights matter most to previously learned tasks and slows learning on those weights, enabling the network to retain prior knowledge as it takes on a new task.
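DeepMind's own training code is not reproduced in the article, but the core mechanism can be sketched. The snippet below is a minimal, illustrative PyTorch-style version of an EWC-like penalty: the function names, the `lam` strength coefficient, and the squared-gradient estimate of weight importance are assumptions chosen for clarity, not DeepMind's implementation.

```python
import torch

def importance_estimates(model, data_loader, loss_fn):
    """Estimate how important each weight is to the old task,
    using the mean squared gradient of the loss over that task's
    data (a diagonal Fisher-information approximation)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
    return {n: f / len(data_loader) for n, f in fisher.items()}

def ewc_penalty(model, fisher, anchor, lam=1000.0):
    """Quadratic penalty that pulls each weight back toward the
    value it had after the old task, scaled by its importance,
    so critical weights change slowly on the new task."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - anchor[n]) ** 2).sum()
    return 0.5 * lam * penalty

# Sketch of use: after finishing the old task, snapshot the weights
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = importance_estimates(model, old_task_loader, loss_fn)
# then train on the new task with:
#   total_loss = new_task_loss + ewc_penalty(model, fisher, anchor)
```

The key design choice is that the penalty is per-weight: unimportant weights remain free to adapt to the new task, while weights the old task depends on are effectively anchored in place.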

In one test, AI software equipped with EWC learned ten different games in sequence and performed well on all of them. However, the AI could not master any single game as well as a dedicated single-task neural network, at least in the time it was given.