Facebook this summer announced a breakthrough in neural network training: its researchers were able to fully train an image-recognition AI in one hour, using 256 GPUs. Not to be outdone, a group of university researchers did it in 32 minutes with 1,600 Skylake processors. And a week later, a team in Japan did it in 15 minutes. This is why we call it an AI race.
The process of training a neural network is exactly how you’re picturing it. As long as you’re imagining it like this:
Basically, you jam as much data as you can, as quickly as you can, into a computer so that it ends up with some basic understanding of a subject.
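To make that less abstract, here is a toy sketch of what "jamming data into a computer" actually looks like under the hood: a loop that repeatedly feeds batches of labeled examples to a model and nudges its weights toward better predictions. This is a tiny synthetic classifier, nothing like ImageNet scale; the data, learning rate, and epoch count are all made up for illustration. Large-scale jobs like Facebook's run essentially this loop, parallelized across hundreds of GPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "images": 200 samples with 4 features each,
# split into two classes by a hidden rule the model must discover.
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w > 0).astype(float)

w = np.zeros(4)   # the model's weights, initially knowing nothing
lr = 0.5          # learning rate: how big each nudge is

for epoch in range(100):
    # Forward pass: turn raw scores into probabilities with a sigmoid
    p = 1 / (1 + np.exp(-(X @ w)))
    # Backward pass: gradient of the cross-entropy loss w.r.t. weights
    grad = X.T @ (p - y) / len(y)
    # Update step: this is where the "learning" happens
    w -= lr * grad

# After training, check how often the model's guesses match the labels
accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
```

The speed records in the headlines come from doing exactly this, but with millions of images and batches spread across many processors at once.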
ImageNet is a large dataset of labeled images used to teach computers to associate images with words; a network trained on it can 'look' at an image and tell us what it sees. This is useful if you want to create an AI capable of finding images that contain "blue shirts" or "dad smiling" when asked.
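Once a model can attach words to images, that search feature is just a filter over its predictions. A hypothetical sketch, with made-up filenames and labels standing in for a real classifier's output:

```python
# Imagine a trained model has already labeled each photo in a library.
# These labels are invented for illustration only.
predicted_labels = {
    "IMG_001.jpg": {"blue shirt", "person", "outdoor"},
    "IMG_002.jpg": {"dog", "grass"},
    "IMG_003.jpg": {"dad smiling", "blue shirt", "indoor"},
}

def search(query):
    """Return filenames whose predicted labels include the query."""
    return sorted(f for f, labels in predicted_labels.items()
                  if query in labels)

print(search("blue shirt"))  # matches IMG_001.jpg and IMG_003.jpg
```

The hard part, and what the speed records above are about, is training the model that produces those labels in the first place.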
The current benchmark for this sort of thing is a 50-layer neural network called ResNet-50. It takes about two weeks to train a deep learning…