Miracles and wonders: deep dreams, label machines, and my new super-super-super computer

2015 was a turning-point year for machine learning. Check it out:

Based on Andrej Karpathy’s “NeuralTalk2” code, as modified by Kyle McDonald, a Brooklyn-based artist who “works in code”. NeuralTalk2 runs on GPUs – which brings me to another breakthrough of 2015: the year my (and probably your!) laptop took a big leap in performance.

Here’s the data: according to NVIDIA, the Quadro 1000M GPU I use (a fairly old one) delivers 268.8 GFLOPS.

That’s about 134 times faster than my CPU, which manages roughly 2 gigaflops. Thanks to what NVIDIA’s doing with its CUDA library, as of this year I’ve got an incredibly powerful machine here in my office, and I didn’t have to spend a penny (so far).
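If you’re curious what CUDA can see on your own machine, here’s a quick check – I’m using PyTorch purely for convenience; it’s my choice of tool, not part of the setup described in this post:

```python
# Quick check of what CUDA sees from Python (PyTorch is my tool of choice here).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{torch.cuda.get_device_name(0)}: "
          f"{props.multi_processor_count} multiprocessors, "
          f"{props.total_memory / 1e9:.1f} GB memory")
else:
    print("No CUDA-capable GPU visible")
```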

Here’s me, with the super-super-super-computer. 🙂

By way of comparison, the Cray Y-MP – the fastest supercomputer of the late 1980s – clocked in at 333 megaflops for each of its eight processors, or about 2.7 gigaflops in total. My GPU is roughly 100 times faster. Another data point: the world’s fastest supercomputer in 1993 was the 1,024-processor Thinking Machines CM-5/1024 installed at Los Alamos. Its benchmarked speed was 59.7 gigaflops: still a fraction of my “new” NVIDIA-powered machine.
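If you want to check that arithmetic yourself, here’s the back-of-the-envelope version in Python, using only the figures quoted above:

```python
# Back-of-the-envelope FLOPS comparison, using the figures quoted in the post.
quadro_1000m_gflops = 268.8      # NVIDIA Quadro 1000M (peak)
laptop_cpu_gflops = 2.0          # my CPU, roughly
cray_ymp_gflops = 0.333 * 8      # Cray Y-MP: 333 MFLOPS x 8 processors ≈ 2.7 GFLOPS
cm5_1024_gflops = 59.7           # Thinking Machines CM-5/1024, 1993

print(f"GPU vs my CPU:    {quadro_1000m_gflops / laptop_cpu_gflops:.0f}x")  # ~134x
print(f"GPU vs Cray Y-MP: {quadro_1000m_gflops / cray_ymp_gflops:.0f}x")    # ~101x
print(f"GPU vs CM-5/1024: {quadro_1000m_gflops / cm5_1024_gflops:.1f}x")    # ~4.5x
```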

Image: the Cray-2 supercomputer (https://upload.wikimedia.org/wikipedia/commons/5/5a/Cray2.jpeg)


As if that wasn’t enough, cluster computing from companies like @AWS and @H2O is giving me fast, cheap access to machines that aren’t on my desk. I routinely run millions of rows of data across multiple CPUs.
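To give a flavor of how little code that takes, here’s a minimal H2O sketch in Python – the file path, column choices, and the gradient boosting model are placeholders for illustration, not the actual workflow behind this post:

```python
# Minimal H2O sketch: start a local cluster on all available cores, load a
# (hypothetical) multi-million-row CSV, and train a gradient boosting model.
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init(nthreads=-1)                                      # use every CPU core
frame = h2o.import_file("path/to/millions_of_rows.csv")    # placeholder path

model = H2OGradientBoostingEstimator(ntrees=100)
model.train(x=frame.columns[:-1],    # predict the last column from all the others
            y=frame.columns[-1],
            training_frame=frame)
print(model)                         # training summary and metrics
```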

So between the new algorithms (what Andrew Ng calls “Rocket Engines”), the substantial communities supporting the platforms that run them, and this compute capacity (“Rocket Fuel”), organizations willing to invest even a little in this technology are seeing big leaps in performance on important business problems.

Deep learning, for instance – whose high-level architecture loosely mirrors the human visual processing system – reached breakthrough performance levels in 2015. It even “hallucinates”:
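Those hallucinations come from a surprisingly simple trick: run gradient ascent on the input image itself so that a chosen layer’s activations get stronger, and the network exaggerates whatever patterns it already detects. Here’s a minimal sketch of that idea – it assumes a recent PyTorch/torchvision, and the layer index, step size, and file names are my own illustrative choices, not anything from the original code:

```python
# A minimal "deep dream" loop: gradient ascent on the input image so a chosen
# convolutional layer responds more strongly, amplifying whatever it already "sees".
import torch
from torchvision import models, transforms
from PIL import Image

net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer_index = 20                        # an arbitrary mid-level conv layer (my choice)

img = Image.open("photo.jpg")           # placeholder input image
x = transforms.ToTensor()(img).unsqueeze(0).requires_grad_(True)

for _ in range(20):                     # a handful of ascent steps
    act = x
    for i, layer in enumerate(net):
        act = layer(act)
        if i == layer_index:
            break
    loss = act.norm()                   # "how strongly does this layer respond?"
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)   # normalized ascent step
        x.grad.zero_()

transforms.ToPILImage()(x.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```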

This was also the year that computer vision started consistently beating human performance, specifically on the ImageNet dataset:

The top-5 error rate in the ImageNet Large Scale Visual Recognition Challenge has been falling rapidly since deep neural networks were introduced in 2012. Source: NVIDIA.
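In case “top-5 error” is unfamiliar: a prediction counts as correct if the true label appears anywhere among the model’s five highest-scoring guesses. A toy calculation with made-up scores shows how the metric works:

```python
# Toy top-5 error calculation: 4 "images", 10 classes, made-up scores.
import numpy as np

scores = np.random.rand(4, 10)          # fake model outputs, one row per image
true_labels = np.array([3, 7, 0, 9])    # fake ground-truth class indices

top5 = np.argsort(scores, axis=1)[:, -5:]   # the 5 highest-scoring classes per image
correct = [label in row for label, row in zip(true_labels, top5)]
top5_error = 1 - sum(correct) / len(correct)
print(f"top-5 error: {top5_error:.2f}")
```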

This Google Trends report captures what’s happening, especially in the last few months, during which “machine learning” overtook “artificial intelligence” by this measure of interest. Meanwhile, sub-areas like “deep learning” and “IBM Watson” grew steadily throughout the year.

Extrapolating from the graphs above, 2016 is going to be interesting. For me, the goal is to keep offering state-of-the-art solutions to the hardest problems. My challenge to you: learn enough about machine learning to develop some ideas about how this technology can help you. This is a good time to get started.
