I recently did a 20 minute talk on Deep Learning at a developer group in Johannesburg. Since most of my slides were visual in nature I decided to turn the talk into a blog post rather than sharing the slides.
Hopefully I can use the blog post to show you that, if you ignore all the hype, you can do some truly amazing things with Deep Learning.
To start, I’m going to try and tickle your fancy by showing you a few examples of what Deep Learning has achieved.
Lee Sedol was beaten by AlphaGo at Go, the ancient Chinese board game. For humanity it was painful to watch; for AI it was a huge leap forward.
Deep Learning is used in self-driving cars. It is estimated that Tesla vehicles have driven over 2 billion kilometres in Autopilot mode.
Deep Learning used for image annotation.
Real time translation — crossing the language barrier.
What is Deep Learning
According to François Chollet:
“deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data.”
In other words, if you use the diagram below as a guide, deep learning is the ability to map a class of samples (space X) to an output category (space Y) by means of a geometric transformation.
“Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.”
Deep Learning has the ability to disrupt, and to disrupt fast. Even though it is nowhere near the level of human intelligence, there are tools and algorithms mature enough to add value in almost any industry.
Deep Learning as a Neural Network
We can also think of Deep Learning as a Neural Network with more than one hidden layer; such a network is said to be deep.
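To make "more than one hidden layer" concrete, here is a minimal sketch of a forward pass through such a network in numpy. The layer sizes and random weights are illustrative assumptions, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common activation function: zero out negative values.
    return np.maximum(0.0, x)

# 4 input features -> hidden layer of 8 -> hidden layer of 6 -> 1 output.
# Two hidden layers, so this network counts as "deep".
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 6)), np.zeros(6)
W3, b3 = rng.standard_normal((6, 1)), np.zeros(1)

x = rng.standard_normal((1, 4))   # one input sample
h1 = relu(x @ W1 + b1)            # first hidden layer
h2 = relu(h1 @ W2 + b2)           # second hidden layer
y = h2 @ W3 + b3                  # output
print(y.shape)
```

In a real framework the weights would be learned rather than random, but the shape of the computation is the same: each layer is a geometric transformation of the previous one.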
More intuitive explanation
Let’s imagine Deep Learning as a black box. We are standing at the top of a mountain, where our error is high, trying to get to the bottom, where the error is lowest and the black box performs well. Going down the mountain is the learning part; the idea is to descend as quickly and smoothly as possible.
We do this by repeatedly passing training samples through the black box and calculating the output error against the desired output. On every iteration we work our way back through the network, calculate the slope at each neuron, and adjust the weights in an optimised way to reduce the error (go down the mountain quickly and smoothly). If we do this continuously, we are learning.
This technique is called backpropagation, and it has had a big impact on the success of Deep Learning.
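The descent down the mountain can be sketched in a few lines. This toy example fits a single linear neuron to a made-up target mapping by repeatedly computing the error, taking the slope (gradient), and stepping downhill; the data, learning rate, and step count are illustrative assumptions, not from the talk.

```python
import numpy as np

# Target mapping we want the black box to learn: y = 2x + 1.
X = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * X + 1.0

w, b = 0.0, 0.0   # start at the top of the mountain: high error
lr = 0.05         # step size for each move downhill

for step in range(2000):
    pred = w * X + b
    err = pred - y                     # output error vs desired output
    grad_w = 2.0 * np.mean(err * X)    # slope of the mean squared error w.r.t. w
    grad_b = 2.0 * np.mean(err)        # slope w.r.t. b
    w -= lr * grad_w                   # adjust weights to reduce the error
    b -= lr * grad_b

print(round(w, 2), round(b, 2))        # close to 2.0 and 1.0
```

With many layers, backpropagation applies this same idea by propagating the slopes backwards through the network, layer by layer, via the chain rule.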
The era of MOOCs
One factor that has opened up Deep Learning to everyone is the era of MOOCs.
Through MOOCs you can learn the latest cutting-edge technology at university level, including Deep Learning.
Collectively we have been resisting evil corporations controlling software for a long time through the open source movement, and MOOCs are the result of a collective resistance to universities controlling education.
PS. I’m using the word evil in the nicest possible way.
The Stanford Experiment
An experiment run by Stanford University played a significant role in launching the MOOC era.
Stanford is picky: fewer than 5% of applicants are accepted. It is only for the elite.
In 2012 Stanford ran an experiment: they decided to teach three computer science courses online, open to all, for free. Video lectures were posted online, forums provided a platform for discussion and collaboration, and tests were automated.
160,000 students enrolled, 75% of them outside the US, from 190 countries. 100 volunteers translated the class material into 44 languages.
That is 160,000 students getting a Stanford education with nothing more than an internet connection and a thirst for knowledge.
Where things get interesting is the class grading after the course. The first Stanford student in the class came in at number 161, beaten by 160 students mostly from poor, isolated developing countries; students who would never before have been able to attend a prestigious university like Stanford.
Largely thanks to this Stanford experiment, we now have quality MOOCs across a wide variety of science and tech, including Deep Learning.
The contributions made by top experts
A second factor contributing to the democratisation of Deep Learning is the contribution the top experts are making.
Fei-Fei Li is the creator of ImageNet and the ImageNet competition, which have contributed to making Deep Learning popular.
She is active in the community, and today she is the Chief Scientist of AI/ML at Google Cloud.
Here Andrej Karpathy is seen giving a lecture on Convolutional Neural Networks in 2016. He is a regular blogger and often shares insights on Twitter.
He has recently been appointed as the Director of Autopilot Vision at Tesla.
Yann LeCun using the Quora platform for a Deep Learning discussion. Quora is a great place to ask and answer technical Deep Learning questions, though the discussions often turn philosophical.
Yann LeCun is the Director of AI Research at Facebook and Professor at NYU.
Andrew Ng giving a Coursera ML lecture. He is Chief Scientist at Baidu while still being involved with Stanford and Coursera.