
Open Source Deep Learning Curriculum

This open-source Deep Learning curriculum is meant to be a starting point for everyone interested in seriously studying the field. Plugging into the stream of research papers, tutorials and books about deep learning midstream, it is easy to feel overwhelmed, with no clear idea of where to start. Recognizing that all knowledge is hierarchical, with advanced concepts building on more fundamental ones, I strove to put together a list of resources that forms a logical progression from fundamental to advanced.

Few universities offer an education that is on par with what you can find online these days. The people pioneering the field in industry and academia share their knowledge so openly and competently that the best curriculum is an open-source one.

Let's start with fundamental mathematical and machine learning principles:

Linear Algebra: This course is an absolute classic that will give you a sound mathematical foundation in Linear Algebra, while also covering important engineering applications such as the Fast Fourier Transform and Eigenfaces, and topics relevant to Machine Learning such as Singular Value Decomposition. As a bonus, the instructor Gilbert Strang makes each lecture delightful to watch.

As programmers, we are in the fortunate position to take a computational approach to learning mathematics and, particularly in the realm of machine learning, to use the tools of the trade to teach ourselves new concepts. Coding The Matrix takes exactly this approach: the course (originally published on Coursera) lets you explore Linear Algebra using Python while providing an abundance of examples of how Linear Algebra is applied in Computer Science.
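
To give a taste of this computational approach, here is a minimal sketch (my own illustration using NumPy, not code from the course) of experimenting with the Singular Value Decomposition:

```python
import numpy as np

# Build a small rank-deficient matrix to decompose.
A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [4.0, 4.0, 2.0]])  # row 3 = row 1 + row 2, so rank(A) = 2

U, s, Vt = np.linalg.svd(A)
print("singular values:", np.round(s, 4))  # the last one is ~0

# Reconstruct A from its top-k singular triplets (a rank-k approximation).
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print("max reconstruction error:", np.abs(A - A_k).max())
```

A few lines like these let you verify, rather than just read, that a rank-2 matrix is exactly recovered by its top two singular triplets.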

Statistics: PennState has some of the best material on statistics you can find, from introductory level all the way up to graduate-level courses. There are no video lectures, and you can conveniently read through the entire introductory course in 2-3 hours as a refresher.

Again, the second course I am going to recommend requires you to write code: specifically, programs that reason under uncertainty and make predictions. Along the way, you will become comfortable with Bayesian thought, probabilistic graphical models and learning probabilistic models.
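
To give a flavour of what such programs look like, here is a minimal sketch (my own toy example, not taken from the course) of Bayesian updating for a coin-flip model:

```python
import numpy as np

# Hypotheses: a grid of possible biases for a coin, with a uniform prior.
thetas = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(thetas) / len(thetas)

# Observed data: 7 heads out of 10 flips.
heads, flips = 7, 10

# Likelihood of the data under each hypothesis (independent Bernoulli trials).
likelihood = thetas**heads * (1 - thetas)**(flips - heads)

# Bayes' rule: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

# Posterior mean is close to (7+1)/(10+2) ~ 0.67, the Beta(8, 4) mean.
print("posterior mean bias:", (thetas * posterior).sum())
```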

Optimization: Being familiar with optimization techniques is absolutely essential for understanding machine learning algorithms. In his tutorial 'Optimization Algorithms in Machine Learning', Stephen Wright covers all the bases and pays particular attention to gradient methods and online optimization.

Neural networks inhabit a unique niche among machine learning algorithms since their objectives are not convex; that is, they are not guaranteed to have a unique global minimum. Nonetheless, you should understand the benefits of convex optimization, not least because it helps you become sensitive to the particular problems of non-convexity that you often face in deep learning.
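
As a concrete anchor for these ideas, here is a minimal sketch (my own illustration, not code from the tutorial) of gradient descent on a convex quadratic, where the unique global minimum makes convergence easy to verify:

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T A x - b^T x for a symmetric positive-definite A.
# The unique minimizer solves A x = b, so we can check the answer exactly.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

x = np.zeros(2)
lr = 0.1  # fixed step size; safe here since lr < 2 / (largest eigenvalue of A)
for _ in range(200):
    grad = A @ x - b   # gradient of the quadratic objective
    x -= lr * grad     # the basic gradient descent step

print("gradient descent:", x)
print("exact solution:  ", np.linalg.solve(A, b))
```

On a non-convex loss, the same update rule applies, but the point it converges to depends on the starting position, which is exactly the sensitivity the convexity lectures help you develop.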

Machine Learning Fundamentals: Caltech's Learning from Data gives a great introduction to machine learning theory. The lectures are very accessible and will give you a solid understanding of the bias-variance trade-off, the concept of a hypothesis space, the idea behind the VC dimension, and regularization. I tend to recommend Andrew Ng's course as well, but the sheer number of algorithms introduced can overwhelm you while diverting attention from more foundational concepts. Learning from Data is available through Caltech's online program as well as the edX platform.

In case you do not want to slog through a semester's worth of math courses, or you are already familiar with the fundamentals and just need a brief refresher with a machine learning bent, MIT's Mathematics of Machine Learning course is for you. It covers statistical learning, optimization and online learning.

Machine learning makes extensive use of information theory to derive the math behind its algorithmic methods; cross entropy, for instance, is used as a loss function. Information theory also informs the concepts of model complexity and overfitting: in machine learning, we want to use the model that 'minimizes description length', i.e. the model that accounts for the data in the most succinct way possible, which relates directly to the idea of data compression. David MacKay's classic book 'Information Theory, Inference and Learning Algorithms' will not only give you all you need to know about information theory but also present a different approach to neural networks (chapter 5). It is available as a PDF on his website.
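
As a small illustration of the loss-function role (my own sketch, assuming a softmax output and one-hot labels), cross-entropy measures the expected code length when encoding samples from the true distribution with a code optimized for the model's predicted distribution:

```python
import numpy as np

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    z = logits - logits.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p_true, p_pred):
    """H(p, q) = -sum_i p_i * log(q_i): the expected code length (in nats)
    for data from p_true encoded with a code optimized for p_pred."""
    return -(p_true * np.log(p_pred + 1e-12)).sum()

logits = np.array([2.0, 0.5, -1.0])  # raw model outputs over 3 classes
p_pred = softmax(logits)
p_true = np.array([1.0, 0.0, 0.0])   # one-hot label: class 0 is correct

print("predicted distribution:", np.round(p_pred, 3))
print("cross-entropy loss:", cross_entropy(p_true, p_pred))
```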

Deep Learning: There is a plethora of courses on Deep Learning, and with such a fast-moving field, not only are new ones coming out in quick succession, but older ones are starting to get a little dated. I will try to keep this list up to date and add to it as new materials become available. Geoffrey Hinton's iconic course on neural networks has recently been relaunched on Coursera, although the content has not been updated. However, Hinton is a talented lecturer, and this is still a good place to start. He also covers a lot of examples and practical applications such as speech and object recognition, image segmentation and language modelling.

Udacity partnered with Google to put together a hands-on course on Deep Learning that has you build complete learning systems in TensorFlow.

While you will get a good overview of Deep Learning topics and concepts, this course is fairly shallow and is best supplemented with the comprehensive tour de force that is the Deep Learning Book.

For a more thorough stand-alone Deep Learning course, Nando de Freitas has published his UBC lectures on YouTube.

Berkeley's Stat212b: Topics Course on Deep Learning delves a little deeper into selected topics of Deep Learning, such as convolutional architectures, invariance learning, unsupervised learning and non-convex optimization.

Stanford's CS231n is hands-down the best course I have seen on Deep Learning. Fei-Fei Li, Andrej Karpathy and Justin Johnson do a fantastic job of presenting the details of the deep learning architectures with a focus on learning end-to-end models. This course will get you up to speed on the state of the art of image classification and convolutional neural networks.

The Berkeley course Designing, Visualizing and Understanding Deep Neural Networks draws heavily on CS231n, but puts more emphasis on exploring the training and use of deep neural networks through visualization tools, which helps develop a better intuition about them in general.

Another favorite of mine is Richard Socher's course CS224d: Deep Learning for Natural Language Processing, which, similar to CS231n, focuses on developing, training and debugging fully fledged deep learning architectures. Highlights include his lecture on word embeddings and the student project reports.

The course 'Differentiable Inference and Generative Models' from the University of Toronto tours recent innovations in inference methods such as recognition networks, black-box stochastic variational inference, and adversarial autoencoders. Some of the coolest new advances in Deep Learning have come out of generative adversarial networks, and this course is the best place to learn about them.

Papers: Finally, reading papers is the best way to reconstruct the history of a field, and deep learning, in particular, has a rich and varied history of research into neural networks, model-free methods and cognitive science that informs and illuminates the current burgeoning of the field. The GitHub user songrotek put together an excellent list that can be supplemented by the papers discussed as part of the University of Heidelberg's course on Deep Learning.

Further suggestions are welcome, simply send them to deeplearningweekly[at]gmail.com.