Open Source Deep Learning Curriculum | Deep Learning Weekly

Bringing you everything new and exciting in the world of deep learning, from academia to the grubby depths of industry, every week, right to your inbox. Free.

Open Source Deep Learning Curriculum

This open-source Deep Learning curriculum is meant to be a starting point for everyone interested in seriously studying the field. Plugging into the stream of research papers, tutorials, and books about deep learning mid-stream, it is easy to feel overwhelmed and without a clear idea of where to start. Recognizing that all knowledge is hierarchical, with advanced concepts building on more fundamental ones, I strove to put together a list of resources that forms a logical progression from fundamental to advanced.

Few universities offer an education that is on par with what you can find online these days. The people pioneering the field from industry and academia so openly and competently share their knowledge that the best curriculum is an open source one.

Let's start with fundamental mathematical and machine learning principles:

Linear Algebra: This course is an absolute classic that will give you a sound mathematical foundation in Linear Algebra, while also covering important engineering applications such as the Fast Fourier Transform and Eigenfaces, and topics relevant to Machine Learning such as Singular Value Decomposition. As a bonus, the instructor, Gilbert Strang, makes each lecture delightful to watch.

Linear Algebra

This is a basic subject on matrix theory and linear algebra. Emphasis is given to topics that will be useful in other disciplines, including systems of equations, vector spaces, determinants, eigenvalues, similarity, and positive definite matrices.

As programmers, we are in the fortunate position to take a computational approach to learning mathematics and, particularly in the realm of machine learning, to use the tools of the trade to teach ourselves new concepts. Coding The Matrix takes exactly this approach: the course (originally published on Coursera) lets you explore Linear Algebra using Python while providing an abundance of examples of how Linear Algebra is applied in Computer Science.

Coding The Matrix

The course is driven by applications from areas chosen from among: computer vision, cryptography, game theory, graphics, information retrieval and web search, and machine learning.
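In the spirit of that computational approach, here is a minimal sketch (my own toy example with made-up matrix values, using NumPy rather than the course's own materials) of the Singular Value Decomposition mentioned earlier:

```python
import numpy as np

# Illustrative only: compute the SVD of a small matrix and use it
# for a rank-1 approximation, the core idea behind techniques like
# Eigenfaces and PCA.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The factors reconstruct A exactly.
A_reconstructed = U @ np.diag(s) @ Vt

# Keeping only the largest singular value gives the best rank-1
# approximation of A in the least-squares sense.
A_rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])
```

Playing with decompositions like this on small matrices is exactly the kind of experimentation the course encourages.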

Statistics: PennState has some of the best material on statistics you can find, from introductory level all the way up to graduate-level courses. There are no video lectures, and you can conveniently read through the entire introductory course in 2-3 hours as a refresher.

Probability Theory and Mathematical Statistics

As the title of Stat 414 suggests, we will be studying the theory of probability, probability, and more probability throughout the course.

Again, the second course I am going to recommend requires you to write code: specifically, programs that reason with uncertainty and make predictions. Along the way, you will become comfortable with Bayesian thinking, probabilistic graphical models, and learning probabilistic models.

Computational Probability and Inference

Learn fundamentals of probabilistic analysis and inference. Build computer programs that reason with uncertainty and make predictions. Tackle machine learning problems, from recommending movies to spam filtering to robot navigation.
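To give a flavour of what a "program that reasons with uncertainty" looks like, here is a toy sketch (not from the course; the coin-flip data and grid are made up) that applies Bayes' rule to infer a coin's bias:

```python
import numpy as np

# Grid-based Bayesian inference for a coin's bias (toy example).
theta = np.linspace(0.0, 1.0, 101)          # candidate biases
prior = np.ones_like(theta) / len(theta)    # uniform prior

heads, tails = 7, 3                          # hypothetical observed flips
likelihood = theta**heads * (1 - theta)**tails

# Bayes' rule: posterior is proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()                 # normalize

# The posterior mean is one reasonable point estimate of the bias.
estimate = float((theta * posterior).sum())
```

The same prior-times-likelihood pattern, scaled up, underlies the probabilistic graphical models the course builds toward.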

Optimization: Being familiar with optimization techniques is absolutely essential for understanding machine learning algorithms. In his tutorial 'Optimization Algorithms in Machine Learning' Stephen Wright covers all the bases and pays particular attention to gradient methods and online optimization.

Optimization Algorithms in Machine Learning

Optimization provides a valuable framework for thinking about, formulating, and solving many problems in machine learning. Since specialized techniques for the quadratic programming problem arising in support vector classification were developed...
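To make the idea of gradient methods concrete, here is a minimal sketch (my own toy example, not from the tutorial) of gradient descent on a one-dimensional quadratic:

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
# The same iterate-along-the-negative-gradient loop, generalized to
# millions of parameters, is how neural networks are trained.
def grad(x):
    return 2.0 * (x - 3.0)   # derivative of (x - 3)^2

x = 0.0                       # arbitrary starting point
lr = 0.1                      # learning rate, chosen by hand
for _ in range(100):
    x -= lr * grad(x)
```

Note that on this convex function any small enough learning rate converges to the unique minimum; the non-convex losses of deep learning offer no such guarantee.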

Neural networks inhabit a unique niche among machine learning algorithms since they are not convex, that is, they are not guaranteed to have a unique global minimum. Nonetheless, you should understand the benefits of convex optimization, not least because it helps you become sensitive to the particular problems of non-convexity that you are often faced with in deep learning.

Stanford Engineering Everywhere | EE364A - Convex Optimization I

Concentrates on recognizing and solving convex optimization problems that arise in engineering. Convex sets, functions, and optimization problems. Basics of convex analysis. Least-squares, linear and quadratic programs,...

Machine Learning Fundamentals: Caltech's Learning from Data gives a great introduction to machine learning theory. The lectures are very accessible and will give you a solid understanding of the bias-variance trade-off, the concept of a hypothesis space, the idea behind the VC dimension, and regularization. I tend to recommend Andrew Ng's course as well, but the sheer number of algorithms it introduces can overwhelm you while diverting attention from more foundational concepts. This course is available through Caltech's online program as well as the edX platform.

Learning From Data (Introductory Machine Learning)

Introductory Machine Learning course covering theory, algorithms and applications. Our focus is on real understanding, not just "knowing."

In case you do not want to slog through a semester's worth of math courses, or you are already familiar with the fundamentals and just need a brief refresher with a machine learning bent, MIT's Mathematics of Machine Learning course is for you. It covers statistical learning, optimization, and online learning.

Mathematics of Machine Learning

Broadly speaking, Machine Learning refers to the automated identification of patterns in data. As such it has been a fertile ground for new statistical and algorithmic developments. The purpose of this course is to provide a mathematically...

Machine learning makes extensive use of information theory to derive the math behind its algorithmic methods; cross entropy, for instance, is used as a loss function. Information theory also informs the concepts of model complexity and overfitting: in machine learning, we want to use the model that 'minimizes description length', i.e. a model that accounts for the data in the most succinct way possible, which relates directly to the idea of data compression. David MacKay's classic book 'Information Theory, Inference and Learning Algorithms' will not only give you all you need to know about information theory but also present a different approach to neural networks (Part V). It is available as a PDF on his website.

David MacKay: Information Theory, Inference, and Learning Algorithms: Home

An instant classic, covering everything from Shannon's fundamental theorems to the postmodern theory of LDPC codes. You'll want two copies of this astonishing book, one for the office and one for the fireside at home.
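As a concrete example of cross entropy as a loss function: the short sketch below (with predicted probabilities invented for illustration) shows that a confidently wrong prediction is penalized far more heavily than a confident correct one.

```python
import math

# Cross-entropy between a true label distribution p and a model's
# predicted probabilities q: H(p, q) = -sum_i p_i * log(q_i).
def cross_entropy(p_true, q_pred):
    return -sum(p * math.log(q) for p, q in zip(p_true, q_pred) if p > 0)

# One-hot true label: the correct class is class 1 of 3.
p = [0.0, 1.0, 0.0]
good = cross_entropy(p, [0.1, 0.8, 0.1])   # confident and correct
bad = cross_entropy(p, [0.6, 0.2, 0.2])    # confident and wrong
```

For a one-hot label the loss reduces to the negative log probability the model assigns to the true class, which is why it pairs so naturally with softmax outputs.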

Deep Learning: There is a plethora of courses on Deep Learning, and in such a fast-moving field not only are new ones coming out in quick succession, but older ones are starting to get a little dated. I will try to keep this list up to date and add to it as new materials become available. Geoffrey Hinton's iconic course on neural networks has recently been relaunched on Coursera, although the content has not been updated. However, Hinton is a talented lecturer, and this is still a good place to start. He also covers a lot of examples and practical applications such as speech and object recognition, image segmentation, and language modelling.

Neural Networks for Machine Learning - University of Toronto | Coursera

Neural Networks for Machine Learning from University of Toronto. Learn about artificial neural networks and how they're being used for machine ...

Udacity partnered with Google to put together a hands-on course on Deep Learning that has you build complete learning systems in TensorFlow.

Deep Learning

While you will get a good overview of Deep Learning topics and concepts, this course is fairly shallow and is best supplemented with the comprehensive tour de force that is the Deep Learning Book.

Deep Learning

For a more thorough stand-alone Deep Learning course, Nando de Freitas has published his UBC lectures on YouTube.

Nando de Freitas

I am a machine learning professor at UBC. I am making my lectures available to the world with the hope that this will give more folks out there the opportuni...

Berkeley's Stat212b: Topics Course on Deep Learning delves a little deeper into selected topics of Deep Learning, such as convolutional architectures, invariance learning, unsupervised learning and non-convex optimization.

Stat212b: Topics Course on Deep Learning by joanbruna

This topics course aims to present the mathematical, statistical and computational challenges of building stable representations for high-dimensional data, such as images, text and audio.

Stanford's CS231n is hands-down the best course I have seen on Deep Learning. Fei-Fei Li, Andrej Karpathy and Justin Johnson do a fantastic job of presenting the details of the deep learning architectures with a focus on learning end-to-end models. This course will get you up to speed on the state of the art of image classification and convolutional neural networks.

Stanford University CS231n

Convolutional Neural Networks for Visual Recognition
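To demystify what a convolutional layer actually computes, here is a naive sketch of a "valid" 2D convolution (strictly speaking cross-correlation, as in most deep learning frameworks; the toy image and filter values are made up):

```python
import numpy as np

# Slide one filter over one image channel, taking a weighted sum
# at every position. Real frameworks vectorize this heavily.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector on a toy image: dark left half, bright right half.
image = np.concatenate([np.zeros((4, 2)), np.ones((4, 2))], axis=1)
kernel = np.array([[-1.0, 1.0]])
edges = conv2d(image, kernel)
```

The filter responds only where the intensity jumps, which is the intuition behind the learned feature detectors in the early layers of the networks CS231n studies.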

The Berkeley course Designing, Visualizing and Understanding Deep Neural Networks draws heavily on CS231n, but puts more emphasis on exploring the training and use of deep neural networks through visualization tools, which helps develop a better intuition about them in general.

Designing, Visualizing and Understanding Deep Neural Networks

Deep Networks have revolutionized computer vision, speech recognition and language translation. They have growing impact in many areas of science and engineering. They also do not follow a closed set of theoretical principles.

Another favorite of mine is Richard Socher's course CS224d: Deep Learning for Natural Language Processing, which, similar to CS231n, puts a focus on developing, training, and debugging fully fledged deep learning architectures. Highlights include his lecture on word embeddings and the student project reports.

Stanford University CS224d

Deep Learning for Natural Language Processing
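Word embeddings represent words as vectors whose closeness, usually measured by cosine similarity, reflects relatedness. A toy sketch (the vectors here are invented for illustration, not learned embeddings):

```python
import numpy as np

# Cosine similarity: the angle between two vectors, ignoring magnitude.
def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-dimensional "embeddings"; real ones have hundreds
# of dimensions and are learned from text corpora.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.8, 0.9, 0.1]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

similar = cosine(vectors["king"], vectors["queen"])
unrelated = cosine(vectors["king"], vectors["apple"])
```

Methods like word2vec and GloVe, covered in the course, learn vectors with exactly this geometric structure from raw text.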

The course 'Differentiable Inference and Generative Models' from the University of Toronto tours recent innovations in inference methods such as recognition networks, black-box stochastic variational inference, and adversarial autoencoders. Some of the coolest new advances in Deep Learning have come out of generative adversarial networks, and this course is the best place to learn about them.

CSC 2541 Fall 2016

Differentiable Inference and Generative Models

Papers: Finally, reading papers is the best way to reconstruct the history of a field, and deep learning, in particular, has a rich and varied history of research into neural networks, model-free methods, and cognitive science that informs and illuminates the current burgeoning of the field. The GitHub user songrotek put together an excellent list that can be supplemented by the papers discussed as part of the University of Heidelberg's course on Deep Learning:

Institut für Computerlinguistik

The course deals with methods of "Deep Learning", also known as "Representation Learning" or "Feature Learning". Instead of manually creating feature representations of data, the central idea of this approach is to automatically learn abstract and hierarchical representations.


Deep-Learning-Papers-Reading-Roadmap - Deep Learning papers reading roadmap for anyone who is eager to learn this amazing tech!