| June 17 · Issue #44 |
Welcome to another week in deep learning,
As always, if you enjoy receiving this newsletter, you can help us out by sharing it with friends, family, and colleagues. :)
| An Exclusive Look at How AI and Machine Learning Work at Apple |
Three years earlier, Apple had been the first major tech company to integrate a smart assistant into its operating system. Siri was the company’s adaptation of a standalone app it had purchased…
| SoftBank Agrees to Buy Robot Maker Boston Dynamics From Google Parent Alphabet |
Alphabet has been trying to sell Boston Dynamics for quite some time because it doesn’t want to wait ten years for a marketable product. SoftBank, on the other hand, has deep pockets and is playing the long game.
| Discussion: An Adversarial Review of “Adversarial Generation of Natural Language” |
This post by Yoav Goldberg kicked off an interesting discussion last week about the merits and problems arising from researchers’ ability to circumvent the peer-review process on arxiv.org. One argument posits that the rapid publishing cycle incentivizes overselling of results and flag-planting in a particular area of research. The counter-argument, advanced by Yann LeCun, points out that fast arxiv.org posting is more efficient, resembling the ‘bazaar’ vs. ‘cathedral’ contrast in software engineering: the traditional publishing route corresponds to the ‘cathedral’, while the arxiv.org route adheres to the release-early, release-often credo of the Linux model.
| Why AI Works – Artificial Understanding |
Monica Anderson lays out the epistemological underpinnings of classical AI vs. deep learning. The important distinction is that the former is ‘model based’ while the latter is ‘model free’. Before the advent of deep learning, programmers had to explicitly model the problem and express this logic-based model in a computer program; a neural network, on the other hand, ‘understands’ the problem and builds an intuition-based model from the training data.
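As a toy illustration of the distinction (hypothetical, not from the article): a hand-written rule versus the same decision boundary induced purely from labeled data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# 'Model based': the programmer encodes the decision logic explicitly.
def rule_based(x, y):
    return 1 if y > x else 0

# 'Model free': the same boundary is induced from labeled examples alone.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))
labels = (X[:, 1] > X[:, 0]).astype(int)

clf = LogisticRegression().fit(X, labels)
print(rule_based(0.2, 0.5), clf.predict([[0.2, 0.5]])[0])  # both print 1
```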
| Google Research: MobileNets: Open-Source Models for Efficient On-Device Vision |
Google releases MobileNets, a family of mobile-first computer vision models for TensorFlow, designed to effectively maximize accuracy while being mindful of the restricted resources for an on-device or embedded application.
| Cheatsheets AI: Essential Cheat Sheets for deep learning and machine learning researchers |
| Counting Objects with Faster R-CNN |
An insightful post describing different approaches, common problems and challenges, and the latest solutions in the field of counting objects with neural networks. As a proof of concept, an existing Faster R-CNN model is used to count objects on the street in video examples.
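For a sense of the mechanics, here is a minimal sketch of detection-based counting, assuming torchvision’s pretrained Faster R-CNN rather than the post’s own model; the label, image path, and score threshold are illustrative.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained detector standing in for the post's model (an assumption).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

CAR_LABEL = 3  # 'car' in the COCO label map used by torchvision

def count_cars(image_path, score_threshold=0.7):
    """Count car detections above a confidence threshold in one frame."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]  # dict of 'boxes', 'labels', 'scores'
    keep = (detections["scores"] > score_threshold) & \
           (detections["labels"] == CAR_LABEL)
    return int(keep.sum())

print(count_cars("street_frame.jpg"))  # hypothetical frame from a video
```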
| RE•WORK Machine Intelligence Summit Amsterdam |
The Machine Intelligence Summit is where machine learning meets artificial intelligence. Join industry experts, leading academics, and innovative startups to explore how intelligent machines make sense of data in the real world and their impacts on our lives. Receive 20% off with the discount code DLWEEKLY.
| RE•WORK Machine Intelligence in Autonomous Vehicles Summit Amsterdam |
The global autonomous vehicles market is expected to reach $65.3 billion by 2027. The Machine Intelligence in Autonomous Vehicles Summit explores how recent technical advancements in machine learning and deep learning create smarter, safer and more efficient transport. Receive 20% off with the discount code DLWEEKLY.
| ml4a/ml4a-ofx · GitHub |
ml4a-ofx - A collection of openFrameworks apps for working with machine learning.
| Pytorch implementation of "A simple neural network module for relational reasoning" |
PyTorch implementation of “A simple neural network module for relational reasoning” (Relational Networks).
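The module itself is compact: it applies a relation function g_θ to every pair of objects, sums the results, and feeds the sum to f_φ. Below is a minimal sketch of that computation, RN(O) = f_φ(Σ_{i,j} g_θ(o_i, o_j)); question conditioning is omitted and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class RelationalNetwork(nn.Module):
    """Minimal Relation Network: RN(O) = f_phi( sum_{i,j} g_theta(o_i, o_j) )."""
    def __init__(self, object_dim, hidden_dim, output_dim):
        super().__init__()
        self.g = nn.Sequential(  # relation function over object pairs
            nn.Linear(2 * object_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.f = nn.Sequential(  # maps the summed relations to an output
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, objects):  # objects: (batch, n, object_dim)
        n = objects.size(1)
        o_i = objects.unsqueeze(2).expand(-1, -1, n, -1)  # (batch, n, n, d)
        o_j = objects.unsqueeze(1).expand(-1, n, -1, -1)
        pairs = torch.cat([o_i, o_j], dim=-1)             # all ordered pairs
        relations = self.g(pairs).sum(dim=(1, 2))         # sum over pairs
        return self.f(relations)

rn = RelationalNetwork(object_dim=32, hidden_dim=128, output_dim=10)
out = rn(torch.randn(4, 8, 32))  # 4 scenes of 8 objects each -> (4, 10)
```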
| Code for MobileNets: Open-Source Models for Efficient On-Device Vision |
MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases.
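The released models are TensorFlow checkpoints; as a rough illustration of the building block behind them, here is a hedged PyTorch sketch of a depthwise separable convolution with the width multiplier MobileNets use to trade accuracy for compute (channel counts are illustrative).

```python
import torch
import torch.nn as nn

def depthwise_separable_conv(in_ch, out_ch, stride=1, width_mult=1.0):
    """MobileNet-style block: a depthwise 3x3 conv followed by a pointwise
    1x1 conv; width_mult thins both layers to fit a resource budget."""
    in_ch, out_ch = int(in_ch * width_mult), int(out_ch * width_mult)
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                  groups=in_ch, bias=False),      # depthwise: one filter per channel
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),  # pointwise: mixes channels
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

block = depthwise_separable_conv(32, 64, width_mult=0.5)
y = block(torch.randn(1, 16, 112, 112))  # 32 * 0.5 = 16 input channels
```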
| Sobolev Training for Neural Networks |
Sobolev Training for neural networks exploits target derivatives: networks are optimized to approximate not only the target function’s outputs but also its derivatives, thereby encoding additional information about the target function in the network’s parameters.
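A minimal sketch of the idea, assuming a toy teacher function sin(x) whose derivative cos(x) is known in closed form (the paper covers richer settings, such as distillation from a teacher network):

```python
import torch

# Small regression network trained to match a teacher's outputs and derivatives.
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(1000):
    x = (torch.rand(128, 1) * 6 - 3).requires_grad_(True)
    y_hat = net(x)
    # Derivative of the network's output w.r.t. its input, kept in the graph
    # so the derivative-matching term can itself be backpropagated.
    dy_hat = torch.autograd.grad(y_hat.sum(), x, create_graph=True)[0]
    loss = ((y_hat - torch.sin(x)) ** 2).mean() \
         + ((dy_hat - torch.cos(x)) ** 2).mean()  # d/dx sin(x) = cos(x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```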
| Variational Approaches for Auto-Encoding Generative Adversarial Networks |
A new paper by DeepMind: ‘Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder.’
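As a rough sketch of how such a hybrid objective fits together (enc, dec, and disc below are hypothetical networks, not DeepMind’s architecture, and the loss forms are standard choices rather than the paper’s variational formulation):

```python
import torch
import torch.nn.functional as F

def autoencoding_gan_losses(x, enc, dec, disc):
    """Combine a GAN discriminator term with an auto-encoder
    reconstruction term, per the abstract's description."""
    x_rec = dec(enc(x))                      # auto-encoder reconstruction
    rec_loss = F.l1_loss(x_rec, x)
    d_real, d_fake = disc(x), disc(x_rec.detach())
    disc_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    gen_loss = rec_loss \
             + F.binary_cross_entropy(disc(x_rec), torch.ones_like(d_real))
    return disc_loss, gen_loss
```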
| Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour – Facebook Research |
This paper demonstrates that, on the ImageNet dataset, large minibatches cause optimization difficulties, but that when these are addressed, the trained networks exhibit good generalization.
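The fixes the paper reports center on a linear learning-rate scaling rule plus a gradual warmup; here is a minimal sketch of that schedule (the constants follow the paper’s ImageNet setup: base rate 0.1 at batch size 256, warmup over the first 5 epochs).

```python
def learning_rate(epoch, batch_size, base_lr=0.1, base_batch=256,
                  warmup_epochs=5):
    """Linear scaling rule: the learning rate grows with the minibatch
    size; a gradual warmup ramps it up to avoid early instability."""
    target_lr = base_lr * batch_size / base_batch
    if epoch < warmup_epochs:
        return target_lr * (epoch + 1) / warmup_epochs
    return target_lr

print(learning_rate(0, 8192))  # 0.64 at the start of warmup
print(learning_rate(5, 8192))  # 3.2 once warmup is done (0.1 * 8192 / 256)
```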