| June 14 · Issue #87 |
As we just crossed 8000 subscribers, we would like to thank you once again for all of your support. As always, if you want to help us grow this great community of deep learning enthusiasts, simply share this issue with friends and colleagues.
See you next week!
| AI at Google: our principles |
Following the heated internal discussion around its drone AI work for the military, Google has decided to publicly lay out its principles for AI products.
| Microsoft Azure will soon offer machines with up to 12 TB of memory |
Still haven’t trained on that huge set of cat pictures you collected last year, because your data pipeline is slow and you don’t want to do distributed training? Microsoft has got you covered.
| When the bubble bursts… |
A thoughtful article on the current state of AI, asking whether we are in an AI bubble and, if so, when it might pop. Either way, the author includes some helpful tips on how you might prepare to weather a burst comfortably.
| Why the Future of Machine Learning is Tiny |
This article looks at today’s chip market and makes some predictions about the role of machine learning on embedded devices, arguing that this market will become increasingly important to the machine learning industry.
| Why This Startup Created A Deep Learning Chip For Autonomous Vehicles |
An Israeli artificial intelligence startup has raised $12.5 million in funding for a deep learning processor it plans to apply to autonomous vehicles.
Mention DLWEEKLY for a $400 discount on any Lambda Quad!
| Building the Software 2.0 Stack by Andrej Karpathy from Tesla |
An enlightening talk by Andrej Karpathy on the modern AI-based software stack, the challenges of data labeling, and the infrastructure needed to run the 2.0 stack. Contains lots of interesting internals from Tesla’s vision team.
| One-shot object detection |
Matthijs Hollemans decided to dive deep into one-shot object detection models and created an incredibly extensive and detailed blog post explaining how and why they work. Great read!
| Improving Deep Learning Performance with AutoAugment |
AutoAugment automatically designs custom data augmentation policies for computer vision datasets by guiding the selection of basic image transformations, such as flipping an image horizontally or vertically, rotating it, or changing its colors. Looks pretty interesting; see the sketch below for the kind of policy it searches over.
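To make the idea concrete, here is a minimal hand-written policy of the sort AutoAugment learns to produce, sketched with torchvision’s standard transform API. The specific operations, probabilities, and input file name are illustrative assumptions, not a learned policy:

```python
# Illustrative sketch: a hand-written augmentation policy of the kind
# AutoAugment searches for automatically. Operations and probabilities
# are arbitrary examples, not a learned policy.
from torchvision import transforms
from PIL import Image

policy = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                  # flip horizontally
    transforms.RandomApply([transforms.RandomRotation(15)],  # rotate up to +/-15 deg
                           p=0.3),
    transforms.RandomApply([transforms.ColorJitter(brightness=0.4,
                                                   saturation=0.4)],
                           p=0.3),                           # perturb colors
])

img = Image.open("cat.jpg")   # hypothetical input image
augmented = policy(img)       # one randomly augmented sample
```

AutoAugment’s contribution is replacing this manual guesswork with a search over which operations to apply, in what order, and with what probabilities and magnitudes.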
| Training a Text Classifier with Create ML and the Natural Language Framework |
Apple will start shipping pretrained models embedded in iOS 12 and announced a ‘Create ML’ tool that allows fine-tuning small models on top of them to perform different tasks. This article takes a look at the new tool and how well it performs.
| LSTM · ml5js |
Built on top of TensorFlow.js, ml5js aims to make machine learning accessible to a broad audience of artists, creative coders, and students. In this little example, you’ll learn how to build a model that ends your sentence just as Hemingway would have done.
| Improving Language Understanding with Unsupervised Learning |
OpenAI combined two ideas, transformers and unsupervised pretraining, and managed to achieve state-of-the-art results on a diverse set of language tasks; the sketch below illustrates the two-stage recipe.
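For intuition, here is a minimal PyTorch sketch of the recipe, not OpenAI’s actual model: the tiny transformer, its dimensions, and the two-class task are stand-in assumptions.

```python
# Minimal sketch of the two-stage recipe (not OpenAI's actual code):
# 1) pretrain a language model on unlabeled text,
# 2) fine-tune it with a small task head on labeled data.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Stand-in for the transformer language model."""
    def __init__(self, vocab_size=10000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4), num_layers=2)
        self.lm_head = nn.Linear(dim, vocab_size)

    def features(self, tokens):             # tokens: (seq, batch) ids
        return self.encoder(self.embed(tokens))

    def forward(self, tokens):              # next-token logits for pretraining
        return self.lm_head(self.features(tokens))

# Stage 1: pretrain with a next-token prediction (cross-entropy) loss
# over a large unlabeled text corpus (training loop elided).
model = TinyLM()

# Stage 2: reuse the pretrained features with a small classifier head,
# e.g. for a hypothetical 2-class sentiment task.
classifier = nn.Linear(128, 2)

def classify(tokens):
    h = model.features(tokens).mean(dim=0)  # pool over sequence positions
    return classifier(h)

tokens = torch.randint(0, 10000, (20, 1))   # fake batch: 20 tokens, batch of 1
logits = classify(tokens)                    # -> shape (1, 2)
```

The key point is that the expensive part, learning the features, needs no labels at all; only the small head on top is trained per task.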
| Why do deep convolutional networks generalize so poorly to small image transformations? |
Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. This paper shows that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels and that this failure of generalization also happens with other realistic small image transformations.
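A quick way to observe the effect yourself is to shift a test image by a few pixels and compare a pretrained network’s predictions. Here is a rough sketch using an off-the-shelf ResNet50; the image file and shift range are arbitrary assumptions, and this is a demo of the phenomenon, not the authors’ experimental code:

```python
# Assumption-laden demo of the paper's core observation: shift an image
# by a few pixels and watch a pretrained CNN's prediction change.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True).eval()
prep = transforms.Compose([
    transforms.Resize(256),   # shortest side -> 256 px
    transforms.ToTensor(),
])

img = Image.open("dog.jpg")   # hypothetical test image
with torch.no_grad():
    for dx in range(0, 8, 2):  # horizontal shifts of 0, 2, 4, 6 pixels
        # crop a fixed 224x224 window whose origin moves by dx pixels
        x = prep(img).unsqueeze(0)[:, :, 16:240, 16 + dx:240 + dx]
        probs = torch.softmax(model(x), dim=1)
        top_p, top_c = probs.max(dim=1)
        print(f"shift={dx}px  class={top_c.item()}  p={top_p.item():.3f}")
```

If the paper’s finding holds, the top-class probability (and sometimes the class itself) can swing noticeably between shifts of only a couple of pixels.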