| April 3 · Issue #103 |
| Turing Award Won by 3 Pioneers in Artificial Intelligence - The New York Times |
For their work on neural networks, Geoffrey Hinton, Yann LeCun, and Yoshua Bengio will share $1 million for what many consider the Nobel Prize of computing.
| Amazon's AWS Deep Learning Containers simplify AI app development | VentureBeat |
Amazon’s Deep Learning Containers support popular deep learning frameworks, including Google’s TensorFlow and Apache MXNet.
| Teaching machines to reason about what they see | MIT News |
MIT researchers show that merging statistical and symbolic artificial intelligence promises to enable computers to reason more like humans. Their hybrid model can learn object-related concepts like color and shape, and leverage that knowledge to interpret complex relationships.
| Tracking Readers’ Eye Movements Can Help Computers Learn | WIRED |
As we read, our eyes reveal which words go together and which matter most. Researchers are using that eye-tracking data to help neural networks understand language.
| Inmates in Finland are training AI as part of prison labor - The Verge |
Inmates at two prisons in Finland are doing a new type of prison labor: classifying data to train artificial intelligence algorithms for a startup. The startup, Vainu, sees the partnership as a kind of prison reform that teaches valuable skills, but other experts say it plays into the exploitative economics of prisoners being required to work for very low wages.
| Zoom in... enhance: a Deep Learning based magnifying glass - part 2 |
Data augmentation and loss functions for improving super resolution results.
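The loss-function side of the post is easy to ground in code. Below is a minimal PyTorch sketch of one common super-resolution recipe, our own illustration rather than the author's code: a pixel-wise L1 term plus a perceptual term computed on frozen, pretrained VGG16 features. The layer cut-off and the 0.1 weight are arbitrary placeholders.

```python
import torch.nn as nn
import torchvision.models as models

class SRLoss(nn.Module):
    """Pixel-space L1 loss plus a VGG16 feature-space (perceptual) loss."""
    def __init__(self, feat_weight=0.1):
        super().__init__()
        # Frozen feature extractor (downloads pretrained weights on first use);
        # the slice up to layer 16 (through relu3_3) is an arbitrary choice.
        vgg = models.vgg16(pretrained=True).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.feat_weight = feat_weight

    def forward(self, sr, hr):
        pixel = self.l1(sr, hr)                          # match pixels
        perceptual = self.l1(self.vgg(sr), self.vgg(hr)) # match features
        return pixel + self.feat_weight * perceptual
```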
| March Madness — Analyze video to detect players, teams, and who attempted the basket |
Combining deep learning with traditional computer vision techniques to track basketball statistics and events.
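As a flavor of the traditional computer-vision side, here is a hypothetical sketch, not taken from the article, of one classic trick: assigning a detected player to a team by comparing the average color of the player crop against reference jersey colors. The crop and the colors below are placeholders.

```python
import numpy as np

def assign_team(player_crop, team_colors):
    """player_crop: (H, W, 3) RGB array cropped around a detected player.
    team_colors: dict mapping team name -> reference (R, G, B) jersey color.
    Returns the team whose reference color is nearest the crop's mean color."""
    mean_color = player_crop.reshape(-1, 3).mean(axis=0)
    return min(team_colors,
               key=lambda t: np.linalg.norm(mean_color - np.array(team_colors[t])))

# Hypothetical usage with placeholder colors:
crop = np.random.randint(0, 256, size=(64, 32, 3)).astype(float)
teams = {"home": (200, 30, 40), "away": (250, 250, 250)}
print(assign_team(crop, teams))
```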
| The Illustrated Word2vec – Jay Alammar |
Fantastic visual explanations of Word2Vec embeddings.
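For readers who want to poke at the mechanics behind the visuals, here is a compact skip-gram-with-negative-sampling loop in plain NumPy on a toy corpus. It is an illustration only; real use would reach for a library such as gensim.

```python
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window, lr = len(vocab), 16, 2, 0.05

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # center-word vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(200):
    for pos, word in enumerate(corpus):
        c = idx[word]
        lo, hi = max(0, pos - window), min(len(corpus), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos == pos:
                continue
            o = idx[corpus[ctx_pos]]
            # one positive pair plus 5 uniformly sampled negatives
            # (collisions with the positive are left in, as noise)
            targets = [(o, 1.0)] + [(n, 0.0) for n in rng.integers(0, V, size=5)]
            grad_in = np.zeros(D)
            for t, label in targets:
                score = sigmoid(W_in[c] @ W_out[t])
                g = score - label            # logistic-loss gradient
                grad_in += g * W_out[t]
                W_out[t] -= lr * g * W_in[c]
            W_in[c] -= lr * grad_in

# nearest neighbours of "fox" by cosine similarity (the word itself ranks first)
v = W_in[idx["fox"]]
sims = (W_in @ v) / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v) + 1e-9)
print([vocab[i] for i in np.argsort(-sims)[:3]])
```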
| A Practical Guide To Hyperparameter Optimization. |
Training deep learning models can be tough. They don’t work without the right hyperparameters. Here’s how you can use algorithms to automate the process.
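In the spirit of the guide, here is a minimal random-search sketch of our own; the search space and the objective below are placeholders for a real train-and-validate run.

```python
import random

def train_and_validate(params):
    # Placeholder: in practice, train a model with `params` and
    # return its validation score.
    return -((params["lr"] - 0.01) ** 2) - 0.001 * params["hidden"]

space = {
    "lr": lambda: 10 ** random.uniform(-4, -1),  # log-uniform learning rate
    "hidden": lambda: random.choice([64, 128, 256, 512]),
    "dropout": lambda: random.uniform(0.0, 0.5),
}

best_score, best_params = float("-inf"), None
for _ in range(50):
    params = {name: sample() for name, sample in space.items()}
    score = train_and_validate(params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```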
| GitHub - wuhuikai/FastFCN: FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation. |
The official PyTorch implementation of FastFCN, which replaces dilated convolutions in the backbone with a Joint Pyramid Upsampling (JPU) module to speed up semantic segmentation.
| GitHub - deep-learning-notes/seminars/2019-03-Neural-Ordinary-Differential-Equations |
Seminar materials on the Neural Ordinary Differential Equations paper; a minimal sketch of the idea follows.
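As a taste of that idea, here is an illustrative PyTorch sketch, not from the seminar materials: a network parameterizes the derivative dx/dt = f(t, x), and the model's "depth" becomes numerical integration of that ODE. The paper uses adaptive solvers and the adjoint method; this fixed-step Euler loop is only for intuition.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterizes the derivative dx/dt = f(t, x)."""
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, x):
        t_col = torch.full_like(x[:, :1], t)      # broadcast time as a feature
        return self.net(torch.cat([x, t_col], dim=1))

def odeint_euler(f, x0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler integration; gradients flow through the unrolled loop."""
    x, t = x0, t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        x = x + h * f(t, x)
        t += h
    return x

f = ODEFunc()
x0 = torch.randn(8, 2)
x1 = odeint_euler(f, x0)   # x1 plays the role of the network's output
print(x1.shape)
```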
| [1807.06653] Invariant Information Clustering for Unsupervised Image Classification and Segmentation |
“We present a novel clustering objective that learns a neural network classifier from scratch, given only unlabelled data samples. The model discovers clusters that accurately match semantic classes, achieving state-of-the-art results in eight unsupervised clustering benchmarks spanning image classification and segmentation. These include STL10, an unsupervised variant of ImageNet, and CIFAR10, where we significantly beat the accuracy of our closest competitors by 8 and 9.5 absolute percentage points respectively. The method is not specialised to computer vision and operates on any paired dataset samples[.]”
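The objective is compact enough to sketch. Below is an unofficial PyTorch rendering of the IIC loss as described in the abstract, assuming p1 and p2 are softmax outputs of the same classifier on two views of paired samples: form the joint distribution over cluster assignments, symmetrize it, and maximize its mutual information.

```python
import torch

def iic_loss(p1, p2, eps=1e-8):
    """Negative mutual information between paired cluster assignments.
    p1, p2: (n, C) softmax outputs for two views of the same n samples."""
    P = (p1.unsqueeze(2) * p2.unsqueeze(1)).mean(dim=0)  # (C, C) joint distribution
    P = ((P + P.t()) / 2).clamp(min=eps)                 # symmetrize, avoid log(0)
    Pi = P.sum(dim=1, keepdim=True)                      # row marginal
    Pj = P.sum(dim=0, keepdim=True)                      # column marginal
    return -(P * (torch.log(P) - torch.log(Pi) - torch.log(Pj))).sum()
```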
| [1903.06048] MSG-GAN: Multi-Scale Gradient GAN for Stable Image Synthesis |
“While Generative Adversarial Networks (GANs) have seen huge successes in image synthesis tasks, they are notoriously difficult to use, in part due to instability during training. One commonly accepted reason for this instability is that gradients passing from the discriminator to the generator can quickly become uninformative, due to a learning imbalance during training. In this work, we propose the Multi-Scale Gradient Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing this problem which allows the flow of gradients from the discriminator to the generator at multiple scales. This technique provides a stable approach for generating synchronized multi-scale images.”
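To make the multi-scale gradient flow concrete, here is a much-simplified, unofficial sketch of the wiring: each generator block taps out an RGB image through a 1x1 convolution, and the discriminator injects the matching-scale image at each of its stages, so every scale receives gradients directly from the discriminator. Channel widths, depths, and losses are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MSGGenerator(nn.Module):
    """Each block doubles resolution and taps out an RGB image (1x1 conv)."""
    def __init__(self, z_dim=64, ch=64, n_scales=3):
        super().__init__()
        self.init = nn.Sequential(nn.ConvTranspose2d(z_dim, ch, 4), nn.ReLU())  # 4x4
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Upsample(scale_factor=2),
                          nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
            for _ in range(n_scales))
        self.to_rgb = nn.ModuleList(nn.Conv2d(ch, 3, 1) for _ in range(n_scales))

    def forward(self, z):
        x = self.init(z.view(z.size(0), -1, 1, 1))
        images = []
        for block, rgb in zip(self.blocks, self.to_rgb):
            x = block(x)
            images.append(rgb(x))          # 8x8, 16x16, 32x32, ...
        return images

class MSGDiscriminator(nn.Module):
    """Mirrors the generator: each stage also ingests the matching-scale image,
    so gradients reach the generator at every scale."""
    def __init__(self, ch=64, n_scales=3):
        super().__init__()
        self.from_rgb = nn.ModuleList(nn.Conv2d(3, ch, 1) for _ in range(n_scales))
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch if i == 0 else 2 * ch, ch, 3, padding=1),
                          nn.LeakyReLU(0.2), nn.AvgPool2d(2))
            for i in range(n_scales))
        self.head = nn.Conv2d(ch, 1, 4)    # 4x4 features -> scalar score

    def forward(self, images):
        images = images[::-1]              # highest resolution first
        x = self.blocks[0](self.from_rgb[0](images[0]))
        for i in range(1, len(self.blocks)):
            x = torch.cat([x, self.from_rgb[i](images[i])], dim=1)
            x = self.blocks[i](x)
        return self.head(x).view(-1)

G, D = MSGGenerator(), MSGDiscriminator()
scores = D(G(torch.randn(2, 64)))          # one score per sample
print(scores.shape)
```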
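If there’s anything you think we missed or want to see in next week’s issue, send us a note on Twitter: @dl_weekly
Until next week!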