|July 19 · Issue #90 |
Howdy folks and welcome to another week in deep learning!
Happy reading and hacking!
| An Overview of National AI Strategies – Politics + AI |
The race to become the global leader in artificial intelligence (AI) has officially begun. In the past fifteen months, 12 countries have released strategies to promote the use and development of AI.
| Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras |
Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.
| Apple’s New AI Chief Takes on Oversight of Siri |
Apple Inc.’s new head of artificial intelligence will also oversee the Siri digital assistant, taking on that responsibility from software executive Craig Federighi, according to the company’s website.
| Facebook AI Research Expands With New Academic Collaborations |
Facebook AI Research is opening new offices in Seattle, Pittsburgh, London, and Menlo Park, where FAIR researchers will split their time between FAIR and a university.
| Troubling Trends in Machine Learning Scholarship |
A pertinent critique of recent trends in machine learning scholarship, in which the author makes the following salient criticisms:
- Failure to distinguish between explanation and speculation.
- Failure to identify the sources of empirical gains, e.g. emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning.
- Mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g. by confusing technical and non-technical concepts.
- Misuse of language, e.g. by choosing terms of art with colloquial connotations or by overloading established technical terms.
Deep Learning Weekly has secured a £200 discount off registration, quote DLW200!
| Design Patterns for Production NLP Systems |
A great overview of design patterns for NLP systems, covering:
- online vs. offline systems
- interactive vs. non-interactive systems
- unimodal vs. multimodal systems
- end-to-end systems vs. piecewise systems
- closed domain vs. open domain systems
- monolingual vs. multilingual systems
| Feature-wise Transformations |
Many real-world problems require integrating multiple sources of information. Sometimes these problems involve multiple, distinct modalities of information — vision, language, audio, etc. This context-based processing is referred to as conditioning: the computation carried out by a model is conditioned or modulated by information extracted from an auxiliary input. Finding an effective way to condition on or fuse sources of information is an open research problem, and this article explains a specific family of approaches called feature-wise transformations.
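The simplest member of this family is feature-wise linear modulation (FiLM): a small network predicts a per-feature scale and shift from the conditioning input, which then modulates the main network's feature maps. A minimal NumPy sketch (the linear conditioning network and all shapes here are illustrative assumptions, not the article's implementation):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: scale and shift each feature
    map using per-channel parameters predicted from an auxiliary
    (conditioning) input."""
    # features: (batch, channels, height, width)
    # gamma, beta: (batch, channels) -- broadcast over spatial dims
    return gamma[:, :, None, None] * features + beta[:, :, None, None]

# Toy conditioning network: a single linear map from a conditioning
# vector (e.g. a sentence embedding) to per-channel gamma and beta.
rng = np.random.default_rng(0)
batch, channels, h, w, cond_dim = 2, 4, 8, 8, 6
features = rng.standard_normal((batch, channels, h, w))
cond = rng.standard_normal((batch, cond_dim))
W = rng.standard_normal((cond_dim, 2 * channels))
gamma_beta = cond @ W
gamma, beta = gamma_beta[:, :channels], gamma_beta[:, channels:]

out = film(features, gamma, beta)
print(out.shape)  # (2, 4, 8, 8)
```

In practice the conditioning network is itself learned end-to-end, and the modulation is typically inserted after a normalization layer.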
| Using Deep Learning to Automatically Rank Millions of Hotel Images |
An interesting post looking at how idealo.de built a deep learning model to automatically assess image quality, implementing aesthetic and technical image-quality classifiers based on Google’s research paper “NIMA: Neural Image Assessment”.
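NIMA's key idea is that the network predicts a full probability distribution over the score buckets 1–10 rather than a single number; an image's quality score is then the mean of that distribution. A tiny sketch of the scoring step (the example distribution is made up for illustration):

```python
import numpy as np

def nima_mean_score(prob_dist):
    """NIMA-style scoring: given a predicted probability distribution
    over the ten score buckets 1..10, return the mean score."""
    scores = np.arange(1, 11)
    return float((prob_dist * scores).sum())

# Hypothetical softmax output for one image (sums to 1):
p = np.array([0.0, 0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.15, 0.07, 0.03])
score = nima_mean_score(p)
print(round(score, 2))  # 6.38
```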
| Switchable-Normalization |
Code accompanying the Switchable Normalization paper, which proposes letting each normalization layer learn to select or blend batch, instance, and layer normalization statistics via learned importance weights.
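The core idea can be sketched in a few lines of NumPy: compute the statistics each normalizer would use, then blend them with softmax weights. This is a simplified illustration, not the repository's implementation (the paper learns separate weights for means and variances, plus an affine transform, which are omitted here):

```python
import numpy as np

def switchable_norm(x, weights, eps=1e-5):
    """Simplified switchable normalization for a conv feature map
    x of shape (batch, channels, height, width): blend batch-,
    instance-, and layer-norm statistics with softmax weights."""
    mu_bn,  var_bn  = x.mean(axis=(0, 2, 3), keepdims=True), x.var(axis=(0, 2, 3), keepdims=True)  # batch norm
    mu_in,  var_in  = x.mean(axis=(2, 3),    keepdims=True), x.var(axis=(2, 3),    keepdims=True)  # instance norm
    mu_ln,  var_ln  = x.mean(axis=(1, 2, 3), keepdims=True), x.var(axis=(1, 2, 3), keepdims=True)  # layer norm

    # Softmax so the three importance weights sum to one.
    w = np.exp(weights - weights.max())
    w = w / w.sum()

    mu  = w[0] * mu_bn  + w[1] * mu_in  + w[2] * mu_ln
    var = w[0] * var_bn + w[1] * var_in + w[2] * var_ln
    return (x - mu) / np.sqrt(var + eps)

x = np.random.default_rng(0).standard_normal((2, 3, 4, 4))
y = switchable_norm(x, np.zeros(3))  # equal weight on all three normalizers
print(y.shape)  # (2, 3, 4, 4)
```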
| A Project Based Introduction to TensorFlow.js – Knowledge-Exploration Systems |
A great beginner TensorFlow tutorial demonstrating how it was used in the simple project Neural Titanic. This project visualizes the evolution of the predictions of a single layer neural network as it is being trained on the tabular Titanic Dataset for the task of binary classification of passenger survival.
| Google AI Blog: Improving Connectomics by an Order of Magnitude |
The field of connectomics aims to comprehensively map the structure of the neuronal networks that are found in the nervous system, in order to better understand how the brain works. This process requires imaging brain tissue in 3D at nanometer resolution (typically using electron microscopy), and then analyzing the resulting image data to trace the brain’s neurites and identify individual synaptic connections. Due to the high resolution of the imaging, even a cubic millimeter of brain tissue can generate over 1,000 terabytes of data! In collaboration with researchers from the Max Planck Institute of Neurobiology, Google AI designed a recurrent neural network that improves the accuracy of automated interpretation of connectomics data by an order of magnitude over previous deep learning techniques.
| Glow: Better Reversible Generative Models |
OpenAI introduces Glow, a reversible generative model which uses invertible 1x1 convolutions. The model can generate realistic high resolution images, supports efficient sampling, and discovers features that can be used to manipulate attributes of data. The code will be released and there is a fun online visualization tool.
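The invertible 1x1 convolution at the heart of Glow is just a shared c-by-c matrix multiplied into every pixel's channel vector: it is invertible whenever the matrix is, and its log-determinant contribution to the flow's likelihood has a cheap closed form. A minimal NumPy sketch of the idea (illustrative only; the paper parameterizes W via an LU decomposition for efficiency, which is omitted here):

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """Apply the same c-by-c matrix W to every pixel's channel
    vector. Log-det contribution per image is h * w * log|det W|."""
    b, c, h, w = x.shape
    y = np.einsum('ij,bjhw->bihw', W, x)
    logdet = h * w * np.log(abs(np.linalg.det(W)))
    return y, logdet

def invert(y, W):
    """Exact inverse: multiply by W^{-1} channel-wise."""
    return np.einsum('ij,bjhw->bihw', np.linalg.inv(W), y)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 3, 4, 4))
W = np.linalg.qr(rng.standard_normal((3, 3)))[0]  # rotation: |det| = 1

y, logdet = invertible_1x1_conv(x, W)
x_rec = invert(y, W)
print(np.allclose(x, x_rec))  # True
```

The invertibility is what lets the model compute exact likelihoods and map samples back and forth between data space and latent space.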
| Proceedings of Machine Learning Research |
Proceedings of the 35th International Conference on Machine Learning, held in Stockholm, Sweden.
| An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution |
For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper the authors show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and one-hot pixel space. Their CoordConv solution to this puzzling phenomenon is shown to improve GAN, R-CNN, and RL architectures.
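The fix is strikingly simple: give the convolution access to absolute position by appending coordinate channels to its input. A minimal NumPy sketch of a CoordConv input layer (shapes and the [-1, 1] scaling follow the paper's description; the rest is illustrative):

```python
import numpy as np

def add_coord_channels(x):
    """Append two channels holding each pixel's (row, col)
    coordinates, scaled to [-1, 1], so that a following standard
    convolution can condition on absolute position."""
    b, c, h, w = x.shape
    i = np.linspace(-1.0, 1.0, h)[None, None, :, None]  # row coords
    j = np.linspace(-1.0, 1.0, w)[None, None, None, :]  # col coords
    ii = np.broadcast_to(i, (b, 1, h, w))
    jj = np.broadcast_to(j, (b, 1, h, w))
    return np.concatenate([x, ii, jj], axis=1)

x = np.zeros((2, 3, 5, 5))
y = add_coord_channels(x)
print(y.shape)  # (2, 5, 5, 5)
```

With these two extra channels, an ordinary convolution can learn the otherwise pathological coordinate-transform mapping, while remaining free to ignore them when translation invariance is what the task needs.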