| March 22 · Issue #33 |
There are plenty of great reads in this issue, and we hope you enjoy them as much as we did. As always, we appreciate you sharing this newsletter with your friends and colleagues.
See you next week!
| Y Combinator Introduces AI Track |
Y Combinator is introducing a vertical group within YC dedicated specifically to AI companies. According to the announcement, this move is predicated on the assumption that AI “might be the biggest technological leap since the Internet”. YC offers a set of perks to attract a flock of potential AI black swans to the next batch of YC-funded startups. Those perks include:
• Office hours with engineers experienced in ML to help overcome technical challenges.
• Extra cloud compute credits for GPU instances.
• Special talks by leaders in the field.
| GalaxyGAN: Recovering Features in Astrophysical Images |
Generative Adversarial Networks are used to recover features in astrophysical images of galaxies beyond the deconvolution limit. This is achieved by training the GAN on 4,550 galaxy images that are artificially degraded, which both provides clean/degraded training pairs and increases the size of the training dataset.
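The degrade-then-recover setup can be sketched in a few lines of NumPy. The blur and noise parameters below are illustrative assumptions, not the paper's actual degradation pipeline:

```python
import numpy as np

def degrade(image, blur_size=3, noise_sigma=0.05, seed=0):
    """Create an artificially degraded copy of a galaxy image:
    a simple box blur followed by additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    # Box blur: average each pixel over its (blur_size x blur_size) neighborhood.
    padded = np.pad(image, blur_size // 2, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for dy in range(blur_size):
        for dx in range(blur_size):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= blur_size ** 2
    # Additive Gaussian noise mimics observational degradation.
    return blurred + rng.normal(0.0, noise_sigma, size=image.shape)

# Each training example pairs the degraded input with the clean target,
# and the GAN learns to map the former back to the latter.
clean = np.ones((16, 16))
pair = (degrade(clean), clean)
```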
| Distill: A Modern Medium for Presenting Research |
A very interesting initiative to relegate carelessly formatted PDF papers to the paper bin of history. If you have ever read one of Chris Olah’s excellent blog posts on a particularly gnarly piece of neural network architecture, you know the explanatory power of visualizations. This project allows researchers to use the full range of expressive media the web offers to present their work, without sacrificing visibility or suffering lower academic standing because the work is not contained in a 20-page PDF on arxiv.org.
| So Your Company Wants to Do AI? |
Seasoned advice from industry veteran Eder Santana for those thinking about heading up the AI division at their company. TL;DR:
1. Don’t let data leave your engineers hanging: make data easy to access, load, and use; don’t let it become a bottleneck for your engineers.
2. Start with something you can visualize: use a tool like TensorBoard to explore and get a feel for the data and your models.
3. Define your validation/hard-cases dataset early on: be clear about how you measure performance, and think deeply about validation.
4. Premature scaling is the main cause of death for early-stage startups: an old lesson given new relevance by the hardware demands of deep learning; only scale up infrastructure if absolutely necessary.
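Point 3 can be made concrete with a small sketch (pure Python; the field names and helper are hypothetical, not from the talk): keep a fixed, seeded validation split plus a separately tracked hard-cases set, so performance numbers stay comparable across runs.

```python
import random

def split_dataset(examples, hard_case_ids, val_fraction=0.2, seed=42):
    """Carve out a fixed validation split plus a hand-picked
    'hard cases' set that is always evaluated separately."""
    hard = [ex for ex in examples if ex["id"] in hard_case_ids]
    rest = [ex for ex in examples if ex["id"] not in hard_case_ids]
    rng = random.Random(seed)          # fixed seed -> reproducible split
    rng.shuffle(rest)
    n_val = int(len(rest) * val_fraction)
    return {"train": rest[n_val:], "val": rest[:n_val], "hard": hard}

data = [{"id": i, "x": i} for i in range(100)]
splits = split_dataset(data, hard_case_ids={3, 7, 42})
```

Reporting metrics on `val` and `hard` separately makes regressions on the difficult cases visible even when aggregate accuracy improves.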
| Transfer Learning - Machine Learning's Next Frontier |
An excellent and thorough introduction to transfer learning and its increasing significance for academia and industry.
| Squeezing Deep Learning Into Mobile Phones |
A practical talk by Anirudh Koul on how to run Deep Neural Networks on memory- and energy-constrained devices like smartphones. Highlights some frameworks and best practices.
| Linear Algebra Cheat Sheet for Deep Learning |
A very handy and well-done cheat sheet for anyone in need of a linear algebra brush-up.
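For readers who want to pair the brush-up with something executable, the core operations such a cheat sheet covers map directly onto NumPy:

```python
import numpy as np

# A few staples of any linear algebra cheat sheet, written out in NumPy.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, -1.0])

matvec = A @ x                 # matrix-vector product
dot = x @ x                    # dot product
norm = np.linalg.norm(x)       # Euclidean norm
AT = A.T                       # transpose
Ainv = np.linalg.inv(A)        # inverse (det(A) = -2, so it exists)
```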
| Building Safe A.I. |
| Adversarial Autoencoders (with Pytorch) |
Great blog post giving an accessible introduction to the recently developed architecture of Adversarial Autoencoders.
| Google Research: An Upgrade to SyntaxNet, New Models and a Parsing Competition |
Google is releasing a major upgrade to SyntaxNet, a neural-network framework for analyzing and understanding the grammatical structure of sentences. The upgrade extends TensorFlow to allow joint modeling of multiple levels of linguistic structure, and to allow neural-network architectures to be created dynamically during the processing of a sentence or document.
| NakedTensor: Bare Bone Examples of Machine Learning in TensorFlow |
A dead-simple introduction to TensorFlow and a great starting point for anyone who likes to learn by example.
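NakedTensor's examples are written in TensorFlow; the same bare-bones idea, fitting y = w·x + b by gradient descent, can be sketched framework-free in NumPy (the data and hyperparameters here are our own illustrative choices):

```python
import numpy as np

# Bare-bones linear regression by gradient descent, in the spirit of
# NakedTensor's examples (which use TensorFlow; this is a plain-NumPy analogue).
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs + 1.0                      # ground truth: w = 2, b = 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    err = (w * xs + b) - ys
    # Gradients of the mean squared error with respect to w and b.
    w -= lr * 2.0 * np.mean(err * xs)
    b -= lr * 2.0 * np.mean(err)
```

After training, `w` and `b` converge to the ground-truth slope and intercept.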
| DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker |
DeepStack is an algorithm for imperfect-information settings that focuses computation on the relevant decision; it beat professional poker players in a study involving 44,000 hands of poker.
| Towards Diverse and Natural Image Descriptions via a Conditional GAN |
The authors introduce a new framework based on Conditional Generative Adversarial Networks (CGAN), which jointly learns a generator to produce descriptions conditioned on images and an evaluator to assess how well a description fits the visual content.
| Mask R-CNN |
Impressive research out of Facebook: a conceptually simple, flexible, and general framework for object instance segmentation. The approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance.
| Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization |
The authors present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time.
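The operation at the heart of the method, Adaptive Instance Normalization (AdaIN), is simple enough to sketch directly. This is a NumPy rendering of the published formula; the actual method applies it to VGG feature maps inside an encoder-decoder:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: align each channel of the
    content features to the mean/std of the style features.
    Arrays have shape (channels, height, width)."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sd = style.std(axis=(1, 2), keepdims=True)
    return s_sd * (content - c_mu) / (c_sd + eps) + s_mu

rng = np.random.default_rng(0)
content = rng.normal(size=(4, 8, 8))           # stand-in content features
style = rng.normal(2.0, 3.0, size=(4, 8, 8))   # stand-in style features
out = adain(content, style)
```

Because the style statistics are plain per-channel means and standard deviations, any style can be transferred in a single forward pass, which is what makes the real-time, arbitrary-style claim possible.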
| Dance Dance Convolution |
If you are as bored of the standard Dance Dance Revolution step charts as we are, then use this paper to train a generative LSTM that learns to choreograph a new step chart from a raw audio track.