| May 18 · Issue #41 |
So enjoy your read, start coding or both!
We also want to thank everyone who shared and promoted our issue last week; your support is very much appreciated! 💪
| NVIDIA Accelerates AI, Launches Volta, DGX Workstation, Robot Simulator & More |
Nvidia showed off its latest hardware at this year's GPU Technology Conference, with a strong focus on machine learning and high-performance computing. They announced the V100 'Volta', a P100 successor specialized for server deployment and machine learning tasks; a refreshed DGX-1; the DGX Station, a high-end workstation packing four V100 GPUs into a desktop-compatible case; and an upcoming "GPU Cloud". Take a look at all the announcements, but store away your credit card!
| Build and train machine learning models on our new Google Cloud TPUs |
Google announced their second-generation Tensor Processing Unit at this year's Google I/O conference, and unlike the first model, the new generation will be available to us 'ordinary mortals' as well. Google will begin offering the TPUs in its cloud solutions and will even provide a cluster of 1,000 devices to researchers in the newly announced TensorFlow Research Cloud. This all lines up quite well with our predictions for Google's AI strategy, and we're curious about the impact of such a device on the DL framework landscape.
| What we've learned so far... |
Following recent trouble over the handling of patient data, DeepMind has published a page covering its cooperation with the UK's National Health Service, what it has learned, and where it wants to improve its work and partnerships.
| Applying Artificial Intelligence in Medicine: Our Early Results |
A medical startup showcases the results they were able to achieve by applying deep learning to heart rate monitoring using an Apple Watch.
| Simpsons Detector |
A fun-to-read article by Zach Moshe on detecting the four main Simpsons characters in images. He explains in great detail his use of transfer learning, the creation of simulated training data, and even crowdsourcing to measure human performance, and he finishes up with his takeaways as well as a nice package of TensorFlow and Keras code.
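The article's transfer-learning approach builds on a pretrained Keras model; as a minimal, self-contained sketch of the core idea — freeze a pretrained feature extractor and train only a small classifier head — here is a numpy toy where the "backbone" is a stand-in fixed projection (the real article uses an ImageNet network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone (in the article: a Keras
# ImageNet model); here just a fixed random projection with ReLU.
W_frozen = rng.normal(size=(64, 8))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen: never updated

# Toy dataset whose labels are expressible in the frozen feature space
X = rng.normal(size=(200, 64))
scores = features(X) @ rng.normal(size=8)
y = (scores > np.median(scores)).astype(float)

# Only the small classifier head is trained (logistic regression)
w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w + b)))
    g = p - y
    w -= lr * features(X).T @ g / len(X)
    b -= lr * g.mean()

acc = ((features(X) @ w + b > 0) == (y == 1)).mean()
```

The point of the pattern: with the backbone frozen, only a handful of parameters are trained, so a few hundred (or even simulated) examples can be enough — which is exactly why it suits a small dataset of Simpsons images.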
| Second Place Solution for the 2017 National Data Science Bowl |
Julian de Wit describes his journey to scoring the second prize in the Data Science Bowl 2017. Trying to detect cancerous lesions in lungs, he employed a 3D convolutional neural net using the Keras library and TensorFlow on Windows. Includes code and some interesting insights into cancer detection.
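The defining ingredient of his solution is the 3D convolution, which slides a kernel through the depth axis of a CT volume as well as its height and width. A minimal numpy sketch of that single operation (valid padding, one channel; shapes are illustrative, real scans are far larger):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Single-channel 3D convolution with valid padding: the core op of a 3D CNN."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # weighted sum over a d x h x w neighborhood of the volume
                out[z, y, x] = np.sum(volume[z:z+d, y:y+h, x:x+w] * kernel)
    return out

# Toy CT-like volume: 8 slices of 16x16 voxels
vol = np.random.default_rng(1).normal(size=(8, 16, 16))
k = np.ones((3, 3, 3)) / 27.0   # 3x3x3 averaging kernel
out = conv3d_valid(vol, k)      # output shape: (6, 14, 14)
```

Scanning through depth is what lets the network see a nodule's full 3D shape rather than one slice at a time — the reason 3D nets did well on this lung-CT task.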
| Google’s TensorFlow Lite brings machine learning to Android devices |
At the annual I/O conference, Google announced a mobile-optimized version of TensorFlow called TensorFlow Lite, a specialized TensorFlow build that uses a new native machine learning API coming to Android this year. The approach is similar to Facebook's Caffe2Go, but Google may be able to integrate such an API much more deeply into the system, just as Apple did with its CPU-focused APIs. There is definitely a trend towards on-device computation, and we're excited to see what comes next.
| Roboschool |
OpenAI expanded its robotics AI efforts and enhanced its Gym with Roboschool, open-source software for robot simulation. It lets you simulate robots in different environments in order to train or test your models.
| Caffe2 adds 16 bit floating point training support on the NVIDIA Volta platform |
Following Nvidia's Volta announcement, Caffe2 announced support for the dramatically increased 16-bit floating point capabilities of these GPUs. This allows working with reduced precision, and therefore faster computation, while maintaining the same level of accuracy.
| Picasso: A free open-source visualizer for CNNs – merantix |
A new open-source library offering insights into your models by integrating partial occlusion and saliency map visualizations. These let you see which image features activate your neurons and bring at least some transparency to the learning process. Adding new visualizations is easy as well, so you can adapt it to fit your needs.
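The partial-occlusion idea itself is simple to sketch: slide an occluding patch over the input and record how much the model's class score drops at each position — large drops mark regions the model relies on. A hedged numpy sketch with a dummy scorer standing in for a real CNN (not Picasso's actual API):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, stride=4, fill=0.0):
    """Slide an occluding patch over the image and record the score drop
    at each position; positive values mean occlusion hurt the score."""
    base = score_fn(image)
    H, W = image.shape
    hm = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1))
    for i in range(hm.shape[0]):
        for j in range(hm.shape[1]):
            occluded = image.copy()
            occluded[i*stride:i*stride+patch, j*stride:j*stride+patch] = fill
            hm[i, j] = base - score_fn(occluded)
    return hm

# Dummy "model" whose score depends only on the top-left corner of the image
score = lambda img: img[:4, :4].sum()
img = np.ones((16, 16))
hm = occlusion_map(img, score)   # hm[0, 0] dominates: that region matters most
```

Plotted as a heatmap, `hm` is exactly the kind of overlay Picasso renders — no gradients required, just repeated forward passes.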
| Medical Image Net |
Stanford teased a new dataset consisting of at least half a petabyte of medical images. Considering the success and importance of ImageNet, this may lead to some serious results and we’re very excited!
| Using Machine Learning to Explore Neural Network Architecture |
At its I/O opening keynote, Google announced new progress in a field it calls "AutoML", in which a neural net generates new network architectures, then trains and evaluates them in order to further optimize performance. Sounds like some sort of future AI dystopia, but it seems to be working astoundingly well.
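Google's controller is an RNN trained with reinforcement learning, which is beyond a snippet, but the generate-train-evaluate loop it sits in can be sketched with plain random search over a made-up search space. Everything below — the space and the scoring function — is a hypothetical stand-in for "train the candidate network and measure validation accuracy":

```python
import random

random.seed(0)

# Hypothetical toy search space over child-network architectures
SPACE = {
    "layers": [1, 2, 3, 4],
    "width": [16, 32, 64, 128],
    "activation": ["relu", "tanh"],
}

def evaluate(arch):
    """Stand-in for training the candidate and returning validation accuracy.
    This made-up score just rewards depth, width, and ReLU."""
    score = 0.5 + 0.05 * arch["layers"] + 0.001 * arch["width"]
    if arch["activation"] == "relu":
        score += 0.05
    return min(score, 0.99)

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

# Generate -> evaluate -> keep the best. AutoML replaces the random
# sampling with a learned controller that is rewarded for good children.
best = max((sample() for _ in range(50)), key=evaluate)
```

The expensive part in practice is `evaluate` — each call means training a full network — which is why the real system spends its effort making the proposal step smarter than random.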
| Network Dissection |
Network Dissection is a framework for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts.
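The alignment score at the heart of the framework is an intersection-over-union: threshold a unit's activation map into a binary mask and compare it against a ground-truth concept segmentation. A small numpy sketch of that scoring step (toy maps; the helper name is ours, not the framework's API):

```python
import numpy as np

def concept_iou(activation, concept_mask, threshold):
    """Score a unit against a concept: IoU between the unit's thresholded
    activation map and the concept's segmentation mask."""
    unit_mask = activation > threshold
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union else 0.0

# Toy 8x8 maps: a unit firing on the left half of the image,
# and a concept (say, "sky") whose mask is also the left half.
act = np.zeros((8, 8)); act[:, :4] = 1.0
concept = np.zeros((8, 8), dtype=bool); concept[:, :4] = True
iou = concept_iou(act, concept, threshold=0.5)   # perfect alignment: 1.0
```

A unit is then labeled with whichever concept gives it the highest IoU, and a layer's interpretability is summarized by how many of its units pass a fixed IoU cutoff.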