|May 14 · Issue #84 |
As always, we hope you’ll enjoy reading as much as we did and would appreciate you sharing this newsletter with friends and colleagues.
See you later this week!
| Google Introduces Lifelike AI Experience With Google Duplex |
Probably the most discussed topic of last week's Google I/O 2018, the announcement of Google Duplex through a pre-recorded demo has caused quite an uproar throughout the community. Albeit technically impressive, the assistant's attempt to pass as human was heavily criticized. Google has already stated that the final version will disclose its nature, but it won't launch in all states due to certain laws. For more demos and details, see Google's blog post.
| DeepMind has trained an AI to understand how your brain thinks |
Fascinating article on DeepMind research into the spatial awareness of agents. Researchers trained a neural network on navigation tasks and found that it developed activity patterns similar to grid cells in the mammalian brain. These cells are thought to be responsible for spatial awareness, and replicating them may allow future networks to become more efficient at navigation tasks.
| Google announces a new generation for its TPU machine learning hardware |
Along with all the Google I/O announcements came a look at the latest TPU iteration, 3.0, which appears to build on the previous generation's hardware but delivers up to 8x the performance. Clocking in at 100 petaflops, the hardware now requires liquid cooling, which in turn allows denser deployment in data centers.
| Lobe’s ridiculously simple machine learning platform aims to empower non-technical creators |
Last week, lobe.ai launched their new machine learning platform. Incredibly simple-looking and filled with great examples, the service looks like an interesting way to easily create models and use them in apps and services.
| Join the fastest growing Deep Learning Developer Community (sponsored) |
Deep Learning Studio is a free, open, no-coding platform loved by developers, researchers, and students. Try it out (it's free).
| Facebook’s Field Guide to Machine Learning video series |
Developed by the Facebook ads machine learning team, this six-part video series shares real-world best practices and provides practical tips on applying machine learning to real-world problems.
| Using Word2Vec for Better Embeddings of Categorical Features |
This article explains how to use Word2Vec in place of standard embedding layers, applying an existing method to a new setting to obtain more meaningful and general representations of categorical features.
| 30+ Best Practices |
To intermediate and expert deep learning researchers, this course will appear like a 101 course, with more breadth than depth. But for university students like me, who are not new to Deep Learnin…
| Hyper-parameters in Action! Introducing DeepReplay |
This article introduces DeepReplay, a package for visualizing the training process of a neural network. By plotting the feature space, losses, weight distribution histograms, and more over time, you'll be able to gain new insights into your training process. Looks really interesting!
| Pretrained models for TensorFlow.js |
The team behind TensorFlow.js has started collecting a model zoo. Not very extensive yet, but might come in handy in the future.
| Announcing Open Images V4 and the ECCV 2018 Open Images Challenge |
Google has updated its already extensive Open Images dataset with more than 15 million bounding boxes across 600 categories, drawn by professional annotators. Very interesting challenge and valuable research.
| How Robust are Deep Neural Networks? |
In this paper, the authors evaluate the robustness of three recurrent neural networks to tiny perturbations on three widely used datasets, arguing that high accuracy does not always imply a stable and robust system (with respect to bounded perturbations, adversarial attacks, etc.).