Deep Learning Weekly Issue #151
6.5 million coronavirus Tweets, OpenAI's commercial text generation, TF Lite on MCUs, CVPR 2020, and more
Matthew Moellman | Jun 17, 2020
This week in deep learning we bring you what 6.5 million #coronavirus tweets reveal about people’s thoughts during the pandemic (with data), a new machine learning framework for both deep learning and traditional algorithms, and OpenAI's commercial text generation product.
You may also enjoy some technical neural network tutorials like this one about weight pruning or this one about EfficientNet.
For some computer vision content, check out the best papers at CVPR 2020, the results of Facebook's deepfake detection contest, and this paper about unsupervised image-to-image translation (with code).
As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.
Until next week!
Machine learning company DataRobot Inc. today made another acquisition, buying Boston Consulting Group’s artificial intelligence technology platform.
Facebook contest reveals deepfake detection is still an "unsolved problem"
Facebook has announced the results of its first Deepfake Detection Challenge, an open competition to find algorithms that can spot AI-manipulated videos.
The two-year fight to stop Amazon from selling face recognition to the police
This week’s moves from Amazon, Microsoft, and IBM mark a major milestone for researchers and civil rights advocates in a long and ongoing fight over face recognition in law enforcement.
OpenAI’s Text Generator Is Going Commercial
The research institute was created to steer AI away from harmful uses. Now it’s competing with tech giants to sell a cloud-computing service to businesses.
CVPR 2020 Underway, Best Papers Announced
The 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) has announced its best paper awards.
Mobile + Edge
Microsoft Corp. today introduced a new edge video analytics product for its Azure cloud platform, as well as improved authentication features to help organizations better address the shift to remote work.
Running and Testing TF Lite on Microcontrollers without hardware in Renode
In this article, the author shows the basics of using Renode to run TensorFlow Lite on a virtual RISC-V MCU, with no physical hardware required.
Critical Capabilities For Edge Computing In Industrial IoT Scenarios
This article describes the building blocks of an industrial edge computing platform.
This project is using fitness trackers and AI to monitor workers' lockdown stress
PwC is harnessing AI and fitness-tracking wearables to gain a deeper understanding of how work and external stressors are impacting employees' state of mind.
An introduction to weight pruning.
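To make the idea concrete, here is a minimal NumPy sketch of magnitude-based weight pruning, the most common pruning criterion: zero out the fraction of weights with the smallest absolute values. This is an illustrative implementation, not necessarily the exact method covered in the linked tutorial.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude
    `sparsity` fraction of entries set to zero."""
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.5, -0.01, 0.3],
              [-0.02, 0.8, 0.001]])
pruned = magnitude_prune(w, 0.5)  # zeros out the 3 smallest-magnitude weights
```

In practice, frameworks such as TensorFlow Model Optimization apply this kind of mask gradually during training so the network can recover accuracy as sparsity increases.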
What 6.5 million #coronavirus tweets and Deep Topological Analysis reveal about people’s thoughts during the pandemic
DataRefiner applied Topological Data Analysis and Deep Learning to a large volume of textual data to reveal hidden patterns in discussions.
EfficientNet: Scaling of Convolutional Neural Networks done right
How to intelligently scale a CNN to achieve accuracy gains.
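The core idea behind EfficientNet is compound scaling: instead of growing depth, width, or input resolution independently, all three are scaled together by a single coefficient. A short sketch of the rule from the original EfficientNet paper (Tan & Le, 2019), using the coefficients the paper found by grid search:

```python
# Compound scaling: depth, width, and resolution all grow with one
# coefficient phi, under the constraint alpha * beta**2 * gamma**2 ≈ 2,
# so total FLOPs grow roughly 2**phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # values from the EfficientNet paper

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for a given phi."""
    depth = ALPHA ** phi
    width = BETA ** phi
    resolution = GAMMA ** phi
    return depth, width, resolution

# e.g. phi = 2 roughly corresponds to an EfficientNet-B2-style scale-up
d, w, r = compound_scale(2)
```

The constraint on alpha, beta, and gamma is what keeps the compute budget predictable: each increment of phi approximately doubles FLOPs while balancing the three dimensions.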
6.5M tweets in English under #coronavirus, captured from 8 March to 24 April 2020.
Libraries & Code
Machine learning framework for both deep learning and traditional algorithms.
Code for the paper "VirTex: Learning Visual Representations from Textual Annotations."
Rethinking the Truly Unsupervised Image-to-Image Translation - Official PyTorch Implementation.
Papers & Publications
VirTex: Learning Visual Representations from Textual Annotations
Abstract: The de-facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pre-training to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pre-training, and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images.
Rethinking the Truly Unsupervised Image-to-Image Translation
Abstract: Every recent image-to-image translation model uses either image-level (i.e. input-output pairs) or set-level (i.e. domain labels) supervision at minimum. However, even the set-level supervision can be a serious bottleneck for data collection in practice. In this paper, we tackle image-to-image translation in a fully unsupervised setting, i.e., neither paired images nor domain labels. To this end, we propose the truly unsupervised image-to-image translation method (TUNIT) that simultaneously learns to separate image domains via an information-theoretic approach and generate corresponding images using the estimated domain labels. Experimental results on various datasets show that the proposed method successfully separates domains and translates images across those domains. In addition, our model outperforms existing set-level supervised methods under a semi-supervised setting, where a subset of domain labels is provided. The source code is available at this https URL.