Research


WEEKLY REVIEW


Trends


Top hashtags

Top influencers

Miles_Brundage
MSFTResearch
RichardSocher
deliprao
alexjc
antgoldbloom
edersantana
egrefen
hardmaru

Top sources

medium.com
microsoft.com
aclweb.org
eng.uber.com
fortune.com
forums.fast.ai
tabnine.com
deeplearningindaba.com
fhi.ox.ac.uk


News

Understanding implicit regularization in deep learning by analyzing trajectories of gradient descent

On Jul 12, 2019
@weballergy shared
RT @prfsanjeevarora: Remember matrix completion? Deep linear nets solve it better than the old nuclear norm algorithm. Analysis requires going beyond traditional optimization view and understanding #trajectories. Blog post by Nadav and Wei: https://t.co/TBX6Z9J6tq . Paper https://t.co/vDvToYXQfz

Algorithms off the convex path.

www.offconvex.org
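The result the tweet refers to concerns gradient descent on deep linear factorizations. As a hedged toy sketch of the idea (dimensions, initialization scale, step size, and sampling rate are arbitrary choices for illustration, not the paper's setup), matrix completion with a depth-2 linear net looks roughly like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rank-2 ground truth and a mask of observed entries.
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, 2)))
V, _ = np.linalg.qr(rng.standard_normal((n, 2)))
M = U @ V.T
mask = rng.random((n, n)) < 0.6   # ~60% of entries observed

# Depth-2 linear net: parameterize the completion as X = W2 @ W1 and run
# plain gradient descent on the observed-entry squared loss. The small
# initialization is what gives the trajectory its low-rank bias.
W1 = 0.05 * rng.standard_normal((n, n))
W2 = 0.05 * rng.standard_normal((n, n))
lr = 0.1
for _ in range(5000):
    G = (W2 @ W1 - M) * mask   # dL/dX for L = 0.5 * ||mask * (X - M)||^2
    W1, W2 = W1 - lr * (W2.T @ G), W2 - lr * (G @ W1.T)

X = W2 @ W1
observed_err = np.abs((X - M)[mask]).mean()
unobserved_err = np.abs((X - M)[~mask]).mean()
```

The interesting quantity is `unobserved_err`: even though the loss never touches the held-out entries, the trajectory tends toward a low-rank solution that fills them in, which is the implicit regularization the blog post analyzes.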


On Jul 13, 2019
@weballergy shared
RT @chrisdonahuey: Excited to announce our recent work on generating 8-bit music using Transformer! Our LakhNES model uses transfer learning: we pre-train on the heterogeneous Lakh MIDI dataset and fine-tune on NES music 🎶https://t.co/1XrBgzJFz3 📜https://t.co/idRGXU5Vef ⭐️https://t.co/vgHkYkGNEH https://t.co/Bi1C6kwq7G

LakhNES: Generate 8-bit music with machine learning

Generate 8-bit chiptunes with deep learning. Contribute to chrisdonahue/LakhNES development by creating an account on GitHub.

On Jul 14, 2019
@stanfordnlp shared
RT @Tim_Dettmers: My new work with @LukeZettlemoyer on accelerated training of sparse networks from random weights to dense performance levels — no retraining required! Paper: https://t.co/V6I2XtauqG Blog post: https://t.co/sVX5p6SAWL Code: https://t.co/E0iwCHJcGX https://t.co/2UDdhhWZhG

Sparse Networks from Scratch: Faster Training without Losing Performance

This blog post explains the sparse momentum algorithm and how it enables the fast training of sparse networks to dense performance levels — sparse learning.
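The core mechanism behind this family of sparse-training methods can be sketched as a prune-and-regrow step. This is a simplified illustration (the function name, shapes, and prune fraction are invented here), not the exact sparse momentum algorithm from the paper:

```python
import numpy as np

def prune_and_regrow(weights, momentum, mask, prune_frac=0.2):
    """One sparse-training redistribution step in the spirit of
    prune-and-regrow methods: drop the smallest-magnitude active
    weights, then activate positions where the dense momentum is
    largest. A simplified sketch, not the paper's exact procedure."""
    w, m, mom = weights.ravel(), mask.ravel(), momentum.ravel()
    active = np.flatnonzero(m)
    k = max(1, int(prune_frac * active.size))

    # Prune: deactivate the k active weights with the smallest magnitude.
    drop = active[np.argsort(np.abs(w[active]))[:k]]
    m[drop] = False
    w[drop] = 0.0

    # Regrow: activate the k inactive positions with the largest
    # momentum magnitude; new weights start at zero.
    inactive = np.flatnonzero(~m)
    grow = inactive[np.argsort(-np.abs(mom[inactive]))[:k]]
    m[grow] = True
    return weights, mask

rng = np.random.default_rng(0)
mask = rng.random((8, 8)) < 0.3              # ~30% dense layer
weights = rng.standard_normal((8, 8)) * mask
momentum = rng.standard_normal((8, 8))       # dense momentum estimate
before = int(mask.sum())
weights, mask = prune_and_regrow(weights, momentum, mask)
```

Note the step keeps the number of active weights constant, so the overall sparsity budget is preserved while connectivity migrates to where gradients suggest it is most useful.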

On Jul 16, 2019
@antgoldbloom shared
RT @bhutanisanyam1: KaggleNoobs will be hosting an AMA with @fchollet on 28th July 2019, 9am-10:30am PT. Slack invite: https://t.co/BVPAIBoVy7, or send all questions to me and I'll ask them during the AMA. Till then, @fchollet was kind enough to share plenty of great advice: https://t.co/SmkDODXlFY
On Jul 14, 2019
@stanfordnlp shared
RT @Tim_Dettmers: My new work with @LukeZettlemoyer on accelerated training of sparse networks from random weights to dense performance levels — no retraining required! Paper: https://t.co/V6I2XtauqG Blog post: https://t.co/sVX5p6SAWL Code: https://t.co/E0iwCHJcGX https://t.co/2UDdhhWZhG

Sparse Learning Library and Sparse Momentum Resources

Sparse learning library and sparse momentum resources. - TimDettmers/sparse_learning

On Jul 16, 2019
@xamat shared
Really interesting #NLP framework for dialogue systems by @UberEng https://t.co/WzjFbmHkgj

Introducing the Plato Research Dialogue System: A Flexible Conversational AI Platform

The Plato Research Dialogue System enables experts and non-experts alike to quickly build, train, and deploy conversational AI agents.

On Jul 15, 2019
@NVIDIADC shared
RT @NVIDIAAIDev: Much of the work done on #MLPerf is integrated into our framework containers available from #NGC. Get these software optimizations and models today: https://t.co/QpCaO0eWM0 https://t.co/CVXNygr44m

NVIDIA Boosts AI Performance in MLPerf v0.6

In just seven months since MLPerf debuted with version v0.5, NVIDIA has advanced per-epoch performance on MLPerf v0.6 workloads by as much as 5.1x overall.

On Jul 17, 2019
@deliprao shared
I just discovered this 2018 paper and love it! And yes, baselines need more love everywhere. #NLProc https://t.co/ln5bZXyThU https://t.co/zwla9wOukb

Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms

Based upon this understanding, we propose two additional pooling strategies over learned word embeddings: (i) a max-pooling operation for improved interpretability; and (ii) a hierarchical ...
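Part of the paper's point is just how simple these word-embedding baselines are. A minimal sketch of the two basic pooling operations (with a toy vocabulary and random embeddings purely for demonstration; the paper uses pre-trained vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with random embeddings; a real baseline would use
# pre-trained vectors such as GloVe or word2vec.
vocab = {"baselines": 0, "need": 1, "more": 2, "love": 3}
emb = rng.standard_normal((len(vocab), 5))

def swem_max(tokens):
    """Max-pool word embeddings into a fixed-size sentence vector.
    Each output dimension keeps the strongest word response, which is
    what makes the representation easy to interpret."""
    return emb[[vocab[t] for t in tokens]].max(axis=0)

def swem_avg(tokens):
    """Average pooling: the classic simple compositional baseline."""
    return emb[[vocab[t] for t in tokens]].mean(axis=0)

sent = ["baselines", "need", "more", "love"]
v_max, v_avg = swem_max(sent), swem_avg(sent)
```

Despite having no parameters beyond the embeddings themselves, pooled representations like these are the "baselines that need more love" the tweet refers to.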

On Jul 12, 2019
@kastnerkyle shared
RT @roadrunning01: LakhNES: Improving multi-instrumental music generation with cross-domain pre-training pdf: https://t.co/zEpldDkXy4 abs: https://t.co/vDOJJDKEon github: https://t.co/bJswt5CRP1 sound examples: https://t.co/husnxB7CWt https://t.co/GSgTN4pJTV

LakhNES: Improving multi-instrumental music generation with cross-domain pre-training

Chris Donahue, Huanru Henry Mao, Yiting Ethan Li, Garrison W. Cottrell, Julian McAuley. Department of ...

On Jul 12, 2019
@clmt shared
RT @NvidiaAI: At #VBTransform, @WalmartLabs Chief Data Officer, @BillGroves13, explained how his team is working with #NVIDIA to speed up data processing and machine learning using @rapidsai during a fireside chat with @datametrician. Learn more: https://t.co/fBFCoRJyjf https://t.co/afTBvSqsqr

Walmart, NVIDIA Discuss How They’re Working Together to Transform Retail

Walmart’s the biggest retailer on Earth. It’s also one of the most competitive technology companies around. Two of its sharpest tech tools: NVIDIA GPUs and RAPIDS data science software. ...


On Jul 11, 2019
@goodfellow_ian shared
I’m in Fortune’s 40 under 40: https://t.co/a4cZrJVjvJ

Ian Goodfellow

Industry: A.I. As one of the youngest and most respected A.I. researchers in the world, Ian Goodfellow has kept busy pushing the frontiers of deep learning. Having studied under some of the ...

On Jul 15, 2019
@fchollet shared
RT @PyImageSearch: New tutorial!🚀 Learn how to perform Video Classification with #Keras and #DeepLearning 📽️📺 Full tutorial, including #Python code w/ pre-trained model, can be found here: https://t.co/idY0mqotxq 👍 #MachineLearning #ComputerVision #ArtificialIntelligence #AI #DataScience https://t.co/yfSoHZRn2v

Video classification with Keras and Deep Learning

In this tutorial, you will learn how to perform video classification using Keras, Python, and Deep Learning.
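A detail worth knowing about this kind of tutorial: naively applying an image classifier frame by frame makes the predicted video label flicker on noisy frames. A common fix is rolling prediction averaging, sketched here with made-up probabilities (not necessarily the tutorial's exact code):

```python
from collections import deque

import numpy as np

def rolling_average_labels(frame_probs, window=4):
    """Smooth per-frame class probabilities with a rolling mean so the
    video-level label does not flicker on isolated noisy frames."""
    recent = deque(maxlen=window)
    labels = []
    for p in frame_probs:
        recent.append(p)
        labels.append(int(np.argmax(np.mean(list(recent), axis=0))))
    return labels

# Toy 2-class example: frame 3 alone would flip the raw argmax,
# but the rolling average keeps the label stable.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])
raw = [int(np.argmax(p)) for p in probs]
smoothed = rolling_average_labels(probs)
```

The window size trades responsiveness for stability: a larger window suppresses more flicker but reacts more slowly to genuine scene changes.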

On Jul 16, 2019
@Miles_Brundage shared
RT @jjding99: The @FHIOxford is recruiting researchers (various levels of seniority) for up to 11 positions! Areas include: Technical AI safety, Governance of AI, Biosecurity, Transparency/surveillance, & nanotech. More info: https://t.co/Qz8G7OBYQh benefits include the best snack room ever

Join our rapidly growing research teams

The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford.

On Jul 13, 2019
@stanfordnlp shared
“Although opting for industry over academia remains a popular choice, many if not most significant AI achievements have their roots in years of academic work.” Are Commercial Labs Stealing Academia’s AI Thunder? https://t.co/62DbU0lDiP

Are Commercial Labs Stealing Academia’s AI Thunder?

Commercial research labs run by Google Research, DeepMind, and OpenAI are taking central stage in the artificial intelligence era. The…

On Jul 12, 2019
@thinkmariya shared
RT @TeachTheMachine: Why Machine Learning Does Not Have to Be So Hard https://t.co/iB2WJgtnpT

Why Machine Learning Does Not Have to Be So Hard

Technical topics like mathematics, physics, and even computer science are taught using a bottom-up approach. This approach involves laying out the topics in an area of study in a logical ...

On Jul 17, 2019
@RichardSocher shared
@deanabspeaks @Benioff A lot of different efforts. One of them is the Deep Learning Indaba next month: https://t.co/awcUdL3Yw3

Speakers at the 2019 Indaba

Bios for speakers at the Deep Learning Indaba 2019

On Jul 16, 2019
@peteskomoroch shared
RT @l2k: Ajay Uppili Arasanipalai @iyajainfinity wrote a nice post about using CNNs with @fastdotai and @weights_biases to do character recognition on Japanese characters https://t.co/iOpILLmSZx

How to Teach Your Computer Japanese

Learn how to train an image classifier to 97% accuracy on the KMNIST dataset using modern best practices.

On Jul 14, 2019
@MSFTResearch shared
Policy optimization is not a truly adversarial problem; it is largely predictable from past information. @GTrobotics' Ching-An Cheng explains how leveraging this known information to design better algorithms can speed up imitation learning and RL in this Microsoft Research Talk: https://t.co/f3AukWPlTb

Policy Optimization as Predictable Online Learning Problems: Imitation Learning and Beyond

Efficient policy optimization is fundamental to solving real-world reinforcement learning problems, where agent-environment interactions can be costly. In this talk, I will discuss my ...

On Jul 11, 2019
@hmason shared
...and we're thrilled to be sharing new research from Cloudera @FastForwardLabs on applying deep learning for image analysis and transfer learning for natural language processing. Virtual event on July 24th! https://t.co/HJxw9alWaK

Hear about the latest research from Cloudera Fast Forward Labs

Join the Fast Forward Labs team to hear about the latest advancements including tips you can use to get a headstart as a hands-on applied Machine Learning practitioner.