Neural network, Plinko, Das Model, The Vectors, Input/output, Reinforcement learning

On Jan 11, 2021
@nathanbenaich shared
RT @simonbatzner: We're excited to introduce NequIP, an equivariant Machine Learning Interatomic Potential that not only obtains SOTA on MD-17, but also outperforms existing potentials with up to 1000x fewer data! w/ @tesssmidt @Materials_Intel @bkoz37 #compchem👇🧵 1/N https://t.co/5njHPLCcyD https://t.co/mnUbxqYgCc

SE(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials. Simon Batzner (1), Tess E. Smidt (2), Lixin Sun (1), Jonathan P. Mailoa (3), Mordechai Kornbluth (3), Nicola Molinari (1), and Boris Kozinsky (1,3). (1) Harvard University, (2) Lawrence Berkeley National Laboratory, (3) Robert ...

arxiv.org
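
NequIP's headline property, SE(3) equivariance, means that rotating the input atomic positions rotates the predicted forces by exactly the same rotation. A minimal NumPy sketch that checks this property, using a toy pairwise-spring energy as a stand-in for the learned network (everything here is illustrative, not NequIP's code):

```python
import numpy as np

def energy(pos):
    # Toy rotation-invariant energy: pairwise springs, E = sum_{i<j} (|r_i - r_j| - 1)^2.
    # NequIP replaces this hand-written form with a learned equivariant graph network.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    return ((dist[iu] - 1.0) ** 2).sum()

def forces(pos, eps=1e-6):
    # Forces are minus the gradient of the energy (central finite differences).
    F = np.zeros_like(pos)
    for i in range(pos.shape[0]):
        for j in range(pos.shape[1]):
            p = pos.copy(); p[i, j] += eps
            m = pos.copy(); m[i, j] -= eps
            F[i, j] = -(energy(p) - energy(m)) / (2 * eps)
    return F

rng = np.random.default_rng(0)
pos = rng.normal(size=(4, 3))  # four toy atoms in 3D

# Build a random proper rotation matrix via QR decomposition.
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R) < 0:
    R[:, 0] = -R[:, 0]

# Equivariance check: F(pos @ R.T) should equal F(pos) @ R.T.
print(np.allclose(forces(pos @ R.T), forces(pos) @ R.T, atol=1e-5))  # True
```

Because the energy depends only on interatomic distances, its gradient (the forces) rotates with the system; an equivariant network gives the same guarantee by construction rather than by hand.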

In this work, we additionally include the SAAO density matrix, P, the orbital centroid distance matrix, D, the core Hamiltonian matrix, H, and the overlap matrix, S. B. Approximated ...
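
For context, the matrices listed here are operator matrices in the symmetry-adapted atomic-orbital (SAAO) basis used as input features for a machine-learned model; a natural reading is one stacked feature tensor per molecule, with diagonal entries as per-orbital (node) features and off-diagonal entries as pairwise (edge) features. A toy NumPy sketch of that stacking (sizes and random values are purely illustrative, not the paper's pipeline):

```python
import numpy as np

n = 8  # illustrative number of SAAO orbitals
rng = np.random.default_rng(0)
sym = lambda A: (A + A.T) / 2  # the listed operators are symmetric matrices

# Random stand-ins for the density matrix P, centroid distance matrix D,
# core Hamiltonian H, and overlap matrix S named in the snippet.
P, D, H, S = (sym(rng.normal(size=(n, n))) for _ in range(4))

features = np.stack([P, D, H, S])                        # shape (4, n, n)
node_feats = np.diagonal(features, axis1=1, axis2=2).T   # (n, 4): one vector per orbital
print(features.shape, node_feats.shape)
```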

TuckER: Tensor Factorization for Knowledge Graph Completion

In summary, key contributions of this paper are: • proposing TuckER, a new linear model for link prediction on knowledge graphs, that is simple, expressive and achieves state-of-the-art ...
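
The model being summarized scores a knowledge-graph triple (subject, relation, object) by fully contracting a shared learned core tensor with the three embeddings, i.e. a Tucker decomposition of the graph's binary adjacency tensor. A minimal NumPy sketch of that scoring function (toy dimensions and random values; the paper trains these as parameters):

```python
import numpy as np

d_e, d_r = 8, 4  # toy entity / relation embedding sizes (the paper uses far larger ones)
rng = np.random.default_rng(0)

W = rng.normal(size=(d_e, d_r, d_e)) * 0.1  # shared core tensor of the Tucker decomposition
e_s = rng.normal(size=d_e)                  # subject entity embedding
w_r = rng.normal(size=d_r)                  # relation embedding
e_o = rng.normal(size=d_e)                  # object entity embedding

# TuckER's scoring function: phi(e_s, w_r, e_o) = W x_1 e_s x_2 w_r x_3 e_o,
# a full contraction of the core tensor along all three modes.
score = np.einsum('ijk,i,j,k->', W, e_s, w_r, e_o)

# A logistic sigmoid turns the score into the probability that the triple holds.
prob = 1.0 / (1.0 + np.exp(-score))
print(float(prob))
```

Because every relation shares the core tensor W, knowledge is pooled across relations, which is one reason the model stays compact while remaining fully expressive.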

Artificial Intelligence and Deep Learning For the Extremely Confused

The watershed moment in Deep Learning is typically cited as 2012's AlexNet, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, a state-of-the-art GPU-accelerated Deep Learning network that won that ...

How Transformers work in deep learning and NLP: an intuitive introduction

An intuitive understanding of Transformers and how they are used in Machine Translation. After analyzing all subcomponents one by one, such as self-attention and positional encodings, we ...
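
Since the summary name-checks self-attention, here is a minimal sketch of scaled dot-product self-attention, the Transformer's core operation (shapes and random weights are illustrative only):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv        # project inputs to queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)         # pairwise attention logits, scaled by sqrt(d_k)
    return softmax(scores, axis=-1) @ V     # attention-weighted sum of values

# A 5-token sequence with model width 16.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 16)
```

Positional encodings, the other subcomponent the summary mentions, are simply added to X before this step so the otherwise permutation-invariant attention can see token order.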

RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr

The experiments show that the use of RIFLE significantly improves deep transfer learning accuracy on a wide range of datasets, outperforming known tricks for a similar purpose, ...
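
The trick in the title is exactly what it says: during fine-tuning, periodically re-initialize the final fully-connected layer so the deeper layers keep receiving large, informative gradients instead of coasting once the head has nearly converged. A minimal PyTorch sketch under assumed hyperparameters (the ResNet-18 backbone, period, and epoch count are illustrative, not the paper's exact setup):

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a fresh task head (assume 10 target classes).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

N_EPOCHS = 30
REINIT_PERIOD = 10  # illustrative: re-initialize the head every 10 epochs

for epoch in range(1, N_EPOCHS + 1):
    # ... one ordinary fine-tuning epoch over the target dataset goes here ...
    if epoch % REINIT_PERIOD == 0 and epoch < N_EPOCHS:
        # The RIFLE step: throw away the output layer's weights and train it from
        # scratch again, forcing fresh gradient signal into the deeper layers.
        # The final period is left alone so the head ends up fully trained.
        nn.init.kaiming_normal_(model.fc.weight)
        nn.init.zeros_(model.fc.bias)
```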

ECCV 2020: Some Highlights

The 2020 European Conference on Computer Vision took place online from 23 to 28 August and consisted of 1360 papers, divided into 104 orals, 160 spotlights, and the remaining 1096 papers as ...