Artificial Intelligence

AI Research News

Discover the latest AI research and find out how AI, machine learning, and advanced algorithms affect our lives, our jobs, and the economy, through expert articles that discuss the potential, limits, and consequences of AI.

Top news of the week: 10.08.2022.

Data
Machine learning
Scientific method
Learning
Reinforcement learning
Neural network

@weballergy shared
On Aug 5, 2022
RT @jon_barron: We've finally released code for three of our CVPR2022 papers: mip-NeRF 360, Ref-NeRF, and RawNeRF. Instead of three separate releases, we've done something a little unusual and merged them into a single repo. Excited to see what people do with this! https://t.co/eEfAAHOQni
MultiNeRF: A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF

google-research/multinerf on GitHub: a code release for Mip-NeRF 360, Ref-NeRF, and RawNeRF.

@weballergy shared
On Aug 8, 2022
RT @MetaAI: Here we introduce a simple new unsupervised approach to data ranking that’s competitive with previous expensive supervised approaches. Our work demonstrates that simply collecting more uncurated data is highly inefficient and can be dramatically improved. https://t.co/n8Em9QhXpb
Beyond neural scaling laws: beating power law scaling via data pruning

Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning. ...
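The power-law form the abstract refers to, error falling as E(N) = a·N^(−b) in training set size N, appears as a straight line on log-log axes. A minimal NumPy sketch of that relationship; the constants a and b below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical power-law scaling: error E(N) = a * N**(-b).
a, b = 2.0, 0.35                      # invented constants, for illustration only
N = np.array([1e3, 1e4, 1e5, 1e6])    # training set sizes
E = a * N ** (-b)                     # corresponding errors

# On log-log axes a power law is linear: log E = log a - b * log N,
# so a degree-1 fit of log E against log N recovers the exponent -b.
slope, intercept = np.polyfit(np.log(N), np.log(E), 1)
assert np.isclose(slope, -b)
assert np.isclose(np.exp(intercept), a)
```

The paper's point is that pruning data well can beat this baseline curve, i.e. achieve a faster-than-power-law decrease in error.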

@ericjang11 shared
On Aug 8, 2022
RT @pbloemesquire: New blogpost! Everything you ever wanted to know about the Singular Value Decomposition and a lot more. Including: - The relation to PCA - The matrix rank (decomposition) - The pseudo-inverse - The Eckart-Young-Mirsky theorem https://t.co/grIVwBJZEA https://t.co/1YgON8oajz
The Singular Value Decomposition

We can state this as an optimization problem: find the input 𝐱 for which the resulting vector 𝐌𝐱 has maximal magnitude, subject to the constraint that …
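The constraint is cut off in the snippet above; in the standard formulation it is ‖𝐱‖ = 1 (an assumption here). Under that constraint, the maximizer of ‖𝐌𝐱‖ is the first right singular vector of 𝐌, and the maximum value is the largest singular value. A minimal NumPy sketch of that claim:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))

# Thin SVD: M = U @ diag(S) @ Vt, with singular values S sorted descending.
U, S, Vt = np.linalg.svd(M, full_matrices=False)

# The first right singular vector v1 attains the maximum of ||M x||
# over unit vectors x, and that maximum equals S[0].
v1 = Vt[0]
assert np.isclose(np.linalg.norm(M @ v1), S[0])

# No random unit vector should exceed it (S[0] is the operator 2-norm).
for _ in range(1000):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)
    assert np.linalg.norm(M @ x) <= S[0] + 1e-9
```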

@michael_nielsen shared
On Aug 6, 2022
Enjoyed this, on the value and current limits of AlphaFold: https://t.co/brsu7HF46k (The headline is a bit clickbaity.)
Why AlphaFold won’t revolutionise drug discovery

Protein structure prediction is a hard problem, but even harder ones remain

@stanfordnlp shared
On Aug 6, 2022
RT @i_amanchadha: 📚 Natural Language Processing from Stanford University: Distilled Notes 👉🏼 https://t.co/e41vRgZIfT - NLP is one of the most popular #AI domains, widely used from language translation to auto-complete to voice assistants. - Presenting notes from Stanf…https://t.co/q9pPB2haWf
Aman Chadha’s Post

📚 Natural Language Processing from Stanford University: Distilled Notes 👉🏼 http://nlp.aman.ai - NLP is one of the most popular #AI domains, widely...

@weballergy shared
On Aug 8, 2022
RT @hardmaru: Multimodal Learning with Transformers: A Survey. A survey of Transformer methods oriented at multimodal data. Includes a background on multimodal learning, the Transformer ecosystem, and a review of the vanilla Transformer, the Vision Transformer, and multimodal Transformers. https://t.co/KmRrAeH24m https://t.co/0z8xTXDUnT
Multimodal Learning with Transformers: A Survey

Transformers are promising neural network learners and have achieved great success in various machine learning tasks. Thanks to the recent prevalence of multimodal applications and big data, ...

@hugo_larochelle shared
On Aug 8, 2022
RT @ynd: 📣 Excited to announce the First Workshop on Interpolation Regularizers such as Mixup at @NeurIPSConf. Paper submission deadline: September 22, 2022. Speakers: @chelseabfinn, @prfsanjeevarora, Kenji Kawaguchi, Youssef Mroueh, Alex Lamb. https://t.co/wf48y2uECM 🧵👇
First Workshop on Interpolation Regularizers and Beyond

Interpolation regularizers are an increasingly popular approach to regularize deep models. For example, the mixup data augmentation method constructs synthetic examples by linearly ...
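The truncated description above refers to mixup's linear interpolation of example pairs. A minimal NumPy sketch, assuming the usual Beta(α, α) mixing coefficient and one-hot labels; the function name and array shapes are illustrative, not from the workshop page:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return convex combinations of a batch's examples and labels.

    x: (batch, features) inputs; y: (batch, classes) one-hot labels.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))     # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix

# Example: mix a 4-example batch of 3-feature inputs with 2-class labels.
x = np.arange(12, dtype=float).reshape(4, 3)
y = np.eye(2)[[0, 1, 0, 1]]
x_mix, y_mix = mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng(0))
assert x_mix.shape == x.shape and y_mix.shape == y.shape
# Mixed labels are convex combinations of one-hots, so they sum to 1.
assert np.allclose(y_mix.sum(axis=1), 1.0)
```

Training then proceeds on the mixed pairs (x_mix, y_mix) instead of the raw batch, which regularizes the model toward linear behavior between examples.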

@AndrewYNg shared
On Aug 5, 2022
Reinforcement learning (RL) algorithms are quite finicky -- sensitive to picking hard-to-tune hyperparameters -- compared to supervised deep learning, which has become much more robust the past decade. Will RL progress similarly? My thoughts in The Batch: https://t.co/WVDFYjFXWT
Subscribe to The Batch

In this issue of The Batch: Autonomous aircraft in the UK are getting their own superhighway | Worldwide collaboration produced the biggest open source language model