Artificial Intelligence

AI Research News

Discover the latest AI research and find out how AI, machine learning, and advanced algorithms affect our lives, our jobs, and the economy, through expert articles that discuss the potential, limits, and consequences of AI.

Top news of the week: 04.05.2022.

#automation
#AI
#CVPR
#Kaggle
#CLOVA
#NAACL2022
#NAVER

Research

@jeremyphoward shared
On Apr 27, 2022
RT @LuminideInc: New blog post! We show how Luminide's #automation was used to achieve higher accuracy #AI models and place Top 1% in a #CVPR #Kaggle competition: https://t.co/01Ee4kkCaY https://t.co/VgmAu6Bs1k
Better Automation for Higher Accuracy AI Models

At Luminide, we are working hard to build a platform that will revolutionize AI development. In this post, we are excited to share some details with you.

@ylecun shared
On May 2, 2022
RT @JoramKeijser: Efficient optimization in e.g. deep networks uses gradients, yet biological evolution uses random mutations - at least according to the textbooks. Here I review classic evidence for, and recent evidence against, random mutations. Blog: https://t.co/BlosxnGimg Thread: 👇1/n https://t.co/QZgoVIBYzP
Does evolution estimate gradients?

A gradient-based (left) and a gradient-free (right) algorithm that minimise the same noisy quadratic loss function. Color indicates performance.
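
The comparison in that figure is easy to make concrete. Below is a minimal sketch, not the post's code, contrasting plain gradient descent with a simple (1+1) evolution strategy on a noisy quadratic loss; all constants and dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_loss(x):
    """Quadratic bowl f(x) = ||x||^2 plus observation noise."""
    return float(x @ x) + rng.normal(scale=0.1)

def gradient_descent(x, lr=0.1, steps=200):
    """Gradient-based: follow the exact gradient of ||x||^2."""
    for _ in range(steps):
        x = x - lr * (2 * x)
    return x

def random_mutation(x, sigma=0.1, steps=200):
    """Gradient-free (1+1) ES: keep a random mutation only if it scores better."""
    best = noisy_loss(x)
    for _ in range(steps):
        candidate = x + rng.normal(scale=sigma, size=x.shape)
        score = noisy_loss(candidate)
        if score < best:
            x, best = candidate, score
    return x

x0 = rng.normal(size=10)
print("gradient-based:", noisy_loss(gradient_descent(x0.copy())))
print("gradient-free: ", noisy_loss(random_mutation(x0.copy())))
```

On this toy problem the gradient-based run converges in a handful of steps while the mutation-based run needs many more evaluations, which is the efficiency gap the post interrogates.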

@stanfordnlp shared
On Apr 27, 2022
For tomorrow's Stanford NLP Seminar, we're excited to host @EthanJPerez who will be talking about aligning language models with human preferences. Join us over zoom tomorrow at 11 am PT. Registration: https://t.co/adrPniZRV7; Abstract: https://t.co/0LA7LD51fM https://t.co/TKrVZzmnDN
The Stanford Natural Language Processing Group

Aligning Language Models with Human Preferences. Ethan Perez, NYU. Venue: Zoom (link hidden). Abstract: Self-supervised learning objectives are highly effective at pretraining …

@tqchenml shared
On Apr 29, 2022
RT @ABridgwater: The low-no-code series - OctoML: Why ML is the next frontier https://t.co/MlPdpxho4a via @computerweekly @OctoML https://t.co/iZVDEW6ABM
The low-no-code series - OctoML: Why ML is the next frontier

This is a guest post for the Computer Weekly Developer Network written by Jason Knight in his capacity as co-founder & CPO of OctoML – a company known for its platform and solutions ...

@dnouri shared
On May 2, 2022
RT @HochreiterSepp: ArXiv https://t.co/aKlrNe0pLT: Visual language model for few-shot learning (Flamingo) for captioning, visual dialogue, visual question answering w. few inputs/outputs. Leverages large pretrained vision-only and language-only models (70B Chinchilla LM). Impressive results. https://t.co/ueNwKKNNou
Flamingo: a Visual Language Model for Few-Shot Learning

Building models that can be rapidly adapted to numerous tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce ...
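
As a rough illustration of the recipe described in the abstract, the sketch below freezes stand-in "pretrained" vision and language modules and trains only a thin, gated cross-attention bridge that lets text tokens attend to image features. This is a toy loosely inspired by the paper's gated cross-attention idea, not DeepMind's implementation; every module, dimension, and name here is a placeholder.

```python
import torch
import torch.nn as nn

D = 64  # shared hidden size (placeholder)

class GatedCrossAttention(nn.Module):
    """Trainable bridge: text tokens attend to frozen image features."""
    def __init__(self, d):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: starts as identity

    def forward(self, text, image):
        attended, _ = self.attn(query=text, key=image, value=image)
        return text + torch.tanh(self.gate) * attended

# Stand-ins for large pretrained models, kept *frozen*.
vision_encoder = nn.Linear(32, D).requires_grad_(False)
language_model = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True).requires_grad_(False)
bridge = GatedCrossAttention(D)  # the only trainable part

image_feats = vision_encoder(torch.randn(1, 16, 32))  # 16 image patch features
text_tokens = torch.randn(1, 8, D)                    # 8 text token embeddings
out = language_model(bridge(text_tokens, image_feats))
print(out.shape)  # torch.Size([1, 8, 64])
```

Keeping the big backbones frozen and training only the bridge is what lets such a model reuse the pretrained 70B language model without catastrophic forgetting.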

@stanfordnlp shared
On May 2, 2022
RT @BulwarkOnline: Technological development always outpaces ethical reflection leaving society exposed to dangers— if not from bad actors, perhaps from immature ones. https://t.co/xi1RkApUyi
How AI Is Being Transformed by ‘Foundation Models’

And the need for ethical self-governance among AI researchers.

@jeremyphoward shared
On Apr 30, 2022
I'm a bit late to the party, but I just noticed that @dcpage3 has been writing an awesome series on Vision Transformers over the last few years: https://t.co/XbeCtU3ue7
Vision Transformers 1: Low Earth Orbit Satellites

At Myrtle.ai, we have been investigating whether Vision Transformers would make a good candidate algorithm for deployment on a Low Earth Orbit (LEO) satellite system. LEO is a commonly used ...
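
For readers new to the architecture the series evaluates, a Vision Transformer's front end can be sketched in a few lines: the image is split into fixed-size patches, each patch is linearly embedded (a strided convolution does both at once), and the resulting tokens feed a standard transformer encoder. The sizes below are illustrative, not Myrtle.ai's configuration.

```python
import torch
import torch.nn as nn

patch, d_model = 16, 128
# A non-overlapping conv performs "split into patches + linear projection" in one op.
to_patches = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
    num_layers=2,
)

img = torch.randn(1, 3, 224, 224)                    # one RGB image
tokens = to_patches(img).flatten(2).transpose(1, 2)  # (1, 196, 128): 14x14 patches
print(encoder(tokens).shape)                         # torch.Size([1, 196, 128])
```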

@kchonyc shared
On May 3, 2022
RT @drafity89: Have you ever wondered what the effect of pre-training corpora 📚 on the in-context learning ability 😯 of large LMs is? Below is a quick summary of our work in #NAACL2022 👇 - from #NAVER #CLOVA w/ @kchonyc - blog post: https://t.co/VdSPEPLV6Y (1/5)
On the effect of pre-training corpora on in-context learning by a large-scale language model.

We investigated the effect of the source and size of pre-training corpora on in-context few-shot and zero-shot learning in HyperCLOVA, NAVER's Korean GPT-3-style large language model.
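
To make the in-context few-shot setup concrete: the model receives k labeled examples inside the prompt and completes the final query, with no weight updates. A minimal sketch follows; the sentiment examples and the commented-out generate call are placeholders, not the HyperCLOVA API.

```python
def few_shot_prompt(examples, query):
    """Format k (input, label) pairs followed by the unanswered query."""
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [("great movie", "positive"), ("waste of time", "negative")]
print(few_shot_prompt(examples, "surprisingly good"))
# The prompt is then sent to the frozen model, e.g.:
# completion = model.generate(few_shot_prompt(examples, "surprisingly good"))
```

The paper's question is how the source and size of the pre-training corpus change how well this completion behaves, both with k examples (few-shot) and with none (zero-shot).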