Learn about use cases and applied artificial intelligence projects with an impact in the real world.


In the last 48 hours, 13 influencers, including Jaguilam2003 and Miles_Brundage, have been discussing topics such as #AI, #MachineLearning, and #BigData.



Trends


Top hashtags

#AI
#MachineLearning
#BigData

Top influencers

Jaguilam2003
Miles_Brundage
dmonett
DeepMindAI
NathanBenaich
Sam_L_Shead
andyjayhawk
erichorvitz
jack

Top sources

arxiv.org
datasciencecentral.com
cloudcomputing.sys-con.com
deepmind.com
iiot-world.com
info.axiossystems.com
knightcolumbia.org
people.idsia.ch
sciencedirect.com

News

The 2019 AI Index report is here!

On Dec 12, 2019
@erichorvitz shared
Just released: 2019 AI Index https://t.co/rK5NrEs9cp. #AIIndex was created as a project of the One Hundred Year Study on AI @Stanford https://t.co/a3nxxMYx3A @IndexingAI @StanfordHAI #AIIndex2019 #AI100 @mlittmancs @compcomcon @PartnershipAI @yshoham @AiCommission

The Economy covers three specific topics: jobs, investment, and corporate activity. Public Perception covers public perception of central banks, global governments, and the corporate world. Societal Considerations examines ethical challenges, global news on AI ethics, and AI applications ...

hai.stanford.edu


On Dec 12, 2019
@prostheticknowl shared
RT @xsteenbrugge: Whoa, StyleGANv2 is out! - Significantly better samples (better FID scores & reduced artifacts) - No more progressive growing - Improved Style-mixing - Smoother interpolations (extra regularization) - Faster training Paper: https://t.co/D6msTHwpIp Github: https://t.co/kw6X8NvjXC https://t.co/5Cnlg9j91V

StyleGAN2 — Official TensorFlow Implementation

StyleGAN2 - Official TensorFlow Implementation. Contribute to NVlabs/stylegan2 development by creating an account on GitHub.

On Dec 12, 2019
@AINowInstitute shared
RT @katecrawford: The AI Now 2019 Report is out! This is an enormous project from a team of researchers who survey what's happening in AI: in a social, political, and ecological frame. Want strong recommendations for industry, government & research? Start here: 📌 https://t.co/k0z1KvsPST

AI Now 2019 Report

Kate Crawford, AI Now Institute, New York University, Microsoft Research; Roel Dobbe, AI Now Institute, New York University; Theodora Dryer, AI Now Institute, New York University; Genevieve ...

On Dec 11, 2019
@AINowInstitute shared
RT @katecrawford: Excited to share our new publication: AI SYSTEMS AS STATE ACTORS. In this @ColumLRev essay, @Lawgeek & I argue that AI systems that directly influence government decisions should be liable for any constitutional harms they cause. https://t.co/q3v0hVmk08

Microsoft Word - C&S v4.3

See, e.g., Idaho Code § 19-1910 (2019) (establishing that all pretrial risk assessment tools should be transparent and open to public assessment); DJ Pangburn, How to Lift the Veil off ...

On Dec 11, 2019
@dmonett shared
AFAIK, not true 👉: "In 1986 Geoffrey Hinton published a landmark paper introducing backpropagation, a new method for training neural networks." BP was invented by Seppo Linnainmaa 16 years earlier, in 1970. See Section 0 in https://t.co/XcRfM9BnYP for more. #AI @SchmidhuberAI https://t.co/t3Wx2s1ucq

Deep Learning: Our Miraculous Year 1990-1991

First Very Deep NNs, Based on Unsupervised Pre-Training (1991). My first idea to overcome the Deep Learning Problem mentioned above was to facilitate supervised learning in deep RNNs by ...

On Dec 11, 2019
@erichorvitz shared
Latest #MSRPodcast: Besmira @besanushi shares thoughts on pathways ahead on #AI in the open world, including opportunities to create reliable systems & enhance human-AI collaboration: https://t.co/PRpSKU51xR @compcomcon @ACM_FCA @ieee #hcomp #AetherCommittee @MSFTResearch @ETH_en

Adaptive systems, machine learning and collaborative AI with Dr. Besmira Nushi

Episode 102 | December 11, 2019 - With all the buzz surrounding AI, it can be tempting to envision it as a stand-alone entity that optimizes for accuracy and displaces human capabilities. ...

On Dec 12, 2019
@Miles_Brundage shared
RT @ram_ssk: 📢📢With @BKCHarvard, @msftsecurity released a battle tested taxonomy of how ML systems fail, by attackers or inherent design, for engineers and policy makers Blog: https://t.co/uA3VbVvPor Paper: https://t.co/dkc5efr7cK With @d_obrien @KendraSerra @salome_viljoen_ @jsnover 1/ https://t.co/5HO1PDafa2

Solving the challenge of securing AI and machine learning systems

Today, Microsoft is publishing a series of materials we believe will contribute to solving a major challenge to securing artificial intelligence and machine learning systems. The ...

On Dec 12, 2019
@Miles_Brundage shared
RT @timhwang: in @foreignpolicy today talking about the “artificial intelligence arms race”, which isn’t actually about artificial intelligence doesn’t really involve arms and isn’t a race https://t.co/962HleMaWc

Artificial Intelligence Isn’t an Arms Race

And by treating it like one, the United States could miss out on its real potential.

On Dec 12, 2019
@zacharylipton shared
RT @steverab: Interested in detecting dataset shift in high dimensional data and how to characterize these shifts? Consider coming by poster #54 during the morning session @NeurIPSConf. Work with @guennemann and @zacharylipton. Poster: https://t.co/hi2G0wzNYH Paper: https://t.co/dU1Gi6saiy

Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift

Label Classifiers (BBSDs ⊳ and BBSDh ⊲): Motivated by recent results achieved by black box shift detection (BBSD) [29], we also propose to use the outputs of a (deep network) label ...
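The idea in the excerpt above — detecting dataset shift from the outputs of a black-box label classifier rather than from raw inputs — can be sketched in a few lines. This is a minimal illustration, not the paper's own code: the function name `bbsd_soft`, the synthetic Dirichlet data, and the Bonferroni-corrected per-dimension KS test are assumptions chosen to mirror the soft-output BBSD variant described in the paper.

```python
# Minimal sketch of black-box shift detection (BBSD) on softmax outputs,
# assuming scores from any pretrained label classifier are already in hand.
import numpy as np
from scipy.stats import ks_2samp

def bbsd_soft(scores_source, scores_target, alpha=0.05):
    """Two-sample KS test per output dimension, Bonferroni-corrected.

    scores_source, scores_target: (n, k) arrays of softmax scores.
    Returns True if a distribution shift is detected.
    """
    k = scores_source.shape[1]
    p_values = [ks_2samp(scores_source[:, j], scores_target[:, j]).pvalue
                for j in range(k)]
    # Flag a shift if any per-dimension p-value falls below alpha / k.
    return bool(min(p_values) < alpha / k)

# Synthetic stand-in for classifier outputs: Dirichlet-distributed scores.
rng = np.random.default_rng(0)
source = rng.dirichlet([1, 1, 1], size=500)
shifted = rng.dirichlet([5, 1, 1], size=500)  # skewed toward class 0
print(bbsd_soft(source, shifted))
```

On clearly shifted data like this, the test should fire; in practice the classifier's low-dimensional outputs make the multiple-testing burden far lighter than testing the raw inputs dimension by dimension.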

On Dec 10, 2019
@Jaguilam2003 shared
RT @KirkDBorne: A Framework for #MachineLearning: https://t.co/DN9SjWCEMb ——————— #BigData #Analytics #DataScience #DataEngineering #AI #AIstrategy #DataStrategy #AnalyticsStrategy #abdsc ——————— Source for graphic: https://t.co/56NVOzoGBE https://t.co/SDrR1VDLo8

Materials discovery and design using machine learning

The screening of novel materials with good performance and the modelling of quantitative structure-activity relationships (QSARs), among other issues,…


On Dec 11, 2019
@kaliouby shared
When @Forbes asked me what’s next in #AI... I said #data synthesis. In “120 #AI Predictions for 2020”, I mention the need to augment and scale data, not to eliminate the need for real-world data collection but to complement it. #EmotionAI @GilPress https://t.co/1vQCD7abYo

120 AI Predictions For 2020

120 predictions about the state of AI in 2020

On Dec 11, 2019
@Miles_Brundage shared
RT @m_c_elish: Hot off the press: Our #FATML case study on AI in healthcare. Tl;dr: developing ML systems is a ⚡️sociotechnical⚡️problem, people and institutions shape use, explainability is not the only means to accountability + much more! https://t.co/l33QLVNbRJ @MarkSendak @JFutoma #ML4HC

Microsoft Word - beyond_interpretability_13.docx

“The Human Body is a Black Box”: Supporting Clinical Decision-Making with Deep Learning. Mark Sendak, Duke Institute for Health Innovation, Durham, NC, USA, [email protected]; Joseph Futoma† ...

On Dec 11, 2019
@thinkmariya shared
The importance of AI ethics we currently have brings us to more specific and applicable topics in this research area. We've summarized 12 important AI ethics research papers from 2019 representative of this trend. #AI #ethics https://t.co/RoGX2ZkkFU

Top 12 AI Ethics Research Papers Introduced In 2019

The research papers introduced in 2019 define comprehensive terminology for communicating about ML fairness, go from general AI principles to specific tensions that arise when implementing ...

On Dec 11, 2019
@AINowInstitute shared
RT @Lawgeek: Excited to share @katecrawford and my new @ColumLRev essay, "AI Systems as State Actors" where we argue vendors of AI systems that directly influence government decisions should be considered state actors for purposes of constitutional liability lawsuits. https://t.co/DfzggcnOs9

AI SYSTEMS AS STATE ACTORS

The full text of this essay may be found by clicking the PDF link to the left. Introduction: Advocates and experts are increasingly concerned about the rapid introduction of artificial ...

On Dec 11, 2019
@salesforce shared
RT @TechRepublic: Salesforce's Bill Patterson explains the company's three areas of focus around voice tech for conversations with customers or employees. https://t.co/N4UmufJCUJ

How tech augments the human side of customer service

Salesforce's Bill Patterson explains the company's three areas of focus around voice tech for conversations with customers or employees.

On Dec 12, 2019
@Jaguilam2003 shared
RT @nordicinst: A call to action on artificial intelligence. #aiethics #ArtificialIntelligence #AI https://t.co/vmfrWSRndc

A call to action on artificial intelligence

Preparing employees for jobs of the future will require leaders in business, government, and higher education to work together. That was a major…

On Dec 12, 2019
@salesforce shared
Human-centric service is… ✔️personal ✔️intelligent ✔️trusted ✔️inclusive service at scale. Here are the four pillars everyone should focus on: https://t.co/bYW3I2xRFi https://t.co/baLBsvUWPd


Human-centric service is personal, intelligent, trusted, and inclusive service at scale. From your customers to your employees, learn the four pillars of human-centric service everyone ...

On Dec 12, 2019
@guardiantech shared
AI expert calls for end to UK use of ‘racially biased’ algorithms https://t.co/lgtJO6gP53

AI expert calls for end to UK use of ‘racially biased’ algorithms

Prof Noel Sharkey says systems so infected with biases they cannot be trusted

On Dec 10, 2019
@Jaguilam2003 shared
RT @KirkDBorne: A Framework for #MachineLearning: https://t.co/DN9SjWCEMb ——————— #BigData #Analytics #DataScience #DataEngineering #AI #AIstrategy #DataStrategy #AnalyticsStrategy #abdsc ——————— Source for graphic: https://t.co/56NVOzoGBE https://t.co/SDrR1VDLo8

An ML Framework

A machine learning solution can be broadly divided into 3 parts. A typical ML exercise would involve experimentation and iteration of all the 3 parts together…

On Dec 12, 2019
@davegershgorn shared
RT @drewharwell: Struggling to "increase diversity" in your ads? Use AI to generate fake photos of people from "different ethnic backgrounds" (without having to actually work with those people) https://t.co/HID76iLt3j https://t.co/iuDvwPVVVe

Generate worry-free, diverse models on-demand using AI

The most practical way to get high-quality generated faces for commercial content. • Fully tagged and searchable faces • Forget about sourcing, model releases, and hassles! • AI-flaw ...

On Dec 11, 2019
@NathanBenaich shared
RT @graphcoreai: Huge effort from our customer Citadel with their newly published Arxiv paper 'Dissecting the Graphcore IPU Architecture via Microbenchmarking'. It is well worth a read. Big thanks from Graphcore! @citsecurities https://t.co/CgNcqFFlHI

Dissecting the Graphcore IPU Architecture via Microbenchmarking

This report focuses on the architecture and performance of the Intelligence Processing Unit (IPU), a novel, massively parallel platform recently introduced by Graphcore and aimed at ...