Artificial intelligence continues to evolve at record speed, and now all eyes are on the most powerful technology of them all: language artificial intelligence. As language technologies become ingrained in our everyday lives, an ethical debate has ensued. In the era of misinformation, large language models (LLMs) seem likely to contribute to the spread of "fake news". On top of that, people often don't know who is behind the information they receive: was it written by a piece of software or by a real person?

Studies have revealed that these language models can produce sexist, racist, and even abusive output, going so far as to encourage self-harm and sexual abuse. They also reproduce sweeping stereotypes, such as assuming that doctors are always men and nurses are always women. The real question is whether there is a way to reap the benefits of this new technology while avoiding its harmful consequences. Can we still manage that, or is it already too late?

Covering up the Truth About Language Artificial Intelligence

Google made headlines last year when it fired its Ethical AI co-lead Timnit Gebru for refusing to retract a paper she wrote discussing the very real dangers of language AI. Considering Google's reputation for research censorship, and its plans to integrate a new AI system, LaMDA, into features such as the search portal and voice assistant, we should all be concerned.

Language artificial intelligence has seemingly limitless possibilities, and that is only the tip of the iceberg. What would happen if such dangerous technology were deployed across the features of a global company, or multiple global companies for that matter? Google is not alone in adopting this technology and raising questions about AI ethics. Microsoft, Facebook, and OpenAI, among others, have done the same, which led to some very sinister results in the OpenAI-powered game AI Dungeon. Check out the article we wrote about it here.

Deploying LLMs Without Harmful Consequences

More than 500 researchers from around the world have joined forces in the BigScience project, led by Hugging Face, to answer a vital question: "How and when should LLMs be developed and deployed to reap their benefits without their harmful consequences?" While it is unlikely we can stop the exponential growth of LLMs, we can try to ensure the technology evolves and is deployed in a more ethical and beneficial way.

Read the original article here.

Take a look at more top articles, trends, and experts by signing up for our newsletter. Choose the topics that interest you most and get the latest news delivered with ease: https://essentials.news/ai/my-essentials