
Artificial intelligence posing a threat to existence of humanity, experts warn

Updated on: 31 May, 2023 03:13 PM IST  |  New Delhi
IANS |

In March this year, several top entrepreneurs and AI researchers, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, wrote an open letter asking all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months



Top researchers, experts and CEOs (including Sam Altman of OpenAI) have warned that artificial intelligence could lead to the extinction of humanity. In a 22-word statement, they said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."


The statement was published by the US-based nonprofit Center for AI Safety and was co-signed by Google DeepMind CEO Demis Hassabis as well as Geoffrey Hinton and Yoshua Bengio, two of the three AI researchers who won the 2018 Turing Award for their work on AI.


"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks," the Center said.


The statement "aims to overcome this obstacle and open up the discussion".

It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously, the stakeholders said.


Last week, Altman said that now is a good time to start thinking about the governance of superintelligence - future AI systems dramatically more capable than even artificial general intelligence (AGI).

Altman stressed that the world must mitigate the risks of today's AI technology too, "but superintelligence will require special treatment and coordination".

"Given the possibility of existential risk, we can't just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example," he noted.

Altman had earlier admitted that if AI technology goes wrong, it can go quite wrong, as US senators expressed their fears about AI chatbots like ChatGPT.


This story has been sourced from a third-party syndicated feed. Mid-day accepts no responsibility or liability for the dependability, trustworthiness or accuracy of the text. Mid-day management/mid-day.com reserves the sole right to alter, delete or remove (without notice) the content at its absolute discretion for any reason whatsoever.

"Exciting news! Mid-day is now on WhatsApp Channels Subscribe today by clicking the link and stay updated with the latest news!" Click here!

Register for FREE
to continue reading !

This is not a paywall.
However, your registration helps us understand your preferences better and enables us to provide insightful and credible journalism for all our readers.
