Top researchers, experts and CEOs (including Sam Altman of OpenAI) have warned that artificial intelligence could lead to the extinction of humanity. In a 22-word statement, they said that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The statement was published by the US-based nonprofit the Center for AI Safety, and was co-signed by Google DeepMind CEO Demis Hassabis as well as Geoffrey Hinton and Yoshua Bengio, two of the three AI researchers who won the 2018 Turing Award for their work on AI.
"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks," the Center said.
The statement "aims to overcome this obstacle and open up the discussion". It is also meant to create common knowledge of the growing number of experts and public figures who take some of advanced AI's most severe risks seriously, the Center added.
In March this year, several top entrepreneurs and AI researchers, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
Last week, Altman said that now is a good time to start thinking about the governance of superintelligence - future AI systems dramatically more capable than even artificial general intelligence (AGI).
Altman stressed that the world must mitigate the risks of today's AI technology too, "but superintelligence will require special treatment and coordination".
"Given the possibility of existential risk, we can't just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example," he noted.
Altman had earlier admitted that if AI technology goes wrong, "it can go quite wrong", after US senators expressed their fears about AI chatbots like ChatGPT.
This story has been sourced from a third-party syndicated feed and agencies. Mid-day accepts no responsibility or liability for the dependability, trustworthiness or reliability of the text. Mid-day management/mid-day.com reserves the sole right to alter, delete or remove (without notice) the content at its absolute discretion for any reason whatsoever.