A long list of AI scientists, engineers, and other notable figures has signed a statement warning of the urgent risks artificial intelligence poses, including extinction.
The "Statement on AI Risk" was posted on the Center for AI Safety (CAIS) website and reads as follows:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Such a statement could easily be dismissed as hyperbole, but a scan of the experts and notable figures who have put their signatures to it makes it hard not to take seriously. Those signatories include Demis Hassabis (CEO, Google DeepMind), Sam Altman (CEO, OpenAI), and Dario Amodei (CEO, Anthropic).
The list of signatories continues to grow, as experts and notable figures (executives, professors, leaders) are encouraged to sign if they agree. At the time of writing, there are over 120 signatories, including key figures such as Peter Norvig, Bruce Schneier, and multiple others from DeepMind, OpenAI, and a slew of AI-focused companies and universities.
CAIS exists to "reduce societal-scale risks from artificial intelligence" and believes "AI safety remains remarkably neglected" while society is "ill-prepared to manage the risks from AI." With that in mind, CAIS is creating a research ecosystem and advising industry leaders and policymakers in a bid to establish guidelines for "safe and responsible deployment of AI."
Last week, Microsoft called on the US and other countries to establish government agencies dedicated to regulating AI. OpenAI's Sam Altman also asked for regulation during a congressional hearing earlier this month, while key technologists have some fears over what AI means for the human race. Meanwhile, most Americans know what ChatGPT is, but few of them actually use it.