ChatGPT creators OpenAI form ‘Preparedness’ group to get ready for ‘catastrophe’
OpenAI, the creators of ChatGPT, have formed a new group to prepare for the “catastrophic risks” of artificial intelligence.
The “Preparedness” team will aim to “track, evaluate, forecast and protect against catastrophic risks”, the company said.
Those risks include artificial intelligence being used to craft powerful persuasive messages, to undermine cybersecurity and to build nuclear and other kinds of weapons. The team will also work against “autonomous replication and adaptation”, or ARA – the danger that an AI could gain the ability to copy and modify itself.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI said. “But they also pose increasingly severe risks.”
Avoiding those outcomes will mean building frameworks to predict and then protect against the dangerous capabilities of new artificial intelligence systems, OpenAI said. That will be one of the tasks of the new team.
At the same time, OpenAI launched a new “Preparedness Challenge”. It encourages people to think about “the most unique, while still being probable, potentially catastrophic misuse of the model”, such as using it to shut down power grids.
Particularly good submissions of ideas for the malicious uses of artificial intelligence will win credits to use on OpenAI’s tools, and the company suggested that some of those people could be hired to join the team. The team will be led by Aleksander Madry, an AI expert from the Massachusetts Institute of Technology, OpenAI said.
OpenAI revealed the new team as part of its contribution to the UK’s AI Safety Summit, which will happen next week. OpenAI was one of a range of companies that have made commitments on how they will ensure the safe use of artificial intelligence.