
AI will not wipe us out and should be used as a force for good, hundreds of experts urge

2023-07-22 01:45

AI does not represent “an existential threat to humanity”, hundreds of experts have urged in a new open letter.

It is just the latest intervention by engineers and other academics amid an increasing interest and fear about the future of artificial intelligence.

The new letter follows a recent intervention by technologists including Elon Musk, who in March was one of more than 1,000 experts who said that humanity was in danger from AI experiments. That letter called on companies to pause their work and consider the dangers, and asked governments to intervene if they would not.

The new letter stands in opposition to that call. It says that AI “will be a transformative force for good if we get critical decisions about its development and use right”.

The letter was organised by UK-based BCS, the Chartered Institute for IT. It said that it had launched the letter to counter “AI doom”.

It says that the UK “can help lead the way in setting professional and technical standards in AI roles, supported by a robust code of conduct, international collaboration and fully resourced regulation”. By doing so it would not only help promote the UK as an AI destination but also ensure that AI was used for good, it said.

The signatories include a range of people from across society, including those who work in think tanks and public bodies rather than specifically on artificial intelligence. But they also include a range of engineers and others who have worked on artificial intelligence in academic and business contexts.

BCS said that calls such as those in the letter signed by Elon Musk earlier this year could play into the hands of bad actors.

“The technologists and leaders who signed our statement believe AI won’t grow up like The Terminator but instead as a trusted co-pilot in learning, work, healthcare, entertainment,” said Rashik Parmar, the chief executive of BCS, The Chartered Institute for IT.

“One way of achieving that is for AI to be created and managed by licensed and ethical professionals meeting standards that are recognised across international borders.

“The public need confidence that the experts not only know how to create and use AI but how to use it responsibly. Yes, AI is a journey with no return ticket, but this letter shows the tech community doesn’t believe it ends with the nightmare scenario of evil robot overlords.”

