
Talk of AI dangers has ‘run ahead of the technology’, says Nick Clegg

2023-07-19 17:19

Talk of artificial intelligence (AI) models posing a threat to humanity has “run ahead of the technology”, according to Sir Nick Clegg.

The former Liberal Democrat leader and deputy prime minister said concerns around “open-source” models, which are made freely available and can be modified by the public, were exaggerated, and the technology could offer solutions to problems such as hate speech.

It comes after Facebook’s parent company Meta said on Tuesday that it was opening access to its new large language model, Llama 2, which will be free for research and commercial use.

Generative AI tools such as ChatGPT, a chatbot that can provide detailed prose responses and engage in human-like conversations, have become widely used in the public domain in the last year.

Speaking on BBC Radio 4’s Today programme on Wednesday, Sir Nick, president of global affairs at Meta, said: “My view is that the hype has somewhat run ahead of the technology.

“I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself.

“The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid.”

Sir Nick said a claim by Dame Wendy Hall, co-chair of the Government’s AI Review, that Meta’s model could not be regulated and was akin to “giving people a template to build a nuclear bomb” was “complete hyperbole”, adding: “It’s not as if we’re at a T-junction where firms can choose to open source or not. Models are being open-sourced all the time already.”

He said Meta had 350 people “stress-testing” its models over several months to check for potential issues, and that Llama 2 was safer than any other large language model currently available on the internet.

Meta has previously faced questions around security and trust, with the company fined 1.2 billion euros (£1 billion) in May over the transfer of data from European users to US servers.
