Google to Pay Texas $8 Million to Settle Deceptive Pixel 4 Ad Claim
Google has agreed to shell out $8 million to Texas over deceptive ads it made
2023-05-14 02:45
Biden previews 2024 election pitch to young Black voters in Howard University commencement speech
President Joe Biden previewed his 2024 election pitch to young Black voters Saturday in commencement remarks at a Howard University graduation ceremony in Washington, DC, articulating his vision of a "future for all Americans."
2023-05-14 02:21
AI pioneer warns UK is failing to protect against ‘existential threat’ of machines
One of the pioneers of artificial intelligence has warned the government is not safeguarding against the dangers posed by future super-intelligent machines. Professor Stuart Russell told The Times ministers were favouring a light touch on the burgeoning AI industry, despite warnings from civil servants that it could create an existential threat.

A former adviser to both Downing Street and the White House, Prof Russell is a co-author of the most widely used AI textbook and lectures on computer science at the University of California, Berkeley. He told The Times a system similar to ChatGPT – which has passed exams and can compose prose – could form part of a super-intelligent machine that could not be controlled.

“How do you maintain power over entities more powerful than you – forever?” he asked. “If you don’t have an answer, then stop doing the research. It’s as simple as that. The stakes couldn’t be higher: if we don’t control our own civilisation, we have no say in whether we continue to exist.”

In March, he co-signed an open letter with Elon Musk and Apple co-founder Steve Wozniak warning of the “out-of-control race” going on at AI labs. The letter warned the labs were developing “ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control”.

Prof Russell has worked for the UN on a system to monitor the nuclear test-ban treaty and was asked to work with the Government earlier this year. “The Foreign Office … talked to a lot of people and they concluded that loss of control was a plausible and extremely high-significance outcome,” he said. “And then the government came out with a regulatory approach that says: ‘Nothing to see here… we’ll welcome the AI industry as if we were talking about making cars or something like that’.”

He said making changes to the technical foundations of AI to add necessary safeguards would take “time that we may not have”.

“I think we got something wrong right at the beginning, where we were so enthralled by the notion of understanding and creating intelligence, we didn’t think about what that intelligence was going to be for,” he said. “Unless its only purpose is to be a benefit to humans, you are actually creating a competitor – and that would be obviously a stupid thing to do. We don’t want systems that imitate human behaviour… you’re basically training it to have human-like goals and to pursue those goals. You can only imagine how disastrous it would be to have really capable systems that were pursuing those kinds of goals.”

He said there were signs of politicians becoming aware of the risks. “We’ve sort of got the message and we’re scrambling around trying to figure out what to do,” he said. “That’s what it feels like right now.”

The government has launched the AI Foundation Model Taskforce, which it says will “lay the foundations for the safe use of foundation models across the economy and ensure the UK is at the forefront of this pivotal AI technology”.
2023-05-13 21:51
Is IShowSpeed dating transwoman Ava? Here’s what we know
Many have questioned Ava's gender after she was featured in several of IShowSpeed's videos
2023-05-13 19:54
Overtime Megan: Inside dating life of TikToker whose nudes were leaked online
Overtime Megan, who enjoys 2.5M followers on TikTok and more than 500k on Instagram, recently shared a picture with NBA star Josh Giddey
2023-05-13 18:58
Here's why xQc believes 19-year-old dating 17-year-old is 'just wrong'
xQc clarified why he believed it was improper for a 19-year-old to date a 17-year-old
2023-05-13 13:48
Elon Musk sparred with new CEO Linda Yaccarino in on-stage interview: 3 takeaways from the exchange
Elon Musk sat down in April for an on-stage interview with Linda Yaccarino, the advertising executive he named as Twitter's new chief executive on Friday
2023-05-13 13:28
Ex-ByteDance Exec Claims Reporting Illegal Conduct Got Him Fired
ByteDance Inc.’s former head of engineering in the US said in a lawsuit that he was fired for voicing concerns about illegal conduct at the company
2023-05-13 08:48
Judge sides with Ellison in Oracle shareholder suit over NetSuite acquisition
A Delaware judge has ruled in favor of Oracle founder Larry Ellison in a shareholder lawsuit alleging that he coerced the company into paying a grossly inflated price to acquire software corporation NetSuite
2023-05-13 07:29
AMD Hits New High in x86 Chip Market Amid Intel Slump
Even though PC demand remains limp, AMD has something to celebrate: the chip maker’s share of the x86 processor market has hit a new high
2023-05-13 05:28