How much of a threat does AI really pose? Get your ticket for our free exclusive event
It seems that every day we hear about more businesses across the globe adopting AI. Tim Cook has recently revealed that Apple is building AI into ‘every product’, while Netflix has listed a controversial AI job paying $900,000 amid strike action from actors against the technology.

It's not just big businesses investing in artificial intelligence, however. Multiple studies are beginning to emerge touting AI's benefits: one claimed AI can read breast cancer screening images, and another argued it could help revolutionise the way children are taught.

So where does this leave us? How worried are we supposed to be about AI? Is it an exciting development in technology or a genuine threat to humanity as we know it?

As the world continues to expand the exploration, use and development of artificial intelligence, The Independent's tech team is going to examine exactly what it means for our workplaces, our ways of communicating and our day-to-day lives.

In The Independent's virtual event series, our tech editor Andrew Griffin will examine exactly what threat AI poses as it continues to evolve. He will be joined by his deputy Anthony Cuthbertson, as well as a panel of other experts, to comment on the latest from the world of artificial intelligence and to answer your burning questions. The panel will discuss the advantages and disadvantages of AI, the moral and legal issues surrounding it, the latest developments on the horizon and what the future of AI holds for the planet.

The event will take place on August 17 on Zoom and will start at 6.30pm. For more information and to sign up for a free ticket click here. You can also post questions in the comments of this article.
2023-08-12 00:20
Elon Musk 'likes' trending #BanTheADL posts as white supremacist ad runs on platform
Over the past 24 hours, the hashtag #BanTheADL has been trending on X, the platform
2023-09-02 06:28
Factorial Earns UN 38.3 Certification to ship 100Ah Lithium-metal Solid-State Battery
WOBURN, Mass.--(BUSINESS WIRE)--May 23, 2023--
2023-05-23 21:25
Canada demands Meta lift 'reckless' ban on news to allow fires info to be shared
By David Ljunggren OTTAWA (Reuters) - The Canadian government on Friday demanded that Meta lift a "reckless" ban on domestic news
2023-08-19 03:23
Sizzling Temperatures Trigger UK Health Alert for the Weekend
Soaring temperatures caused by a blast of hot air led the UK to post fresh health warnings through
2023-06-09 15:18
Voices: The real reason companies are warning that AI is as bad as nuclear war
They are 22 words that could terrify those who read them, as brutal in their simplicity as they are general in their meaning: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

That is the statement from San Francisco-based non-profit the Center for AI Safety, signed by chief executives from Google DeepMind and ChatGPT creator OpenAI, along with other major figures in artificial intelligence research.

The fact that the statement has been signed by so many leading AI researchers and companies means that it should be heeded. But it also means that it should be robustly examined: why are they saying this, and why now? The answer might take some of the terror away (though not all of it).

Writing a statement like this functions as something like a reverse marketing campaign: our products are so powerful and so new, it says, that they could wipe out the world. Most tech products just promise to change our lives; these ones could end it. And so what looks like a statement about danger is also one that highlights just how much Google, OpenAI and others think they have to offer.

Warning that AI could be as terrible as pandemics also has the peculiar effect of making artificial intelligence's dangers seem as if they simply arise naturally in the world, like the mutation of a virus. But every dangerous AI is the product of intentional choices by its developers – and in most cases, by the companies that have signed the new statement.

Who is the statement for? Who are these companies talking to? After all, they are the ones creating the products that might extinguish life on Earth. It reads a little like being hectored by a burglar about your house's locks not being good enough.

None of this is to say that the warning is untrue, or shouldn't be heeded; the danger is very real indeed. But it does mean that we should ask a few more questions of those warning us about it, especially when they are conveniently the companies that created this ostensibly apocalyptic tech in the first place.

AI doesn't feel so world-destroying yet. The statement's doomy words might come as some surprise to those who have used the more accessible AI systems, such as ChatGPT. Conversations with that chatbot and others can be funny, surprising, delightful and sometimes scary – but it's hard to see how what is mostly prattle and babble from a smart but stupid chatbot could destroy the world.

They might also come as a surprise to those who have read about the many, very important ways that AI is already being used to help save us, not kill us. Only last week, scientists announced that they had used artificial intelligence to find new antibiotics that could kill off superbugs, and that is just the beginning.

By focusing on the "risk of extinction" and the "societal-scale risk" posed by AI, however, its proponents are able to shift the focus away from both the weaknesses of actually existing AI and the ethical questions that surround it. The intensity of the statement, with its references to nuclear war and pandemics, makes it feel as if we are already cowering in our bomb shelters or in lockdown. They say there are no atheists in foxholes; we might also say there are no ethicists in fallout shelters. If AI is akin to nuclear war, though, we are closer to the formation of the Manhattan Project than we are to the Cold War.
We don't need to be hunkering down as if the danger is here and there is nothing we can do about it but "mitigate it". There is still time to decide what this technology looks like, how powerful it is and who will be at the sharp end of that power.

Statements like this are a reflection of the fact that the systems we have today are a long way from those we might have tomorrow: the work going on at the companies that warned us about these issues is vast, and could be much more transformative than chatting with a robot. It is all happening in secret, shrouded in both mystery and marketing buzz, but what we can discern is that we might only be a few years away from systems that are both more powerful and more sinister.

Already, the world is struggling to differentiate between fake images and real ones; soon, developments in AI could make it very difficult to tell the difference between fake people and real ones. At least according to some in the industry, AI is set to develop at such a pace that it might only be a few years before those warnings are less abstractly worrying and more concretely terrifying.

The statement is correct in identifying those risks, and in urging work to avoid them. But it is more than a little helpful to the companies that signed it in making those risks seem inevitable and naturally occurring, as if they are not choosing to build and profit from the technology they are so worried about. It is those companies, not artificial intelligence, that have the power to decide what that future looks like – and whether it will include our "extinction".
2023-05-31 18:58
Amouranth slams scalper for selling her adult toy over four times the original price
Amouranth also declared that the toy has sold out and that there may not be a second run
2023-08-13 12:53
EU study slams big tech firms over Russian disinformation
Tech titans, including TikTok and Twitter, failed to effectively tackle Russian disinformation online during the first year of the war in Ukraine, according to a...
2023-08-30 22:47
NBA 2K24 Mamba Moments: How to Complete, Rewards
To complete NBA 2K24's Mamba Moments, players must recreate iconic games from Kobe Bryant's career to earn free MyTEAM and MyCAREER rewards.
2023-09-09 01:57
South Africa Beats Climate Goal as Blackouts Slash Emissions
South Africa is ahead of its target for cutting emissions of greenhouse gases. Output of the climate-warming gases
2023-05-15 22:54
Kai Cenat and IShowSpeed's first Rumble episode's reviews out: 'Will only get better from here'
Fans are very excited to see Kai Cenat and IShowSpeed collaborate on the show
2023-05-28 18:46
Cloud Range Appoints Cybersecurity Leader Galina Antova to Board of Directors
NASHVILLE, Tenn.--(BUSINESS WIRE)--Jul 18, 2023--
2023-07-18 18:20