Sdorn Provides Timely and Accurate Technology News, Covering APP, AI, IoT, Cybersecurity, Startup and Innovation.
X is letting paid users hide their likes
It's the end of an era for X, the app formerly known as Twitter. Users
2023-09-01 00:53
Kelis has no interest in addressing that Bill Murray dating speculation
Kelis can't be bothered with speculation about her dating life.
2023-06-12 01:49
Microsoft, Amazon facing UK antitrust probe over cloud services
Microsoft and Amazon could be in hot water over apparently making it difficult for UK customers to use multiple suppliers of vital cloud services.
2023-10-05 18:45
Meta launches AI-based video editing tools
Meta Platforms on Thursday launched two new AI-based features for video editing that could be used for posting
2023-11-17 07:48
‘Hostile states using organised crime gangs as proxies in the UK’
Hostile states are using organised crime gangs to carry out illegal activity in the UK, the head of the National Crime Agency has warned.

NCA director-general Graeme Biggar highlighted “the emerging links between serious and organised crime and hostile states” in a speech outlining the agency’s annual assessment of crime threats to Britain.

Speaking in Westminster, central London, on Monday, he said: “North Korea has for some time used cybercrime to steal funds and more recently cryptocurrency.

“The Russian state has long tolerated and occasionally tasked the cybercrime groups on its territory, and had links with its oligarchs and their enablers.

“And over the last year we have seen hostile states beginning to use organised crime groups – not always of the same nationality – as proxies.

“It is a development we and our colleagues in MI5 and CT (counter-terrorism) policing are watching closely.”

Mr Biggar said the biggest group of offenders in the UK is those who pose a sexual threat to children, estimated to be between 680,000 and 830,000 people – around 10 times the prison population. He warned that the availability of abuse images online has a radicalising effect by normalising paedophiles’ behaviour, and that viewing images, whether real or AI-generated, increases the risk of someone going on to abuse a child themselves.

There are around 59,000 people involved in serious organised crime in the UK, with around £12 billion generated by criminal activities each year, and around £100 billion of dirty cash from across the globe laundered through the UK.

Key threats to the UK include:

– Criminals exploiting migrants travelling to the UK in small boats. The number of arrivals doubled to more than 45,000 in 2022, with gangs using “bigger, flimsier, single-use boats” and packing more people on to each craft, Mr Biggar said.

– Illegal drug use that fuels a raft of other crimes including violence, theft, use of guns and modern slavery. Nearly 120 tonnes of cocaine and 40 tonnes of heroin are consumed in the UK every year, and NCA analysis of waste water suggests cocaine use is increasing by 25% in some areas. The agency wants to stop synthetic opioids like fentanyl taking hold here as they have in the US.

– Online fraud, which accounts for more than 40% of all crime. Mr Biggar said: “We assess that 75% of fraud is partially or fully committed from overseas. Generative AI is also being used to make frauds more believable, through the use of ever better deep fake videos and ChatGPT to write more compelling phishing emails.”

Mr Biggar said developments in technology, such as the increased use of end-to-end encryption, are making the agency’s work harder.

He finished his speech by saying: “Law enforcement, including the NCA, needs to do more to be at the leading edge of new technology: this will require collective vision and sustained investment.

“And, secondly, we need more effective strategic partnership from technology companies.

“This is about responsible behaviour, about designing public safety into their products alongside privacy, so that we all reap the benefits from technology, rather than suffering their consequences.”
2023-07-17 19:52
Lula Enlists Neighbors Into Brazil’s Battle to Save the Amazon
The leaders of South America’s Amazon nations will gather in Brazil this week as President Luiz Inacio Lula
2023-08-08 17:28
Meta backs down on Donald Trump Jr ‘misinformation’ warning
It didn’t take very long for conservatives to pounce on Meta’s new Twitter competitor and accuse it of censoring a prominent conservative, forcing the social media giant to back down.

Last week, the New York Post reported that users of Instagram Threads – the upstart from Facebook’s parent company meant to take advantage of Twitter users’ discontent over the site’s Elon Musk-era problems – were shown a warning when they attempted to follow Donald Trump Jr, the eldest son of twice-impeached, twice-indicted ex-president Donald Trump. They were asked if they were “sure” they wanted to do so, and warned that the younger Mr Trump had “repeatedly posted false information that was reviewed by independent fact-checkers or went against our Community Guidelines”.

The Trump Organization executive, who frequently posts false and inflammatory statements targeting prominent Democrats, posted a screen grab of the warning to Twitter on Thursday, around the time the new app went live. “Threads not exactly off to a great start,” he wrote. “Hey Instagram, threads is verbal, so the whole skimpy bikini thing is not going to work so well if your influencers can’t actually formulate a sentence… IMHO you may want to rethink cutting off those who can”.

Meta communications boss Andy Stone responded that the warning “was an error and shouldn’t have happened”. “It’s been fixed,” he added. In response, Mr Trump replied: “Ok thanks I appreciate that”.

The frustrated would-be poster’s father was banned from Instagram and Facebook for two years after he incited a deadly riot at the US Capitol on 6 January 2021. On that day, a mob of the defeated president’s supporters stormed the seat of the US legislature in hopes of preventing certification of President Joe Biden’s 2020 election victory.
2023-07-11 00:18
Voices: The real reason companies are warning that AI is as bad as nuclear war
They are 22 words that could terrify those who read them, as brutal in their simplicity as they are general in their meaning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

That is the statement from the San Francisco-based non-profit the Center for AI Safety, signed by chief executives from Google DeepMind and ChatGPT creator OpenAI, along with other major figures in artificial intelligence research.

The fact that the statement has been signed by so many leading AI researchers and companies means that it should be heeded. But it also means that it should be robustly examined: why are they saying this, and why now? The answer might take some of the terror away (though not all of it).

A statement like this functions as something like a reverse marketing campaign: our products are so powerful and so new, it says, that they could wipe out the world. Most tech products just promise to change our lives; these ones could end it. And so what looks like a statement about danger is also one that highlights just how much Google, OpenAI and others think they have to offer.

Warning that AI could be as terrible as pandemics also has the peculiar effect of making artificial intelligence’s dangers seem as if they simply arise naturally in the world, like the mutation of a virus. But every dangerous AI is the product of intentional choices by its developers – and in most cases, by the companies that have signed the new statement.

Who is the statement for? Who are these companies talking to? After all, they are the ones creating the products that might extinguish life on Earth. It reads a little like being hectored by a burglar about your house’s locks not being good enough.

None of this is to say that the warning is untrue, or shouldn’t be heeded; the danger is very real indeed. But it does mean that we should ask a few more questions of those warning us about it, especially when they are, conveniently, the companies that created this ostensibly apocalyptic technology in the first place.

AI doesn’t feel so world-destroying yet. The statement’s doomy words might come as some surprise to those who have used the more accessible AI systems, such as ChatGPT. Conversations with that chatbot and others can be funny, surprising, delightful and sometimes scary – but it’s hard to see how what is mostly prattle and babble from a smart but stupid chatbot could destroy the world.

They might also come as a surprise to those who have read about the many, very important ways that AI is already being used to help save us, not kill us. Only last week, scientists announced that they had used artificial intelligence to find new antibiotics that could kill off superbugs, and that is just the beginning.

By focusing on the “risk of extinction” and the “societal-scale risk” posed by AI, however, its proponents are able to shift the focus away from both the weaknesses of actually existing AI and the ethical questions that surround it. The intensity of the statement, with its references to nuclear war and pandemics, makes it feel as if we are at a point equivalent to cowering in our bomb shelters or in lockdown. They say there are no atheists in foxholes; we might also say there are no ethicists in fallout shelters.

If AI is akin to nuclear war, though, we are closer to the formation of the Manhattan Project than we are to the Cold War. We don’t need to be hunkering down as if the danger is here and there is nothing we can do about it but “mitigate” it. There’s still time to decide what this technology looks like, how powerful it is and who will be at the sharp end of that power.

Statements like this reflect the fact that the systems we have today are a long way from those we might have tomorrow: the work going on at the companies that warned us about these issues is vast, and could be much more transformative than chatting with a robot. It is all happening in secret, shrouded in both mystery and marketing buzz, but what we can discern is that we might be only a few years away from systems that are both more powerful and more sinister.

Already, the world is struggling to differentiate between fake images and real ones; soon, developments in AI could make it very difficult to tell the difference between fake people and real ones. At least according to some in the industry, AI is set to develop at such a pace that it might be only a few years before those warnings are less abstractly worrying and more concretely terrifying.

The statement is correct in identifying those risks, and in urging work to avoid them. But it is more than a little helpful to the companies that signed it in making those risks seem inevitable and naturally occurring, as if they are not choosing to build and profit from the technology they are so worried about. It is those companies, not artificial intelligence, that have the power to decide what that future looks like – and whether it will include our “extinction”.
2023-05-31 18:58
10 ways you can support teachers this school year
As kids and educators settle into a new school year, a little bit of generosity
2023-09-08 17:29
AMCO Produce Chooses Sollum Technologies Dynamic LED Grow Lights
MONTRÉAL, Sep 5, 2023 (BUSINESS WIRE)
2023-09-05 18:15
Apple lost $200 billion in two days after reports of iPhone ban in China
Shares of Apple fell by 3.4% on Thursday following reports that China plans to expand a ban on the use of iPhones to government-backed agencies and companies.
2023-09-08 02:45
Montana Youth Climate Activists Get Historic Win in State Case
(Bloomberg Law) -- A state judge ruled Monday that Montana’s oil and gas policies are infringing on young people’s constitutional
2023-08-15 01:55