Kai Cenat roasts random girl on TikTok by singing Nicki Minaj's track with Natalie Nunn reference
'She got a regular a** chin and she is beautiful,' a social media user said in response to Kai Cenat's remark
2023-07-30 19:50
Reddit blackout: Why are thousands of the world’s most popular subreddits going dark?
Most of Reddit has now gone "dark" in protest against the management of the online forum.

The controversy began when Reddit announced that it would start charging for access to its API, the technology that allows other developers access to its data. Some of those developers immediately announced that the pricing was so high that it would make their apps unsustainable – and one, the widely respected client Apollo, has since said it will have to shut down.

That set off outrage across Reddit. While the anger initially focused on the decision to start charging for access to the data, it has since grown, with many users suggesting that they are generally dissatisfied with the way the site is being managed.

What has happened to Reddit?

On June 12, many of the world's biggest subreddits went "dark". That meant setting them to private, so that only existing members can see them. Anyone who tries to visit those forums – which include many of Reddit's biggest – without being a member will see a message that the subreddit has gone private and is therefore not available.

In a widely circulated message explaining the outage, users explained that it was intended as a protest. Some subreddits will return on June 14, after 48 hours of darkness, it says, but others might opt never to come back if the problem is not addressed. That is because "many moderators aren't able to put in the work they do with the poor tools available through the official app", the message reads. "This isn't something any of us do lightly: we do what we do because we love Reddit, and we truly believe this change will make it impossible to keep doing what we love."

Why did Reddit change its policy?

All of this began because Reddit announced that it would start charging for access to its API. Many of its users – including Christian Selig, the developer of the Apollo app at the centre of much of the controversy – say that charging is reasonable in principle. Reddit's data is used by sites such as Google and to train artificial intelligence systems, for instance, and at the moment Reddit is not paid for that usage, despite the fact that hosting the data costs the company (which is not profitable) money.

But it was the pricing, and the way it was rolled out, that caused such controversy. Mr Selig said that the pricing would cost his app $2 million per month – far more than storing that data is thought to cost Reddit – and he and other developers were given only 30 days to respond.

Which Reddit forums are part of the blackout?

Almost all of them. The latest numbers suggest that 7,259 of the 7,806 subreddits being tracked are currently unavailable to the public. Of the seven subreddits that have more than 30 million subscribers, all but one – r/pics – have been made private. A full, live list showing both the subreddits that are down and the overall impact of the protest can be found on this tracking page.

How can this happen?

Reddit is unusual among social networks in that it depends heavily on its users, who administer the forums and moderate the content that appears on them. That saves the company a lot of money – Meta, for instance, spends vast sums ensuring that problematic content does not appear on Facebook and Instagram – and it means that those users feel they should be listened to when it comes to such issues. It also means that they are able to take decisions that Reddit's management might not like, including turning those subreddits private.
Some 30,000 moderators are thought to be running the subreddits that are involved in the protest, and working together has given them considerable power to grind the site to a halt.
2023-06-16 00:25
UK Heads for Another Sweltering Summer Driven by Global Warming
This summer in the UK is expected to be hotter than normal, though temperatures aren’t forecast to break
2023-06-09 01:48
Quectel Launches Ultra-Compact FCM360W Wi-Fi 6 and Bluetooth 5.1 Module Ideal for Smart Homes and Industrial IoT Use Cases
VANCOUVER, British Columbia--(BUSINESS WIRE)--Jun 6, 2023--
2023-06-06 18:26
The Rock Confirms MW3 Operator is His Cousin
The Koa King Operator in Call of Duty: Modern Warfare 3 is The Rock's cousin, Ben, a former Navy SEAL. Here's how to get the Warrior Pack in MW3.
2023-11-15 00:22
China defends ban on US chipmaker Micron, accuses Washington of 'economic coercion'
The Chinese government has defended its ban on products from U.S. memory chipmaker Micron Technology Inc. in some computer systems after Washington expressed concern
2023-05-24 18:56
Neuralink’s test monkeys died due to brain implants contrary to Elon Musk’s claims, report suggests
Test monkeys at Elon Musk's controversial biotech startup Neuralink died from a number of complications following brain chip implant procedures, counter to the claims made by the multi-billionaire, according to a new report.

Neuralink has been developing chips to be implanted into the skull, claiming that such a computer-brain interface will help restore vision to the blind and allow paralysed people to walk again. The company has demonstrated its technology in monkeys in the past, including one demonstration of a nine-year-old macaque learning to play the 1970s classic video game Pong.

However, the startup has also been the subject of complaints from animal rights groups, including the Physicians Committee for Responsible Medicine (PCRM), which has criticised the company's "inadequate care" of its research monkeys a number of times in the past.

In a post on X, the Tesla titan said earlier this month that "no monkey has died as a result of a Neuralink implant", in response to allegations that the neurotech firm was inflicting "extreme suffering" on its primate test subjects. "First our early implants, to minimise risk to healthy monkeys, we chose terminal monkeys (close to death already)," Mr Musk posted on X, the platform previously known as Twitter.

In a presentation last year, the multi-billionaire also claimed that Neuralink's animal testing was never "exploratory" but was conducted to confirm scientific hypotheses. "We are extremely careful," he said at the presentation.

However, public documents obtained by PCRM – a nonprofit that advocates against using live animals in testing – present a different picture. The documents, reviewed by Wired, indicate that a number of monkeys on whom the implants were tested were euthanised after suffering various complications, including "bloody diarrhea, partial paralysis, and cerebral edema".

One document reportedly noted that a male macaque was euthanised in March 2020 "after his cranial implant became loose" to the extent that it "could easily be lifted out". A necropsy report for this monkey pointed out that "the failure of this implant can be considered purely mechanical and not exacerbated by infection", which appears to counter Mr Musk's claim that no monkeys died due to Neuralink's chips.

Another primate, the report noted, "began to press her head against the floor for no apparent reason" and lost coordination, her condition deteriorating for months until she was finally euthanised. A necropsy report cited by Wired suggested that the animal was bleeding in her brain and that the neurotech firm's implants had left parts of her cerebral cortex "focally tattered".

However, the company has held that its "use of every animal was extensively planned and considered to balance scientific discovery with the ethical use of animals". Neuralink did not immediately respond to The Independent's request for comment.

The latest report comes as Neuralink announced on Wednesday that it has started recruiting for its first human trial, for people with quadriplegia, after testing its implants on pigs and monkeys. "We're excited to announce that recruitment is open for our first-in-human clinical trial," the company posted on X. "If you have quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS), you may qualify," it said.
2023-09-21 15:23
Humanetics Rolls out New Customer Service Centers of Excellence Across Europe
FARMINGTON HILLS, Mich.--(BUSINESS WIRE)--May 24, 2023--
2023-05-24 15:22
Shift4 Selected as Official Payment Processor of the Cleveland Cavaliers
CLEVELAND--(BUSINESS WIRE)--Aug 24, 2023--
2023-08-24 20:25
Voices: The real reason companies are warning that AI is as bad as nuclear war
They are 22 words that could terrify those who read them, as brutal in their simplicity as they are general in their meaning: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

That statement comes from the San Francisco-based non-profit the Center for AI Safety, and is signed by chief executives from Google DeepMind and ChatGPT creator OpenAI, along with other major figures in artificial intelligence research.

The fact that the statement has been signed by so many leading AI researchers and companies means that it should be heeded. But it also means that it should be robustly examined: why are they saying this, and why now? The answer might take some of the terror away (though not all of it).

Writing a statement like this functions as something like a reverse marketing campaign: our products are so powerful and so new, it says, that they could wipe out the world. Most tech products just promise to change our lives; these ones could end it. And so what looks like a statement about danger is also one that highlights just how much Google, OpenAI and the rest think they have to offer.

Warning that AI could be as terrible as pandemics also has the peculiar effect of making artificial intelligence's dangers seem as if they simply arise naturally in the world, like the mutation of a virus. But every dangerous AI is the product of intentional choices by its developers – in most cases, the very companies that have signed the new statement.

Who is the statement for? Who are these companies talking to? After all, they are the ones creating the products that might extinguish life on Earth. It reads a little like being hectored by a burglar about your house's locks not being good enough.

None of this is to say that the warning is untrue, or shouldn't be heeded; the danger is very real indeed. But it does mean that we should ask a few more questions of those warning us about it, especially when they are conveniently the companies that created this ostensibly apocalyptic tech in the first place.

AI doesn't feel so world-destroying yet. The statement's doomy words might come as some surprise to those who have used the more accessible AI systems, such as ChatGPT. Conversations with that chatbot and others can be funny, surprising, delightful and sometimes scary – but it's hard to see how what is mostly prattle and babble from a smart but stupid chatbot could destroy the world.

They might also come as a surprise to those who have read about the many, very important ways that AI is already being used to help save us, not kill us. Only last week, scientists announced that they had used artificial intelligence to find new antibiotics that could kill off superbugs, and that is just the beginning.

By focusing on the "risk of extinction" and the "societal-scale risk" posed by AI, however, its proponents are able to shift the focus away from both the weaknesses of actually existing AI and the ethical questions that surround it. The intensity of the statement, with its reference to nuclear war and pandemics, makes it feel as if we are at a point equivalent to cowering in our bomb shelters or sitting in lockdown. They say there are no atheists in foxholes; we might also say there are no ethicists in fallout shelters. If AI is akin to nuclear war, though, we are closer to the formation of the Manhattan Project than we are to the Cold War.
We don't need to be hunkering down as if the danger is already here and there is nothing we can do but "mitigate" it. There is still time to decide what this technology looks like, how powerful it is, and who will be at the sharp end of that power.

Statements like this reflect the fact that the systems we have today are a long way from those we might have tomorrow: the work going on at the companies that warned us about these issues is vast, and could be far more transformative than chatting with a robot. It is all happening in secret, shrouded in both mystery and marketing buzz, but what we can discern is that we might be only a few years away from systems that are both more powerful and more sinister.

Already, the world is struggling to differentiate between fake images and real ones; soon, developments in AI could make it very difficult to tell fake people from real ones. At least according to some in the industry, AI is set to develop at such a pace that it might be only a few years before those warnings are less abstractly worrying and more concretely terrifying.

The statement is correct in identifying those risks, and in urging work to avoid them. But it is also more than a little helpful to the companies that signed it, in that it makes those risks seem inevitable and naturally occurring, as if they are not choosing to build and profit from the technology they are so worried about. It is those companies, not artificial intelligence, that have the power to decide what that future looks like – and whether it will include our "extinction".
2023-05-31 18:58
AstraZeneca advances UK clean heat and energy efficiencies with £100m commitment
CAMBRIDGE, United Kingdom--(BUSINESS WIRE)--Sep 14, 2023--
2023-09-14 14:20
Vodafone, Three to merge UK mobile phone operations to capitalize on 5G rollout
Two of the U.K.’s biggest mobile phone operators have agreed to merge their businesses to capitalize on the rollout of next-generation 5G wireless technology in the country
2023-06-14 20:21
