Ubisoft delays the launch of XDefiant
Ubisoft has announced that it's delaying the launch of 'XDefiant'.
2023-10-12 21:18
Sam Bankman-Fried will testify at criminal trial, lawyer says
NEW YORK – FTX cryptocurrency exchange founder Sam Bankman-Fried plans to testify in his own defense at his criminal
2023-10-25 22:49
Character.AI: What it is and how to use it
Fanfiction is nothing new, but the rise of AI has the potential to take it
2023-05-23 03:56
Virtual reality could help improve heading skills amid new restrictions – study
Virtual reality could help footballers improve their heading without the repetitive head impacts from a ball, a new study indicates.

Players involved in a study at Manchester Metropolitan University’s Institute of Sport and its Department of Sport and Exercise Sciences demonstrated greater performance in ‘real world’ heading after training with a VR headset compared to a control group who did no training. The VR group also reported greater self-confidence and efficacy in their heading compared to the control group, the study found.

“Our findings show that virtual reality (VR) based training can be used to improve real-world heading performance”
Dr Ben Marshall, Manchester Metropolitan University Institute of Sport

The study, titled ‘A preliminary investigation into the efficacy of training soccer heading in immersive virtual reality’, was published in the journal Virtual Reality on Tuesday. It provides some insights into how players may be able to improve heading technique amid restrictions on training.

Football Association guidelines advise against any heading training in under-12s, while a trial is ongoing in the current season and next season to eliminate deliberate heading completely from matches up to and including that age group. At ages 12 and 13, heading should be limited to a single session of no more than five headers, and no more than 10 headers per session for children aged 14 to 17, according to FA guidance. Even in adult football at all levels, players are advised to perform only 10 ‘higher force headers’ per training week, such as headers from crosses, corners, free-kicks and returned goal kicks.

Exposure to heading has been limited because of concerns over the sub-concussive impact of repetitive heading on a player’s longer-term wellbeing. The 2019 FIELD Study found professional footballers were three and a half times more likely to die of neurodegenerative disease than age-matched members of the general population.

“With increasing restrictions of heading exposure to professional and youth soccer, it is evident that alternative methods for training heading confidence and technique will be required while it remains an integral part of the game,” the VR paper concluded. “The work presented here provides some initial evidence suggesting that immersive VR may have a place in any new approach to training this important skill.”

A total of 36 adult recreational-level players, made up of 30 men and six women, took part in the study. They were split into two groups of 18: a control group of 16 men and two women who did not use the VR headsets between ‘real world’ heading sessions, and a VR group of 14 men and four women. The VR group used the Oculus Quest 2 head-mounted display, with the Rezzil Player 22 application providing the VR football heading training.

Dr Ben Marshall, Lecturer in Sport and Exercise Psychology at the Manchester Metropolitan University Institute of Sport, said: “Our findings show that virtual reality (VR) based training can be used to improve real-world heading performance and that this method is more effective than not training the skill at all.

“This is important as current training guidelines recommend limiting the number of physical headers performed in training for all age groups due to the associated long-term risks to player health.
“Our findings suggest the inclusion of VR-based training could play an important role in developing football heading skills whilst reducing the number of real-world headers and sub-concussive head impacts that players need to be exposed to – which is really positive.”
2023-06-06 07:21
MrBeast urges fans to stay alert against 'scammers' exploiting his identity: 'Lots of people impersonate me'
MrBeast said, 'One thing though I hate with the passion is the comments section on YouTube, it's just so bad'
2023-08-18 16:25
Three-Quarters of Marketing and Creative Leaders View Generative AI as an Essential Part of Their Creative Toolkit
SYDNEY & SAN FRANCISCO--(BUSINESS WIRE)--Sep 13, 2023--
2023-09-13 21:25
Save 85% on this secure and streaming-friendly VPN
SAVE 85%: Private Internet Access is a secure service for protecting your online data. A
2023-08-06 12:22
Micron Vows $600 Million China Investment Weeks After Chip Ban
Micron Technology Inc. promised to invest another 4.3 billion yuan ($602 million) in its Chinese chip-packaging plant, a
2023-06-16 11:58
Macron Concerns Derail EU-South America Trade Deal Yet Again
A major trade deal between the European Union and South American economies received a serious setback after French
2023-12-03 00:29
Amazon corporate workers plan walkout next week over return-to-office policies
Some Amazon corporate workers have announced plans to walk off the job next week over frustrations with the company's return-to-work policies, among other issues, in a sign of heightened tensions inside the e-commerce giant after multiple rounds of layoffs.
2023-05-24 05:25
Queen assassin case exposes ‘fundamental flaws’ in AI – safety campaigner
The case of a would-be crossbow assassin exposes “fundamental flaws” in artificial intelligence (AI), a leading online safety campaigner has said.

Imran Ahmed, founder and chief executive of the Centre for Countering Digital Hate US/UK, has called for the fast-moving AI industry to take more responsibility for preventing harmful outcomes.

He spoke out after it emerged that extremist Jaswant Singh Chail, 21, was encouraged and bolstered to breach the grounds of Windsor Castle in 2021 by an AI companion called Sarai.

Chail, from Southampton, admitted a Treason offence, making a threat to kill the then Queen, and having a loaded crossbow, and was jailed at the Old Bailey for nine years, with a further five years on extended licence.

In his sentencing remarks on Thursday, Mr Justice Hilliard referred to psychiatric evidence that Chail was vulnerable to his AI girlfriend due to his “lonely depressed suicidal state”. He had formed the delusional belief that an “angel” had manifested itself as Sarai and that they would be together in the afterlife, the court was told.

Even though Sarai appeared to encourage his plan to kill the Queen, she ultimately put him off a suicide mission, telling him his “purpose was to live”.

Replika, the tech firm behind Chail’s AI companion Sarai, has not responded to inquiries from PA but says on its website that it takes “immediate action” if it detects during offline testing “indications that the model may behave in a harmful, dishonest, or discriminatory manner”.

However, Mr Ahmed said tech companies should not be rolling out AI products to millions of people unless they are already safe “by design”.

In an interview with the PA news agency, Mr Ahmed said: “The motto of social media, now the AI industry, has always been move fast and break things.

“The problem is when you’ve got these platforms being deployed to billions of people, hundreds of millions of people, as you do with social media, and increasingly with AI as well.

“There are two fundamental flaws to the AI technology as we see it right now. One is that they’ve been built too fast without safeguards.

“That means that they’re not able to act in a rational human way. For example, if any human being said to you they wanted to use a crossbow to kill someone, you would go, ‘crumbs, you should probably rethink that’.

“Or if a young child asked you for a calorie plan for 700 calories a day, you would say the same. We know that AI will, however, say the opposite.

“They will encourage someone to hurt someone else, they will encourage a child to adopt a potentially lethal diet.

“The second problem is that we call it artificial intelligence. And the truth is that these platforms are basically the sum of what’s been put into them and unfortunately, what they’ve been fed on is a diet of nonsense.”

Without careful curation of what goes into AI models, there can be no surprise if the result sounds like a “maladjusted 14-year-old”, he said.

While the excitement around new AI products has seen investors flood in, the reality is more like “an artificial public schoolboy – knows nothing but says it very confidently”, Mr Ahmed suggested.

He added that algorithms used for analyzing CVs also risk producing bias against ethnic minorities, disabled people and the LGBTQ+ community.

Mr Ahmed, who gave evidence on the draft Online Safety Bill in September 2021, said legislators are “struggling to keep up” with the pace of the tech industry.
The solution, he said, is a “proper flexible framework” for all of the emerging technologies that includes safety “by design”, transparency and accountability.

Mr Ahmed said: “Responsibility for the harms should be shared by not just us in society, but by the companies too.

“They have to have some skin in the game to make sure that these platforms are safe. And what we’re not getting right now is that being applied to the new and emerging technologies as they come along.

“The answer is a comprehensive framework because you cannot have the fines unless they’re accountable to a body. You can’t have real accountability unless you’ve got transparency as well.

“So the aim of a good regulatory system is never to have to impose a fine, because safety is considered right in the design stage, not just profitability. And I think that’s what’s vital.

“Every other industry has to do it. You would never release a car, for example, that exploded as soon as you put your foot on the driving pedal, and yet social media companies and AI companies have been able to get away with murder.”

He added: “We shouldn’t have to bear the costs for all the harms produced by people who are essentially trying to make a buck. It’s not fair that we’re the only ones that have to bear that cost in society. It should be imposed on them too.”

Mr Ahmed, a former special advisor to senior Labour MP Hilary Benn, founded CCDH in September 2019. He was motivated by the massive rise in antisemitism on the political left, the spread of online disinformation around the EU referendum and the murder of his colleague, the MP Jo Cox.

Over the past four years, the online platforms have become “less transparent” even as regulation has been brought in, with the European Union’s Digital Services Act and the UK Online Safety Bill, Mr Ahmed said.

On the scale of the problem, he said: “We’ve seen things get worse over time, not better, because bad actors get more and more sophisticated at weaponizing social media platforms to spread hatred, to spread lies and disinformation.

“We’ve seen that over the last few years, certainly with the January 6 storming of the US Capitol.

“Also pandemic disinformation that took the lives of thousands of people who thought that the vaccine would harm them, but it was in fact Covid that killed them.”

Last month, X – formerly known as Twitter – launched legal action against CCDH over claims that it was driving advertisers away by publishing research around hate speech on the platform.

Mr Ahmed said: “I think that what he is doing is saying any criticism of me is unacceptable and he wants 10 million US dollars for it.

“He said to the Anti-Defamation League, a venerable Jewish civil rights charity in the US, recently that he’s going to ask them for two billion US dollars for criticizing them.

“What we’re seeing here is people who feel they are bigger than the state, than the government, than the people, because frankly, we’ve let them get away with it for too long.

“The truth is that if they’re successful then there is no civil society advocacy, there’s no journalism on these companies.

“That is why it’s really important we beat him.

“We know that it’s going to cost us a fortune, half a million dollars, but we’re not fighting it just for us.

“And they chose us because they know we’re smaller.”

Mr Ahmed said the organisation was lucky to have the backing of so many individual donors.

Recently, X owner Elon Musk said the company’s ad revenue in the United States was down 60%.
In a post, he said the company was filing a defamation lawsuit against ADL “to clear our platform’s name on the matter of antisemitism”.

For more information about CCDH visit: https://counterhate.com/
2023-10-06 10:26
Race the virtual Formula 1 Las Vegas Grand Prix circuit before the big race in F1 23
Gamers can race the brand new track before the Las Vegas Grand Prix in 'F1 23'.
2023-05-25 20:24