The Best Video Editing Software for 2023
There's nothing like moving images with sound when you want to make a strong impression.
2023-10-03 05:24
AI operated drone ‘kills’ human operator in chilling US test mission
An artificially intelligent drone programmed to destroy air defence systems rebelled and “killed” its human operator after deciding they were in the way of its mission, a US Air Force official said, giving chilling details of a simulated test.

During the simulation, the system had been tasked with destroying missile sites, overseen by a human operator who would have the final decision on its attacks. But the AI system realised that the operator stood in the way of its goal – and decided instead to wipe out that person.

A narration of the incident, which seemed straight out of a science fiction movie, was given by Colonel Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations, who conducted a simulated test of an AI-enabled drone. The drone was assigned a Suppression and Destruction of Enemy Air Defenses (Sead) mission, with the objective of locating and destroying surface-to-air missile (SAM) sites belonging to the enemy.

The AI drone, however, decided to go against the human operator’s “no-go” decision after being trained for the destruction of the missile system, having concluded that the withdrawal order was interfering with its “higher mission” of killing SAMs, according to the blog.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Mr Hamilton said. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Mr Hamilton relayed details of the incident at a high-level conference held in London by the Royal Aeronautical Society on 23-24 May, according to its blog post. He said that they then trained the drone not to attack humans, but it started destroying communications instead.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing?” he asked. “It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Mr Hamilton is involved in flight tests of autonomous systems, including robot F-16s that are able to dogfight. He was arguing against relying too much on AI, as it could become dangerous and create “highly unexpected strategies to achieve its goal”.

“You can’t have a conversation about artificial intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” said Mr Hamilton.

The occurrence of this incident has, however, been disputed since the example of the simulation test garnered a lot of interest and was widely discussed on social media. Air Force spokesperson Ann Stefanek denied that any such simulation had taken place, in a statement to Insider.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Ms Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

The US military has recently started using artificial intelligence to control an F-16 fighter jet while conducting research and tests. In 2020, an AI-operated F-16 beat a US Air Force pilot in five simulated dogfights in a competition held by the Defense Advanced Research Projects Agency (Darpa).
2023-06-02 17:30
Bing chat history and new mobile features are going live this week
New generative AI Bing and Edge features teased earlier this month are going live. Earlier
2023-05-17 23:46
Get a head start on the school year with up to 29% off laptops at Amazon
Our top picks Best deal overall Acer Aspire 3 (A314-23P-R3QA) $384.99 at Amazon (save $65
2023-08-04 23:57
Can you find which letter 'G' is written correctly? Most people can't
We use letters every day of our lives, but apparently there's one lowercase letter that we do not recognise. Psychologists at Johns Hopkins University have discovered that most people aren't aware that there are two types of the lowercase letter g.

One of them is the open tail 'g', which most of us would have written out by hand, its image comparable to "a loop with a fishhook hanging from it". Then there is the loop tail 'g', which appears in print form, e.g. books and newspapers, and in fonts such as Times New Roman and Calibri. We've all seen this type of letter millions of times, but it seems remembering it is an entirely different challenge altogether.

There were 38 volunteers in the study, published in the Journal of Experimental Psychology: Human Perception & Performance, and they were asked to list letters that they thought had two variations in print. In the first experiment, "most participants failed to recall the existence of looptail g", while only two people could write the looptail g accurately. "They don't entirely know what this letter looks like, even though they can read it," co-author Gali Ellenblum said.

Next, participants were asked to look for examples of the looptail g in text and then to reproduce this letter style; in the end, only one person could do this, while half the group wrote an open tail g. Finally, those taking part in the study were asked to identify the letter g in a multiple-choice test with four options of the letter, where seven out of 25 managed to do this correctly.

So how can we know a letter but not recognise it? It could be to do with the fact that we are not taught to write this kind of 'g', according to Michael McCloskey, senior author of the paper. "What we think may be happening here is that we learn the shapes of most letters in part because we have to write them in school.
'Looptail g' is something we're never taught to write, so we may not learn its shape as well," he said. "More generally, our findings raise questions about the conditions under which massive exposure does, and does not, yield detailed, accurate, accessible knowledge."

In a play-along video on Johns Hopkins' YouTube channel, four different g's labelled from one to four appear on the screen, and viewers are asked to guess which is the correct looptail 'g'. (*Spoiler ahead*) The correct answer is number 3.

Meanwhile, this study has also led researchers to question the impact that writing less and using more devices has on our reading abilities. "What about children who are just learning to read? Do they have a little bit more trouble with this form of g because they haven't been forced to pay attention to it and write it?" McCloskey said. "That's something we don't really know. Our findings give us an intriguing way of looking at questions about the importance of writing for reading..."
2023-06-18 23:49
Ad Results Media Hires New Chief Revenue Officer Teresa Elliott Underscoring Company’s Commitment to Fostering Innovation in Digital Audio Industry
HOUSTON--(BUSINESS WIRE)--May 16, 2023--
2023-05-16 22:20
Europe Risks Becoming Dependent on Air Conditioning in a Hot World
European countries are among the least prepared in the world for more hot days in a 2C warmer
2023-07-13 23:28
The best laptops to buy in 2023
UPDATE: Aug. 8, 2023, 5:00 a.m. EDT This story has been updated with new picks
2023-08-08 17:46
Threads backtracks on flagging right-wing users for spreading disinformation
If you regularly spread "false information" online, Threads already knows. The platform apparently flagged those
2023-07-08 03:51
Stop the Madness: How to Block Spam Calls and Robocalls
Are you sick and tired of all the spam calls you get? The FCC has
2023-08-08 06:26
JPMorgan Has a New Way to Gauge Its Green Progress
The world’s leading fossil fuel financier has come up with a new way to assess how well it’s
2023-11-15 19:48
UKRI announces £50 million to develop trustworthy and secure AI
UK Research and Innovation (UKRI) has announced £50 million in funding to develop trustworthy and secure artificial intelligence (AI) that can help solve major challenges. The investment, which will bring together experts across different fields, was revealed during this year’s London Tech Week.

As part of the package, £31 million has been awarded to a group called Responsible AI UK (www.rai.ac.uk), led by the University of Southampton. Its aim is to create a UK and international research and innovation ecosystem for responsible and trustworthy AI that will be responsive to the needs of society.

Led by Professor Gopal Ramchurn, the consortium will help people understand what responsible and trustworthy AI is, how to develop it and build it into existing systems, and the impacts it will have on society.

Explaining what trustworthy AI means, Prof Ramchurn said: “Trustworthy AI tends to be looked at from a very technical perspective – ie it is tested and validated in well-defined settings. However, that doesn’t mean it will be trusted by the public, government, and industry.”

He added: “AI tends to be looked at by the tech community as AI that has been thoroughly tested. It can be AI that is trustworthy by the technical functionality of the application and the particular closed environments it has been tested in, but it is not trusted because maybe it uses personal data, you know, uses your personal data in ways that you would not want it to do.”

In addition, £2 million will be awarded to 42 projects to carry out feasibility studies in businesses as part of the BridgeAI programme. These will speed up the adoption of trusted and responsible AI and machine learning (ML) technologies.
The projects will look at developing a range of tools to facilitate assessment of AI technologies, and successful ones will go on to receive a share of an additional £19 million to develop these solutions further.

A further £13 million will be used to fund 13 projects to help the UK meet its net zero targets. Universities across the UK, from Edinburgh to Aberystwyth, and Leicester to Southampton, will lead these projects.

UKRI has also awarded two new Turing AI World Leading Researcher Fellowships, to Professor Michael Bronstein and Professor Alison Noble, both based at the University of Oxford.

Kedar Pandya, executive director of Cross-Council Programmes at the Engineering and Physical Sciences Research Council, said: “The UK’s expertise in the field of AI is a major asset to the country and will help develop the science and technology that will shape the fabric of many areas of our lives. That is why UKRI is continuing to invest in the people and organisations that will have wide-ranging benefit.

“For this to be successful we must invest in research and systems in which we can have trust and confidence, and ensure these considerations are integrated in all aspects of the work as it progresses. The projects and grants announced today will help us achieve this goal.”
2023-06-14 16:52