Scientists warn of threat to internet from AI-trained AIs
Future generations of artificial intelligence chatbots trained using data from other AIs could lead to a downward spiral of gibberish on the internet, a new study has found.

Large language models (LLMs) such as ChatGPT have taken off on the internet, with many users adopting the technology to produce a whole new ecosystem of AI-generated text and images. But using the output of such AI systems to train subsequent generations of AI models could result in "irreversible defects" and junk content, according to a new, yet-to-be peer-reviewed study.

AI models like ChatGPT are trained on vast amounts of data pulled from internet platforms, data that until now has remained mostly human-generated. But AI-generated data from such models now has a growing presence on the internet.

Researchers, including those from the University of Oxford in the UK, set out to understand what happens when several successive generations of AIs are trained on each other's output. They found that the widespread use of LLMs to publish content on the internet at scale "will pollute the collection of data to train them" and lead to "model collapse".

"We discover that learning from data produced by other models causes model collapse – a degenerative process whereby, over time, models forget the true underlying data distribution," the scientists wrote in the study, posted as a preprint on arXiv.

The new findings suggest there is a "first mover advantage" when it comes to training LLMs. The scientists liken the change to what happens when AI models are trained on music created by human composers and played by human musicians: the AI output then trains other models, and the quality of the music steadily diminishes.

With subsequent generations of AI models likely to encounter poorer-quality data at the source, they may start misinterpreting information and inserting false information, a process the scientists call "data poisoning". They warn that the scale at which data poisoning can happen changes drastically with the advent of LLMs. Just a few iterations of training on generated data can lead to major degradation, even when the original data is preserved, the scientists said. Over time, these mistakes compound, forcing models that learn from generated data to misunderstand reality. "This in turn causes the model to misperceive the underlying learning task," the researchers said.

The scientists caution that steps must be taken to label AI-generated content so it can be distinguished from human-generated content, along with efforts to preserve original human-made data for future AI training.

"To make sure that learning is sustained over a long time period, one needs to make sure that access to the original data source is preserved and that additional data not generated by LLMs remain available over time," they wrote in the study. "Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that was crawled from the Internet prior to the mass adoption of the technology, or direct access to data generated by humans at scale."
2023-06-20 13:57
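The "forgetting" dynamic the researchers describe can be made concrete with a small thought experiment: fit a simple model to data, sample from it, refit on those samples, and repeat. The Python sketch below is a hypothetical toy illustration of that feedback loop, not the experimental setup from the study. A model repeatedly refit only on its own samples tends to lose the rare categories in the original data, and once a category drops to zero it can never reappear.

```python
# Toy illustration of "model collapse": repeatedly fit a categorical
# distribution to samples drawn from the previous generation's model.
# Rare categories that happen to receive zero samples vanish permanently,
# so later generations forget the tails of the original distribution.
# Hypothetical sketch for intuition only; not the paper's actual experiments.
import numpy as np

rng = np.random.default_rng(42)

# "Human" data: 20 categories with a long-tailed frequency distribution
true_probs = np.array([1.0 / (k + 1) for k in range(20)])
true_probs /= true_probs.sum()

probs = true_probs.copy()
for generation in range(1, 16):
    # Each generation "trains" on 200 samples produced by the previous model
    counts = rng.multinomial(200, probs)
    probs = counts / counts.sum()        # maximum-likelihood refit
    surviving = int((probs > 0).sum())   # categories still represented
    print(f"gen {generation:2d}: {surviving} of 20 categories survive")
```

Running the loop shows the number of surviving categories only ever shrinking from generation to generation, which is the toy analogue of models "forgetting the true underlying data distribution" when trained on each other's output.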
