
ChatGPT and other chatbots ‘can be tricked into making code for cyber attacks’

2023-10-24 23:23

Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code which could be used to launch cyber attacks, according to research.

A study by researchers from the University of Sheffield’s Department of Computer Science found that it was possible to manipulate chatbots into creating code capable of breaching other systems.

Generative AI tools such as ChatGPT can create content based on user commands or prompts and are expected to have a substantial impact on daily life as they become more widely used in industry, education and healthcare.

But the researchers warned that these tools contain vulnerabilities, saying they were able to trick the chatbots into helping to steal sensitive personal information, tamper with or destroy databases, or bring down services with denial-of-service attacks.

In all, the university study found vulnerabilities in six commercial AI tools – of which ChatGPT was the most well-known.

On Chinese platform Baidu-Unit, the scientists were able to use malicious code to obtain confidential Baidu server configurations and to tamper with one server node.

In response, Baidu recognised the research, fixed the reported vulnerabilities and financially rewarded the scientists, the university said.

Xutan Peng, a PhD student at the University of Sheffield, who co-led the research, said: “In reality many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood.

“At the moment, ChatGPT is receiving a lot of attention. It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services.”

The researchers also warned that people using AI to learn programming languages could be at risk, as they may inadvertently create damaging code.

“The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than a conversational bot, and this is where our research shows the vulnerabilities are,” Peng said.

“For example, a nurse could ask ChatGPT to write an SQL (programming language) command so that they can interact with a database, such as one that stores clinical records.

“As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.”
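
The kind of failure Peng describes can be sketched with a hypothetical exchange. Suppose the nurse asks for a command to “archive this patient’s record”: a statement missing its WHERE clause, like the first one below, would silently modify every row in the table, while the corrected second statement touches only the intended record. The table and column names (clinical_records, archived, patient_id) are invented here for illustration and are not taken from the study.

```sql
-- Hypothetical illustration only: table and column names are invented.
-- A generated statement that omits the WHERE clause archives every record.
UPDATE clinical_records SET archived = 1;

-- The intended statement limits the change to a single patient's record.
UPDATE clinical_records SET archived = 1 WHERE patient_id = 1042;
```

A simple precaution in this scenario is to run generated statements inside a transaction, or against a test copy of the database, and check the number of affected rows before committing the change.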

The UK will host an AI Safety Summit next week, with the Government inviting world leaders and industry giants to come together to discuss the opportunities and safety concerns around artificial intelligence.
