
New ChatGPT attack technique spreads malicious packages


A new cyberattack technique using OpenAI’s ChatGPT has emerged, allowing attackers to spread malicious packages in developer environments. Vulcan Cyber’s Voyager18 research team described the discovery in a statement published on Tuesday.

“We’ve seen ChatGPT generate URLs, references, and even code libraries and functions that don’t actually exist. These hallucinations of large language models (LLMs) have been reported before and may be the result of old training data,” explains the white paper by Vulcan Cyber researcher Bar Lanyado.

By exploiting ChatGPT’s tendency to fabricate code libraries (packages), attackers can distribute malicious packages without resorting to conventional techniques such as typosquatting or masquerading.

In particular, Lanyado said the team has identified a new technique for spreading malicious packages that they called “AI package hallucination”. The technique involves asking ChatGPT a question, requesting a package to solve a coding issue, and receiving various package recommendations, including some not published in legitimate repositories. By registering these non-existent package names with malicious code, attackers can trick future users who rely on ChatGPT recommendations.
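The gap the technique abuses is that an unpublished name returns a 404 from the public npm registry until someone claims it. A defender can check a recommendation the same way before installing it; the sketch below is illustrative (the package names and the injectable `status_for` helper are assumptions, but querying `registry.npmjs.org/<name>` for a status code is real registry behavior):

```python
# Sketch: verify that packages recommended by an LLM actually exist on npm
# before installing them. The public registry returns HTTP 404 for names
# that were never published -- exactly the gap an "AI package hallucination"
# attacker would later fill with a malicious upload.
from urllib.request import urlopen
from urllib.error import HTTPError

REGISTRY = "https://registry.npmjs.org/"

def registry_status(name: str) -> int:
    """Return the HTTP status for a package name on the public npm registry."""
    try:
        with urlopen(REGISTRY + name) as resp:
            return resp.status
    except HTTPError as err:
        return err.code

def is_published(name: str, status_for=registry_status) -> bool:
    """200 means the name is taken; 404 means it is unpublished and could
    be claimed by anyone -- including an attacker setting a trap."""
    return status_for(name) == 200

# Usage: vet every package an LLM answer suggested before running `npm install`.
recommended = ["express", "some-hallucinated-helper"]  # hypothetical LLM output
unvetted = [pkg for pkg in recommended if not is_published(pkg)]
```

Note that existence alone is not a safety guarantee: once the attacker has published the trap package, this check passes, which is why the advisory also recommends inspecting package metadata.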

A proof of concept (PoC) using ChatGPT 3.5 illustrates the potential risks involved. “In the PoC, we will see a conversation between an attacker and ChatGPT, using the API, where ChatGPT will suggest an unpublished npm package called arangodb,” explained the Vulcan Cyber team. “After that, the simulated attacker will publish a malicious package to the NPM repository to set a trap for an unsuspecting user.”

The PoC then shows a conversation where a user asks ChatGPT the same question and the model responds by suggesting the initially non-existent package. By this point, however, the attacker has turned the package into a malicious creation. “Finally, the user installs the package and the malicious code can be executed.”

Detecting AI package hallucinations can be challenging, as threat operators employ obfuscation techniques and create working trojan packages, according to the advisory. Developers should vet libraries before installing them by checking details such as creation date, download count, comments and attached notes. Remaining cautious and skeptical of suspicious packages is also crucial to maintaining software security.



Source: CisoAdvisor
