
Bugs in ChatGPT plugins open the door to account takeover

 

An analysis of ChatGPT plugins by API security firm Salt Security found several types of vulnerabilities that could have been exploited by threat actors to obtain sensitive data and take over accounts on other sites.

ChatGPT plugins allow users to access up-to-date information (rather than the relatively old data the chatbot was trained on), as well as integrate ChatGPT with third-party services. For example, plugins can allow users to interact with their GitHub or Google Drive accounts. However, when a plugin is used, ChatGPT needs permission to send the user’s data to a website associated with the plugin, and the plugin may need access to the user’s account on the service they are interacting with.

The first vulnerability identified by Salt Security directly affected ChatGPT itself and was related to OAuth (open authorization). An attacker who tricked a victim into clicking a specially crafted link could install a malicious plugin with the attacker's own credentials on the victim's account, without the victim ever being asked to confirm the installation. As a result, every message the victim typed, including messages containing credentials or other sensitive data, would be forwarded to the plugin, and therefore to the attacker.
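The article does not publish OpenAI's code, but the behavior described matches a well-known OAuth anti-pattern: a callback that accepts any authorization code without proof that the logged-in user started the flow. The sketch below is a minimal, hypothetical illustration in Python (Flask), with placeholder helper and route names; it is not OpenAI's implementation.

```python
# Hypothetical sketch, not OpenAI's actual code: a plugin-install OAuth callback
# that trusts any authorization code it receives. Without a per-user "state"
# check, an attacker can complete the flow with their own credentials, embed the
# resulting code in a link, and have the victim's click silently bind the
# attacker-controlled plugin credential to the victim's account.
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "example-only"  # placeholder session key for the sketch

def exchange_code_for_token(code):
    # Placeholder for the real code-for-token exchange with the plugin's server.
    return {"access_token": f"token-for-{code}"}

def install_plugin_for_user(user_id, credentials):
    # Placeholder for the step that attaches the plugin to the user's account.
    print(f"installed plugin for {user_id} with {credentials}")

@app.route("/plugin/oauth/callback")
def vulnerable_callback():
    code = request.args.get("code")
    # BUG: nothing proves the logged-in user started this flow, so a crafted
    # link carrying an attacker-supplied code is accepted silently.
    install_plugin_for_user(session.get("user_id", "victim"),
                            exchange_code_for_token(code))
    return "Plugin installed"

@app.route("/plugin/oauth/callback-fixed")
def fixed_callback():
    code, state = request.args.get("code"), request.args.get("state")
    # FIX: the state value issued when *this* user began the flow must match,
    # which rejects links forged by a third party.
    if not state or state != session.get("oauth_state"):
        return "OAuth state mismatch", 403
    install_plugin_for_user(session.get("user_id", "victim"),
                            exchange_code_for_token(code))
    return "Plugin installed"
```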

The second vulnerability was found in AskTheCode, a plugin developed by PluginLab.AI that lets users interact with their GitHub repositories. The flaw could have allowed an attacker to take control of a victim's GitHub account and access their code repositories through a zero-click exploit.
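The article does not explain the root cause of the zero-click takeover. A pattern that commonly produces this class of bug in plugin authentication backends is an endpoint that issues OAuth codes for whatever account identifier the caller supplies, without verifying that the caller owns that account. The sketch below is purely illustrative, with hypothetical endpoint and field names, and is not a description of PluginLab.AI's actual service.

```python
# Illustrative only: an authorization endpoint that hands out codes for any
# account ID it is given. An attacker who knows or guesses a victim's ID gets a
# code for the victim's linked GitHub access with no action from the victim.
from flask import Flask, request, jsonify
import secrets

app = Flask(__name__)
ISSUED_CODES = {}  # code -> account ID the code grants access to

def authenticate_request(req):
    # Placeholder: a real service would validate a session cookie or bearer
    # token here and return the caller's own account ID.
    return req.headers.get("X-Member-Id")

@app.route("/oauth/authorize", methods=["POST"])
def vulnerable_authorize():
    member_id = (request.get_json(silent=True) or {}).get("member_id")
    # BUG: nothing proves the caller *is* this member.
    code = secrets.token_urlsafe(16)
    ISSUED_CODES[code] = member_id
    return jsonify({"code": code})

@app.route("/oauth/authorize-fixed", methods=["POST"])
def fixed_authorize():
    member_id = (request.get_json(silent=True) or {}).get("member_id")
    caller = authenticate_request(request)
    # FIX: only issue a code for the account the authenticated caller owns.
    if caller is None or caller != member_id:
        return jsonify({"error": "forbidden"}), 403
    code = secrets.token_urlsafe(16)
    ISSUED_CODES[code] = member_id
    return jsonify({"code": code})
```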

The third vulnerability was also related to OAuth and affected several plugins, but Salt demonstrated its findings in a plugin called Charts by Kesem AI. An attacker who managed to trick a user into clicking a specially crafted link could have taken control of the victim’s account associated with the plugin.
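Again, the exact flaw is not spelled out in the article, but account takeover via a crafted link in an OAuth flow often hinges on an authorization endpoint that redirects the newly issued code to whatever redirect_uri the link contains, instead of a pre-registered one. The following Python sketch, with hypothetical names, contrasts that behavior with an allow-list check; it is not a description of the Charts plugin's actual code.

```python
# Illustrative sketch: sending the authorization code to an unvalidated
# redirect_uri lets a crafted link deliver the victim's code to the attacker.
from urllib.parse import urlencode
import secrets

# Redirect URIs registered in advance for each client (hypothetical data).
REGISTERED_REDIRECTS = {"plugin-client": {"https://plugin.example/callback"}}

def vulnerable_authorize(client_id: str, redirect_uri: str) -> str:
    # BUG: the code goes wherever the link says, so redirect_uri can point
    # at an attacker-controlled host that captures the victim's code.
    code = secrets.token_urlsafe(16)
    return f"{redirect_uri}?{urlencode({'code': code})}"

def fixed_authorize(client_id: str, redirect_uri: str) -> str:
    # FIX: only redirect to URIs registered for this client ahead of time.
    if redirect_uri not in REGISTERED_REDIRECTS.get(client_id, set()):
        raise ValueError("redirect_uri not registered for this client")
    code = secrets.token_urlsafe(16)
    return f"{redirect_uri}?{urlencode({'code': code})}"

if __name__ == "__main__":
    # A crafted link points the code at an attacker-controlled host:
    print(vulnerable_authorize("plugin-client", "https://attacker.example/steal"))
    try:
        fixed_authorize("plugin-client", "https://attacker.example/steal")
    except ValueError as err:
        print("rejected:", err)
```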

The vulnerabilities were reported to OpenAI, PluginLab.AI, and Kesem AI shortly after their discovery in the summer of 2023, and the vendors released patches over the following months.

Salt Security said it also found vulnerabilities in other GPTs and plans to detail them in an upcoming blog post.

 


Source: CisoAdvisor, Salt Security
