ChatGPT made a splash with its user-friendly interface and believable AI-generated responses. With a single prompt, ChatGPT provided detailed answers that other AI assistants had not achieved. Trained on a massive dataset, it could address a breadth and variety of topics that quickly amazed the tech industry and the public.
However, the technology's sophistication raises an inevitable question: what are the drawbacks of ChatGPT and similar technologies? With the ability to generate a multitude of realistic responses, ChatGPT could be used to produce content capable of tricking an unsuspecting reader into thinking a real human is behind it.
Understanding ChatGPT
When you think of AI assistants, Google Assistant and Alexa may come to mind first. You ask a question like “What’s the weather today?” and get a brief answer. Although these assistants offer some conversational features, most don’t produce long-form AI-generated text that can be expanded on through further interaction.
ChatGPT takes a complex prompt and generates a full response, potentially spanning multiple paragraphs. It remembers the prior conversation and builds on it when asked further questions, providing more detailed responses.
An example is shown below: ChatGPT lists instructions to bake a chocolate cake.
ChatGPT providing instructions on baking a cake
Source: Specops
This conversation can range from a simple ingredient list to a full-fledged short story. However, ChatGPT may decline to respond to certain topics due to built-in restrictions and the limits of its training data.
Additionally, it is important to note that just because a response is generated does not mean it is accurate. ChatGPT does not “think” like we do; it generates responses based on patterns in the data it was trained on. As a result, incorrect data and the biases embedded in it can lead to inaccurate or unexpected answers.
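For readers curious about the mechanics, below is a minimal sketch of how a ChatGPT-style exchange works programmatically: the full conversation history is sent with every request, which is why the model appears to remember earlier turns. This assumes the OpenAI Python package and an API key; the model name and prompts are placeholders for illustration.

```python
# Minimal sketch of a multi-turn ChatGPT-style exchange via the OpenAI API.
# Assumptions: the `openai` Python package (v1.x) is installed and the
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The conversation so far is resubmitted with every request; that is how the
# model "remembers" earlier turns and builds on them.
messages = [{"role": "user", "content": "How do I bake a chocolate cake?"}]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(first.choices[0].message.content)

# A follow-up question carries the prior exchange as context.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Can you make that recipe gluten-free?"})

follow_up = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(follow_up.choices[0].message.content)
```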
The Risks of AI Assistants
As alluded to above, not all responses are accurate. But that is not the only risk: ChatGPT's responses can be convincing enough to deceive a reader into believing a person wrote them.
This opens the door to the weaponization of the technology by threat actors.
Many phishing emails are easily recognizable, particularly when written by non-native speakers. However, ChatGPT could make crafting convincing phishing emails significantly easier, as noted in this article by Check Point.
The speed of generation and response quality opens the door to much more believable phishing emails and even simple exploit code generation.
Furthermore, because the underlying language-model technology is widely available, an enterprising individual could train a model on a dataset of existing company emails to create a tool that quickly and easily produces convincing phishing emails.
ChatGPT writing a password reset request from Microsoft
Source: Specops
Phishing emails can provide threat actors with a way to gain access to your system and deploy malware or ransomware, potentially causing serious damage. As ChatGPT continues to improve, it will become an increasingly powerful tool for malicious actors.
What about a non-native speaker attempting to convince an employee, through conversation, to give up their credentials?
Though the existing ChatGPT interface has protections against requests for sensitive information, you can see how the AI model helps inform the flow and dialog a Microsoft help desk technician might use, lending authenticity to a fraudulent request.
Impersonating Microsoft Help Desk
Source: Specops
Protecting Against ChatGPT Abuse
Have you received a suspicious email and wondered whether it was written entirely or partially by ChatGPT? Fortunately, there are tools that can help. These tools report a probability rather than a definitive verdict, so they cannot provide absolute certainty. However, they can raise reasonable doubt and help you ask the right questions to determine whether the email is genuine.
For example, after pasting in the content of the previously created password reset email, the tool GPTZero highlights the sections that may have been generated by an AI. This is not foolproof detection, but it may give you pause and prompt you to question the authenticity of a given piece of content.
Detecting AI-generated content
Source: Specops
You can use this tool to help identify AI-generated text: https://gptzero.me/.
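If you want to fold this kind of check into an email-triage workflow, the sketch below shows one way it might look. The endpoint URL, authentication header, and response field are illustrative assumptions rather than GPTZero's documented API, so consult the detection service's documentation for the real contract.

```python
# Hedged sketch: flag an email body that a detection service scores as likely
# AI-generated. The endpoint, header, and response field names are hypothetical
# placeholders, not a documented API.
import requests

DETECTION_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint


def looks_ai_generated(text: str, api_key: str, threshold: float = 0.5) -> bool:
    """Return True if the detector's probability score meets the threshold."""
    response = requests.post(
        DETECTION_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    score = response.json().get("ai_probability", 0.0)  # hypothetical field name
    return score >= threshold


suspicious_email = (
    "Your Microsoft password is about to expire. "
    "Click the link below to reset it immediately."
)
if looks_ai_generated(suspicious_email, api_key="YOUR_API_KEY"):
    print("Flagged as likely AI-generated; verify the sender before acting.")
```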
Social Engineering on the Rise with ChatGPT
From fake support requests to caller ID spoofing, and now even scripting with ChatGPT, the internet is full of resources that help drive successful social engineering schemes. Threat actors are advancing social engineering attacks by combining multiple attack vectors, using ChatGPT alongside other social engineering methods.
ChatGPT can help attackers create a more convincing fake identity, making their attacks more likely to succeed.
The Future of ChatGPT and Securing Your Users
ChatGPT is a game-changer, providing an easy-to-use and powerful tool for AI-generated conversations. While there are numerous potential applications, organizations should be aware of how attackers can use it to improve their tactics and of the additional risks it poses.
Source: BleepingComputer