
OpenAI Stops 20 Global Campaigns Using AI for Cybercrime and Disinformation


On Wednesday, OpenAI announced it had successfully disrupted over 20 malicious operations and deceptive networks worldwide that had attempted to misuse its platform since the beginning of the year.

These activities included debugging malware, writing content for websites, generating biographies for social media, and creating AI-generated profile pictures for fake accounts on platforms like X.

“While threat actors continue to adapt and experiment with our models, there is no evidence indicating any significant breakthroughs in their ability to create new malware or gain viral traction,” the AI company stated.

OpenAI also noted that it halted operations generating social media content linked to elections in the U.S., Rwanda, and, to a lesser extent, India and the European Union. However, none of these networks managed to attract widespread engagement or build a lasting audience.

One highlighted operation involved an Israeli commercial company, STOIC (also known as Zero Zeno), which generated social media content about the Indian elections. This was previously reported by Meta and OpenAI in May.

Some of the cyber campaigns OpenAI detailed include:

  • SweetSpecter: A suspected China-based adversary using OpenAI’s models for reconnaissance, vulnerability research, scripting, and evasion techniques. The group also attempted, unsuccessfully, to spear-phish OpenAI employees with the SugarGh0st RAT.
  • Cyber Av3ngers: A group tied to the Iranian Islamic Revolutionary Guard Corps (IRGC), which used OpenAI’s models for researching programmable logic controllers.
  • Storm-0817: Another Iranian actor that leveraged OpenAI’s models to debug Android malware, scrape Instagram profiles using Selenium, and translate LinkedIn profiles into Persian (a generic sketch of Selenium-based scraping appears after this list).
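
For context on the tooling named in the last item, here is a minimal, generic sketch of what Selenium-based page scraping looks like. It is an illustration only: the URL and selector below are hypothetical placeholders, not details from OpenAI’s report on Storm-0817.

```python
# Generic Selenium sketch: open a public page headlessly and read one element.
# The URL and selector are hypothetical placeholders for illustration only.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/profile/some-user")  # placeholder URL
    # Real pages require page-specific selectors; <h1> is just a stand-in.
    heading = driver.find_element(By.TAG_NAME, "h1")
    print(heading.text)
finally:
    driver.quit()  # always release the browser process
```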

OpenAI also took action against clusters of accounts, including those behind the influence campaigns A2Z and Stop News, which posted content in English and French across websites and social media. Notably, Stop News frequently used imagery generated by DALL·E, featuring vibrant, cartoon-like images to attract attention.

Additionally, networks such as Bet Bot and Corrupt Comment were discovered using OpenAI’s API to engage with users on X, leading them to gambling sites or generating comments for posting on the platform.

This disclosure follows OpenAI’s earlier ban of accounts tied to Storm-2035, an Iranian influence operation using ChatGPT to create content focusing on the upcoming U.S. presidential election.

“Typically, threat actors utilized our models in intermediate phases of their operations — after obtaining basic tools like internet access and social media accounts but before launching their ‘finished’ products, such as social media posts or malware,” OpenAI researchers Ben Nimmo and Michael Flossman wrote.

Amid growing concern over the misuse of generative AI for fraud and deepfake operations, cybersecurity firm Sophos warned in a report published last week that the technology could also be exploited to spread highly targeted misinformation through personalized emails.

This involves leveraging AI models to fabricate political campaign websites, create AI-generated personas across various political ideologies, and craft emails that microtarget individuals based on campaign messaging. Such automation opens the door for misinformation to be disseminated on a larger scale.

“With minimal adjustments, users could produce anything from legitimate campaign materials to deliberate misinformation and harmful threats,” researchers Ben Gelman and Adarsh Kyadige explained.

They also noted, “It’s possible to link any political movement or candidate to policies they may not support. Such intentional misinformation could lead people to align with candidates they wouldn’t typically support or distance themselves from those they thought they agreed with.”

Source: The Hacker News
