Chinese and Iranian hackers use ChatGPT and LLM tools to create malware and phishing attacks — OpenAI report has recorded over 20 cyberattacks created with ChatGPT



If there is one sign that AI can cause more trouble than it is worth, OpenAI has confirmed that more than twenty cyberattacks were carried out with the help of ChatGPT. According to the report, generative AI was used to conduct spear-phishing campaigns, debug and develop malware, and carry out other malicious activity.

The report details three such cyberattacks built with ChatGPT. Cisco Talos reported the first in November 2024: Chinese threat actors targeted Asian governments with a spear-phishing campaign dubbed ‘SweetSpecter,’ which delivered a ZIP archive containing a malicious file that, once downloaded and opened, set off an infection chain on the victim’s system. OpenAI found that the attackers behind SweetSpecter used multiple ChatGPT accounts to develop scripts and research vulnerabilities with the LLM’s help.

The second AI-enhanced cyberattack came from an Iran-based group called ‘CyberAv3ngers,’ which used ChatGPT to exploit vulnerabilities and steal user passwords from macOS machines. The third, led by another Iran-based group, Storm-0817, used ChatGPT to develop Android malware that stole contact lists, extracted call logs and browser history, obtained the device’s precise location, and accessed files on infected devices.

All of these attacks relied on existing techniques, and according to the report there is no indication that ChatGPT produced substantially novel malware. Even so, they show how easily threat actors can trick generative AI services into building malicious attack tools, lowering the bar for anyone with basic knowledge and harmful intent. While security researchers continue to find and report such abuse paths so they can be patched, attacks like these strengthen the case for discussing limits on what generative AI will help produce.

For now, OpenAI says it will continue improving its models to prevent this kind of misuse, working with its internal safety and security teams. The company also said it will keep sharing its findings with industry peers and the research community to help stop similar abuse.

Although these incidents involve OpenAI, it would be shortsighted for other major players running their own generative AI platforms not to deploy similar protections. Given how difficult such attacks are to stop once underway, AI companies need safeguards that prevent abuse in the first place rather than remedies applied after the damage is done.



Source link
