Liputan6.com, Jakarta – Technology companies around the world now face major challenges as cyber attacks increase. Cybercriminals are becoming increasingly adept at abusing OpenAI's latest technology, ChatGPT.
Armed with ChatGPT, cybercriminals can more easily create malware, spread disinformation, evade detection, and launch spear-phishing attacks against specific targets.
OpenAI has successfully thwarted more than 20 cybercriminal operations abusing ChatGPT.
First Observed Activity
Early signs of ChatGPT being used for cybercrime were identified by Proofpoint last April, which suspected that the threat actor TA547 (aka Scully Spider) had deployed an AI-written loader for the Rhadamanthys malware.
Last month, researchers from HP Wolf Security found that cybercriminals targeting users in France were using AI to write scripts for part of a multi-step attack chain, one example of how AI can be used in malicious activity.
Quoting Bleeping Computer, Monday (14/10/2024), the first documented case of AI-assisted cybercrime was 'SweetSpecter', carried out by a China-based hacking group.
According to the Cisco Talos report, this group carried out spear-phishing attacks by sending malicious ZIP attachments disguised as support requests.
SweetSpecter used ChatGPT accounts to conduct scripting research and vulnerability analysis. Fortunately, OpenAI has blocked these accounts and shared related information with its cybersecurity partners.
OpenAI reports that SweetSpecter also targeted the company directly, sending spear-phishing emails with malicious ZIP attachments disguised as support requests to the personal email addresses of OpenAI employees.
AI: A Double-Edged Sword
While AI technology like ChatGPT can simplify the malware development process, this also highlights how important oversight and regulation are in the use of generative technology. The company founded by Sam Altman says it is committed to continuing to improve security.