Cyber Threat Actors Are Using ChatGPT to Code Deployable Malware
Hackers are using ChatGPT, OpenAI's AI chatbot, to write malicious code and deploy malware.
According to reports, less experienced cybercriminals are using ChatGPT to easily create malware strains for a variety of cybercrimes. Hackers are also using the AI tool to set up dark web sites, steal personal files, harvest bank account credentials, and prepare other fraudulent schemes.
ChatGPT can provide step-by-step instructions that hackers use to replicate malware and ransomware strains. In one recent experiment, cybersecurity researchers ethically hacked a website in under 45 minutes using a hacking script generated by ChatGPT.
“Just as it can be used for good to assist developers in writing code for good, it can (and already has) been used for malicious purposes,” said Matt Psencik, director of endpoint security specialists at Tanium.
“A couple of examples I’ve already seen are asking the bot to create convincing phishing emails or assist in reverse engineering code to find zero-day exploits that could be used maliciously instead of reporting them to a vendor,” he added.
Hackers are exploiting ChatGPT to create malicious scripts for use in cybercrimes. The resulting files are then sold and shared on the dark web and in other underground community forums.
When reporters asked OpenAI for clarification, the company said: “Threat actors may use artificial intelligence and machine learning to carry out their malicious activities. OpenAI is not responsible for any abuse of its technology by third parties.”
“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system,” they added.