Fraudulent Versions of ChatGPT Come to the Dark Web

In our December 2022 issue, we discussed the emerging AI technology ChatGPT. Later, in our July 2023 issue, we wrote about how cybercriminals were leveraging AI to circumvent phishing filters, write malware, and weaponize trusted AI technologies. This month we highlight how cybercriminals have further hijacked these rising technologies for malicious ends on the dark web—a part of the internet that is not indexed by search engines, requires specialized software to access, and makes tracking user activity and geolocation difficult.

Cybercriminals, lurking anonymously on the dark web, have created and are now using GPT-derived tools such as WormGPT and FraudGPT. WormGPT writes authentic-looking emails intended to launch business email compromise attacks within organizations. “It’s similar to ChatGPT but has no ethical boundaries or limitations,” wrote Daniel Kelley, a cybersecurity expert at SlashNext and a reformed black hat hacker, in SlashNext’s report on the tool. Because these tools are hosted on the dark web, their usage is nearly impossible to track, and they remain beyond the reach of government regulation.

FraudGPT, most likely created by the same criminal group as WormGPT, is a bot that can use a brute-force attack to steal credit card information by testing possible combinations of missing details. For example, if the stolen data includes a 15- or 16-digit card number but not the owner’s zip code, FraudGPT will try thousands of zip codes until it finds a match. Criminals then often charge small amounts so that, if the fraud succeeds, it goes undetected.

Recently, public generative-AI tools like OpenAI’s popular ChatGPT have implemented safeguards to keep their products from being used for nefarious purposes. The U.S. government is also tracking the issue: as recently as this month, the Biden administration announced that eight technology companies—many of them leaders in AI development—had signed on to a set of “voluntary commitments” that put security first and promote responsible AI development.

What can you do to stay safe against AI-related threats?

  • Continue to report potential security incidents by:
    • Emailing [email protected].
    • Opening a ticket at techhelp.fitnyc.edu.
    • Reporting suspicious emails to Google.
  • At home, make sure you have anti-virus software installed on all your computers and laptops and keep it up to date. Also, log in to your home router’s WiFi settings and ensure that your security level is the highest available, commonly WPA2.

  • Make all your passwords long, strong, and unique, and enroll in multifactor authentication on all of your online accounts, especially for your finances.
  • Closely monitor your financial accounts for anomalies and suspicious transactions, and consider setting up automated alerts for transactions both large and small.

About Cybersafe

The Division of Information Technology is dedicated to protecting the FIT community from the latest cybersecurity threats by providing warnings and creating awareness through training and information-sharing. Visit fitnyc.edu/cybersafe for more information. And stay tuned for emails from [email protected] for the latest from the Cybersafe campaign at FIT.