In December 2022, our CISO Update discussed an emerging AI technology, ChatGPT. Only seven months later, ChatGPT and dozens of other consumer AI apps are readily available and widely used. This game-changing technology has hit the consumer market for designers, writers, coders, and content creators with staggering speed. From early on, cybersecurity experts have kept a close eye on generative AI and how it could be exploited by the wrong people. This month's update brings you up to speed on how cyber actors are abusing AI to their advantage.

AI-Generated Phishing: No More Typos!

"Phishing messages always have typos and grammar mistakes," you say? Not anymore. Cyber actors can now craft realistic phishing messages by feeding someone's personal or company data into an AI tool and asking it to write an email. We tested this ourselves with Bard. The resulting email was clear, with perfect spelling and grammar, and it even inserted the name of a random staff member without our having to provide one. Read more about this here.

BlackMamba

Named after a venomous snake, BlackMamba is a new AI-powered malware strain. It evades antivirus software by being fully polymorphic: its code changes every time the malware executes. This malware, which for now tends to target financial institutions in Europe, steals sensitive information with a keylogger and then uses a command-and-control server to report the captured keystrokes back to the cybercriminal. Read more about this here.
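For the technically curious, the "fully polymorphic" idea above can be illustrated with a harmless sketch (the toy payload and helper names below are hypothetical, not real malware): the same underlying payload is re-encoded with a fresh random key on every run, so its bytes, and therefore its file hash, differ between copies even though decoding always recovers identical content. This is exactly why signature-based antivirus struggles against polymorphic code and why behavior-based detection matters.

```python
import hashlib
import os

# Stand-in for a real payload; in actual malware this would be malicious code.
PAYLOAD = b"print('hello')"

def polymorphic_encode(payload: bytes) -> bytes:
    """XOR-encode the payload with a random single-use key.

    Each call produces different bytes for the same payload, so a
    signature (hash) computed over the encoded form rarely repeats.
    """
    key = os.urandom(1)[0]
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(blob: bytes) -> bytes:
    """Recover the original payload; the first byte is the key."""
    key = blob[0]
    return bytes(b ^ key for b in blob[1:])

a = polymorphic_encode(PAYLOAD)
b = polymorphic_encode(PAYLOAD)

# The two encoded copies almost certainly hash differently...
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())
# ...yet both decode to the exact same payload and behavior.
print(decode(a) == decode(b) == PAYLOAD)  # True
```

A scanner matching known byte signatures sees two "different" files here; a behavior-based tool, like the one FIT deploys, instead watches what the decoded code does when it runs.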
Weaponized Video AI

Generative video AI applications like Filmora, Lucas, and Runway have been exploited by cybercriminals to deliver malware through trusted social media platforms like YouTube. The videos lure people in by offering a free version of Adobe Photoshop, Autodesk, or other products that are usually licensed and expensive; an unknowing victim clicks the link in the video description, and an information stealer is installed on the victim's computer. People are fooled because the video, generated by a video AI application, looks realistic and professional. Read more about this here.

What can you do to stay safe against these new and evolving AI-related threats?

At FIT we use state-of-the-art intrusion-detection security solutions as well as a behavior-based antivirus solution on all FIT computing assets. At home, you can protect yourself too:

- Install antivirus software on all your computers and laptops and keep it up to date.
- Enable multifactor authentication on all your online accounts, especially financial ones.
- Always inspect the sender's email address for inconsistencies or suspicious elements.
- Refrain from clicking on unknown links, not only in emails but also on social media platforms.
- Never download or use pirated software.

As AI tools continue to grow in popularity, so will the ability of malicious actors to create new types of attacks, and staying current on these threats is important. Make sure to take the "FIT Is Cybersafe" online training, which is now available year round. If you did not have an opportunity to take it during the spring semester, it's not too late: the training takes 30 minutes, and you can stop and restart it as many times as you need until it is completed. It's a great resource for learning about new and evolving threats. Log in to FIT Is Cybersafe training here.
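One tip above, inspecting the sender's email address, can even be sketched in code. The snippet below is a hypothetical illustration (the allow-list, brand keywords, and helper name are made up for this example, not an FIT tool): it flags a classic phishing trick in which the friendly display name claims to be a trusted organization while the actual address belongs to an unrelated domain.

```python
from email.utils import parseaddr

# Hypothetical values for this sketch: the real domain we trust, and the
# brand name attackers like to put in the display name to impersonate it.
TRUSTED_DOMAINS = {"fitnyc.edu"}
TRUSTED_BRANDS = {"fit"}

def looks_suspicious(from_header: str) -> bool:
    """Flag a From: header whose display name impersonates a trusted
    sender while the actual address comes from an untrusted domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    claims_trusted = any(b in display_name.lower() for b in TRUSTED_BRANDS)
    return claims_trusted and domain not in TRUSTED_DOMAINS

print(looks_suspicious("FIT Help Desk <helpdesk@fitnyc.edu>"))        # False
print(looks_suspicious("FIT Help Desk <support@fitnyc-alerts.xyz>"))  # True
```

The same check your eyes perform, "does the name match the address?", is all this code does; no tool replaces the habit of pausing to read the sender line before clicking.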