By now you have probably heard about ChatGPT, the new AI technology that is sweeping social media and was perhaps a topic of debate at this week’s holiday parties.
GPT-3 (Generative Pre-trained Transformer 3) is a state-of-the-art language processing AI model developed by the company OpenAI. This month they released their chatbot, ChatGPT, to the public as a beta test. ChatGPT is an easy, user-friendly web app: users type in a request, much as they would in a search engine, and the application produces a human-like text response. Requests range from “write me a song in the style of James Taylor” to “write me a book report for To Kill a Mockingbird” to “write me a sick note for my son’s school.” Nearly anything you can think of is possible, and ChatGPT will provide paragraphs of text in a matter of seconds.
It is hard to imagine where this “beta” technology will be in the next release, but cybersecurity experts are wary of what it could mean in the wrong hands.
Cybersafe at FIT took ChatGPT for a test drive. We entered “write me a cyber security blog post.” Not only did the app write a 484-word entry in under one minute, it also outlined the post ahead of time to stay organized. Here is an excerpt:
Information security is the practice of protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction. In today’s digital age, where nearly everything is connected to the internet, it is more important than ever to prioritize information security. The full blog post is here.
What are the possible implications for cyber security?
While the ChatGPT technology is a powerful tool for automating conversations, it of course has pros and cons. On the negative side, it could exacerbate cyber threats:
It can write phishing emails without typos. Poor spelling and confusing grammar are among the most telling traits of phishing emails. AI-generated phishing emails will be more realistic, better written, and harder to filter for.
It can write malware. Bleeping Computer tested this out and got inconsistent results: in some cases, ChatGPT caught the question as a violation of its policies, and in others it produced an answer. When ChatGPT does generate code or malware, it tends to be relatively simplistic or riddled with bugs, according to SC Media. It is important to note that since the release of ChatGPT, its developers have been fine-tuning their ethics policies to keep the new AI product from being used for malicious activity. Read the full Bleeping Computer article here.
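The phishing point above is worth a closer look. Many basic filters lean on exactly the tells that AI writing removes, such as misspellings. Here is a minimal sketch of that idea; the word list, sample emails, and threshold are all invented for illustration, not taken from any real filter:

```python
# Hypothetical illustration: a naive filter that flags emails containing
# many misspelled words. The dictionary and samples are made up for the sketch.

KNOWN_WORDS = {
    "dear", "customer", "your", "account", "has", "been", "suspended",
    "please", "verify", "identity", "to", "restore", "access",
}

def misspelling_ratio(text: str) -> float:
    """Fraction of words not found in the dictionary."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    unknown = [w for w in words if w and w not in KNOWN_WORDS]
    return len(unknown) / max(len(words), 1)

def looks_like_phishing(text: str, threshold: float = 0.3) -> bool:
    """Flag the message if too many words are misspelled."""
    return misspelling_ratio(text) >= threshold

# A classic typo-riddled phishing attempt trips the filter...
sloppy = "Dear custmer, your acount has ben suspnded. Plese verifiy your identiti."
# ...but a grammatically clean, AI-generated version sails through.
clean = ("Dear customer, your account has been suspended. "
         "Please verify your identity to restore access.")
```

Running `looks_like_phishing` on the two samples shows the gap: the sloppy message is flagged, while the clean, AI-written one passes. Real filters weigh many more signals, but spelling-based ones lose value as attackers adopt tools like ChatGPT.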
On the positive side, it could provide better cyber security defense:
Since ChatGPT is open to both the good guys and the bad guys, no one has a leg up. But cyber security researchers have focused on its capabilities, working to detect vulnerabilities before a hacker can exploit them.
ChatGPT relies on what’s known as reinforcement learning: the more it interacts with humans, the more it learns. This is a technology that will be in constant flux and growth in the months and years to come. The good news is that cyber security experts have a strong handle on how it can and could be used right now, and they will continue to do what they do best: defend against abuse of technology.
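The reinforcement-learning idea mentioned above can be sketched in a few lines: the system keeps a score for each candidate behavior and nudges the scores up or down based on feedback. This toy example is a drastic simplification of how models like ChatGPT actually train; the responses, rewards, and learning rate are all invented for illustration:

```python
# Toy sketch of reinforcement learning from feedback. A score is kept per
# candidate response and moved toward the reward each round. Everything
# here (responses, reward values) is hypothetical.

scores = {"helpful reply": 0.0, "unhelpful reply": 0.0}
LEARNING_RATE = 0.1

def update(response: str, reward: float) -> None:
    """Move the response's score a small step toward the observed reward."""
    scores[response] += LEARNING_RATE * (reward - scores[response])

def pick_best() -> str:
    """Prefer the response with the highest learned score."""
    return max(scores, key=scores.get)

# Simulated rounds of human feedback: thumbs-up (+1) for the helpful reply,
# thumbs-down (-1) for the unhelpful one.
for _ in range(50):
    update("helpful reply", +1.0)
    update("unhelpful reply", -1.0)
```

After enough rounds of feedback, `pick_best()` settles on the well-rated response, which is the essence of how human ratings steer the model’s behavior over time.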