A team of researchers at the University of Texas at Arlington has created software to prevent artificial intelligence (AI) chatbots from generating phishing websites.
This development addresses a growing concern as cybercriminals increasingly use AI technology to design scams.
The software was developed by Shirin Nilizadeh, an assistant professor in the Department of Computer Science and Engineering, along with her doctoral students Sayak Saha Roy and Poojitha Thota.
Their tool enhances AI chatbots’ ability to detect and block user instructions that could be used to create phishing sites.
While AI chatbots such as ChatGPT have some built-in safeguards, Nilizadeh’s team found loopholes that attackers can exploit to create phishing sites. With the rise of AI chatbots, building online scams has become easier, even for people without technical skills, since AI can generate websites quickly.
“These tools are very powerful, and we are showing how they can be misused by attackers,” Nilizadeh explained.
To create their software, the team first identified various prompts that could be used to create phishing websites, said Saha Roy. Using this information, they trained their software to recognize and respond to these specific keywords and patterns, improving its ability to detect and block malicious prompts.
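The article does not disclose the team’s actual detection method or prompt dataset, but the general idea it describes, flagging prompts that match known malicious keywords and patterns before a chatbot acts on them, can be sketched in a few lines. The pattern list below is entirely hypothetical and illustrative only:

```python
import re

# Hypothetical patterns suggestive of phishing-site requests.
# The researchers' real prompt set and model are not public in this article.
SUSPICIOUS_PATTERNS = [
    r"\blogin page\b.*\b(paypal|bank|amazon)\b",
    r"\bclone\b.*\bwebsite\b",
    r"\bcapture\b.*\b(password|credential)s?\b",
    r"\bphishing\b",
]

def is_malicious_prompt(prompt: str) -> bool:
    """Return True if the prompt matches any suspicious pattern."""
    text = prompt.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

def guard(prompt: str) -> str:
    """Block flagged prompts instead of passing them to the chatbot."""
    if is_malicious_prompt(prompt):
        return "Request blocked: possible phishing-site generation."
    return "Request allowed."
```

A real system would go well beyond fixed regular expressions (for example, a trained classifier over many prompt variations), but the guard-before-generate structure is the same.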
The team’s work has gained significant attention in the cybersecurity community, highlighted by their presentation at the IEEE Symposium on Security and Privacy (IEEE S&P 2024) in May, where the research received a Distinguished Paper Award, underscoring the importance of their findings.
“I want people to be receptive to our work and see the risk,” Saha Roy said. “It starts with the security community and trickles down from there.”
The researchers have reached out to major tech companies, including Google and OpenAI, to integrate their findings into broader AI security strategies. Both Saha Roy and Thota are committed to extending the impact of their research on cybersecurity.
“I’m really happy that I was able to work on this important research,” Thota said. “I’m also looking forward to sharing this work with our colleagues in the cybersecurity space and finding ways to further our work.”
This software represents a significant step forward in protecting against AI-generated phishing scams, helping ensure that AI technology is used responsibly and securely.