Unknown threat actors are leveraging generative AI to create convincing phishing websites, according to new research from Okta Threat Intelligence. The attackers have been observed weaponizing v0, a tool developed by Vercel that allows users to build landing pages and web apps from simple natural language prompts.
The malicious actors used v0.dev to create fake login pages mimicking legitimate brands, including at least one Okta customer, and hosted the impersonated companies’ logos and other assets on Vercel’s infrastructure to lend the scams further credibility. Vercel has since blocked access to the identified phishing sites following responsible disclosure.
Unlike traditional phishing kits, which demand at least some technical skill, tools like v0 and the similar open-source clones available on GitHub let attackers generate fake web pages quickly and with minimal effort, lowering the barrier for even low-skilled cybercriminals to launch convincing phishing campaigns at scale.
The findings come as Cisco Talos warned in a recent report that cybercriminals are increasingly turning to artificial intelligence, particularly large language models (LLMs), to enhance their operations: some use uncensored or custom-built LLMs for illicit purposes, while others bend legitimate AI tools to malicious ends through jailbreaking techniques.
These malicious AI systems are often connected to external tools for tasks such as sending spam emails, scanning for vulnerabilities, and verifying stolen credit card data, making cybercrime both more efficient and harder to detect.
Cybercrooks are using generative AI not just to write phishing emails but to build entire attack infrastructures, turning to uncensored large language models such as WhiteRabbitNeo, which openly markets itself as a tool for offensive cybersecurity tasks.