A new study has found that third-party API routers that help connect users to large language models can be used by threat actors to trick users and inject malicious code in their devices and steal sensitive data.
Researchers from the University of California in Santa Barbara and San Diego and private companies analyzed 28 paid routers from online marketplaces and 400 free ones from public communities. They found that nine of the routers were actively inserting malicious code into responses, and 17 were capturing and misusing cloud credentials as they passed through. In one case, a hacked router even stole cryptocurrency from a researcher’s wallet after getting access to a private key.
Since routers sit between users and AI systems, they can see all unencrypted data like API keys and user prompts. That gives attackers a big opportunity to interfere.
One of the main attack methods used is called “payload injection.” Attackers quietly change parts of a command, for example, swapping a safe download link with a malicious one, without breaking how the system reads it. Because the command still looks valid, it can slip past security checks and run malicious code.
The researchers also described “adaptive evasion” tactics. Some routers waited until after many interactions before doing anything suspicious, or targeted riskier setups like autonomous coding sessions running in “YOLO mode,” where commands run automatically without user approval.
Furthermore, intentionally vulnerable “decoy” routers attracted thousands of attack attempts and led to many cases of stolen credentials during AI-driven development sessions.
The study says that while protections on the user side, such as better monitoring and stricter rules for running code, can help reduce risk, real long-term safety will need stronger protections from AI providers themselves. This includes making sure responses coming through third-party systems haven’t been tampered with.