
New AI Code Vulnerability Exposes Millions to Potential Cyberattacks
Researchers at Pillar Security have uncovered a significant vulnerability in GitHub Copilot and Cursor, two widely used AI-powered coding assistants.
In a rush? Here are the quick facts:
- Hackers can exploit AI coding assistants by injecting hidden instructions into rule files.
- The attack uses hidden Unicode characters to trick AI into generating compromised code.
- Once infected, rule files spread vulnerabilities across projects and survive software updates.
Dubbed the “Rules File Backdoor,” this new attack method allows hackers to embed hidden malicious instructions into configuration files, tricking AI into generating compromised code that can bypass standard security checks.
Unlike traditional attacks that exploit known software vulnerabilities, this technique manipulates the AI itself, making it an unwitting tool for cybercriminals. “This attack remains virtually invisible to developers and security teams,” warned Pillar Security researchers.
Pillar reports that generative AI coding tools have become essential for developers, with a 2024 GitHub survey revealing that 97% of enterprise developers rely on them.
As these tools shape software development, they also create new security risks. Hackers can now exploit how AI assistants interpret rule files—text-based configuration files used to guide AI coding behavior.
These rule files, often shared publicly or stored in open-source repositories, are usually trusted without scrutiny. Attackers can inject hidden Unicode characters or subtle prompts into these files, influencing AI-generated code in ways that developers may never detect.
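To see why such payloads go unnoticed, consider how zero-width Unicode characters behave. The short Python sketch below is only an illustration of the principle, not the exact payload Pillar Security described: two rule-file lines that look identical on screen can differ in content, because the inserted characters render as nothing in most editors and diff views.

```python
# Illustration only: zero-width Unicode characters hide content in plain sight.
visible_rule = "Always follow the project style guide."

# U+200B (zero-width space) and U+2062 (invisible times) render as nothing in
# most editors, so this string looks identical to the one above.
hidden_payload = "Always follow\u200b the project\u2062 style guide."

print(visible_rule)                            # prints the same text...
print(hidden_payload)                          # ...as this line does
print(len(visible_rule), len(hidden_payload))  # 38 vs 40: extra characters are present
print(visible_rule == hidden_payload)          # False
```

A real attack would smuggle in far more than two stray characters, but the effect is the same: what the developer reviews and what the AI assistant actually ingests are no longer the same text.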
Once introduced, these malicious instructions persist across projects, silently spreading security vulnerabilities. Pillar Security demonstrated how a simple rule file could be manipulated to inject malicious code.
By using invisible Unicode characters and linguistic tricks, attackers can direct AI assistants to generate code containing hidden vulnerabilities—such as scripts that leak sensitive data or bypass authentication mechanisms. Worse, the AI never alerts the developer about these modifications.
“This attack works across different AI coding assistants, suggesting a systemic vulnerability,” the researchers noted. Once a compromised rule file is adopted, every subsequent AI-generated code session in that project becomes a potential security risk.
This vulnerability has far-reaching consequences, as poisoned rule files can spread through various channels. Open-source repositories pose a significant risk, as unsuspecting developers may download pre-made rule files without realizing they are compromised.
Developer communities also become a vector for distribution when malicious actors share seemingly helpful configurations that contain hidden threats. Additionally, project templates used to set up new software can unknowingly carry these exploits, embedding vulnerabilities from the start.
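One practical way to treat downloaded rule files and templates with the scrutiny they currently lack is to scan them for characters that should never appear in plain-text configuration. The following is a minimal defensive sketch, not a vetted blocklist or an official tool from either vendor; the script name and the set of flagged characters are assumptions chosen for illustration.

```python
"""Sketch of a defensive check: flag invisible or bidirectional-control
Unicode characters in rule files before trusting them."""
import sys
import unicodedata
from pathlib import Path

# Characters that render invisibly or reorder displayed text; their presence
# in a plain-text rule file is a strong signal that something is hidden.
SUSPICIOUS = {
    "\u200b", "\u200c", "\u200d", "\u2060", "\ufeff",  # zero-width characters
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
}

def scan(path: Path) -> list[tuple[int, int, str]]:
    """Return (line, column, codepoint name) for each suspicious character."""
    findings = []
    text = path.read_text(encoding="utf-8", errors="replace")
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in SUSPICIOUS:
                findings.append((lineno, col, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return findings

if __name__ == "__main__":
    for name in sys.argv[1:]:
        for lineno, col, codepoint in scan(Path(name)):
            print(f"{name}:{lineno}:{col}: {codepoint}")
```

Run against any rule file pulled from a repository, community post, or project template (for example, `python scan_rules.py path/to/rule_file`), it prints the line and column of every hidden character it finds, turning an invisible modification into something a reviewer can actually see.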
Pillar Security disclosed the issue to both Cursor and GitHub in February and March 2025. However, both companies placed the responsibility on users. GitHub responded that developers are responsible for reviewing AI-generated suggestions, while Cursor stated that the risk falls on users managing their rule files.