Breaking: Zero-click exploit in Orchids AI platform enables complete system takeover
Cybersecurity researcher Etizaz Mohsin has demonstrated a critical, unpatched vulnerability in the AI coding platform Orchids that allows attackers to fully compromise user systems without any victim interaction. The exploit, discovered in December 2025 and still unfixed as of February 2026, enables what is known as a 'zero-click attack', in which hackers can install malware, steal data, or access cameras and microphones without the victim downloading anything or clicking any links.
Mohsin, a respected researcher with a track record of uncovering dangerous flaws, including work on the Pegasus spyware, gained access to a BBC reporter's test project by exploiting a security weakness in Orchids' architecture. He inserted a single line of code among the thousands of lines automatically generated by the AI assistant, which allowed him to change the desktop wallpaper and create a 'Joe is hacked' notepad file on the reporter's machine. The platform claims one million users and adoption by top companies including Google, Uber, and Amazon, making the potential attack surface enormous.
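To illustrate the class of attack: the exact code used in the demonstration has not been published, so the snippet below is a hypothetical, benign stand-in. It shows how a single injected line can hide among ordinary AI-generated helper code and run automatically when the assistant executes the project with the user's privileges, with no click required.

```python
from pathlib import Path

# Ordinary AI-generated helper code, one of thousands of lines
# the user never reads.
def format_user_name(name: str) -> str:
    return name.strip().title()

# The hypothetical injected payload: one extra line that executes
# alongside the legitimate code. Here it only creates a harmless
# text file (echoing the 'Joe is hacked' notepad file from the
# BBC demo); a real attacker could run anything with the same
# system access the assistant already has.
Path("Joe is hacked.txt").write_text("Joe is hacked")

# The app still behaves normally, so nothing looks wrong.
print(format_user_name("  joe  "))
```

The point of the sketch is that the malicious line needs no exploit of its own: because vibe-coding platforms execute generated code automatically, hiding one line in the output is enough.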
Orchids represents a new category of 'vibe-coding' tools where users without technical skills can build apps and games by typing text prompts into a chatbot. The AI assistant automatically writes and executes code with deep system access so it can carry out tasks autonomously. According to Mohsin, this fundamental shift in how software is built has created security vulnerabilities that did not previously exist. The researcher spent weeks trying to contact the company, sending around a dozen messages by email, LinkedIn, and Discord, before receiving a response this week in which the team claimed it had 'possibly missed' his warnings because it is 'overwhelmed with inbound' messages.
The San Francisco-based company was founded in 2025 and has fewer than 10 employees, raising questions about security maturity at rapidly scaling AI startups. While Mohsin has not yet found similar flaws in competitors such as Claude Code, Cursor, Windsurf, and Lovable, experts say the incident should serve as a warning for the entire AI agent ecosystem: without the discipline, documentation, and code review that are hallmarks of traditional software development, AI-generated code often fails under attack.