102k Stars: Developers Reverse-Engineer AI Coding Tools
A GitHub repository with 102k stars compiles extracted system prompts from major AI coding assistants, revealing a developer revolt against black-box tooling. The viral traction exposes tension between AI companies' proprietary moats and engineers' need for transparency in tools they depend on daily.

A GitHub repository compiling extracted system prompts from AI coding assistants hit 102,000 stars in two months. The repo contains over 30,000 lines of internal instructions, tool schemas, and constraints from Cursor, Devin, v0, Replit, and a dozen other platforms. With 25,000 forks, developers are downloading production AI logic to understand what they're trusting with their codebases.
The Repository That Became a Transparency Movement
The collection addresses the challenge of accessing hidden system instructions that govern how AI assistants behave inside IDEs. Unlike general prompt repositories, this one targets coding tools specifically, exposing the complete workflows these agents follow when generating code suggestions or debugging errors.
After trending from 56.4k to 102k+ stars since March 2025, the repository has become a reference for developers auditing AI tools in their workflows. The growth came from Hacker News discussions about prompt security, where engineers questioned whether they could trust systems whose instructions remained invisible.
Why Developers Are Reverse-Engineering Their Tools
The prompts appear in Cline project discussions as blueprints for building open-source alternatives. Developers in community forums like Pickaxe and Ottomator cite the repository when analyzing how tools like Cursor and Lovable structure their internal logic.
The use case is about trust. When an AI assistant autocompletes code in your IDE, you're executing instructions written by a black box. Engineers want to audit those instructions the same way they'd review any third-party dependency. The extracted prompts function as unofficial documentation for systems that don't publish their own.
The Irony: ZeroLeaks Selling Solutions to Its Own Evidence
The repository creator operates ZeroLeaks, a service for auditing prompt vulnerabilities. The business model creates circular logic: expose prompts to demonstrate the problem, then sell protection against exposed prompts. The pitch works because the repository itself proves that system instructions can become attack vectors—if someone extracts the constraints an AI follows, they can craft inputs that bypass safety checks.
AI startups face a real problem. The security concerns around exposed instructions aren't theoretical. When your product's core logic lives in a prompt that users can extract, you're defending a moat made of client-side data.
Leaked vs. Extracted: The Semantic Battle
Terminology matters. Describing prompts as "leaked" implies unauthorized access, while "extracted" suggests they were accessible by design. The repository's name uses "leaked," framing the content as exposés rather than documentation. Competing repositories like asgeirtj/system_prompts_leaks follow the same convention, though this one differs by focusing on coding AI tools specifically.
Whether extraction methods violate terms of service depends on how prompts were obtained. If they're visible in browser developer tools or API responses, they're user-facing data. If they required circumventing technical protections, the legal landscape shifts.
The Trade Secret Question Developers Are Asking
AI companies treat system prompts as proprietary intellectual property—the secret sauce that differentiates their autocomplete from competitors. Developers counter that instructions governing code execution should be transparent, not hidden behind trade secret claims.
The 102,000 stars represent a vote. Engineers are signaling that opacity isn't acceptable when the tool has write access to production repositories. The question isn't whether prompts provide competitive advantage—they do. It's whether that advantage should override the user's right to audit what they're shipping code with.
Should system prompts be trade secrets or user-facing documentation? The repository's momentum suggests developers have already decided.
x1xhlol/system-prompts-and-models-of-ai-tools
FULL Augment Code, Claude Code, Cluely, CodeBuddy, Comet, Cursor, Devin AI, Junie, Kiro, Leap.new, Lovable, Manus, NotionAI, Orchids.app, Perplexity, Poke, Qoder, Replit, Same.dev, Trae, Traycer AI, VSCode Agent, Warp.dev, Windsurf, Xcode, Z.ai Code, Dia & v0. (And other Open Sourced) System Prompts, Internal Tools & AI Models