AI Coding Framework Hits 27K Stars by Deleting Bad Code
Superpowers enforces structured development workflows on AI coding agents—mandatory design phases, automatic test-driven development, and deletion of any code written before its tests exist. Developers report 2+ hour autonomous sessions and output exceeding that of entire engineering teams, but installation bugs and framework conflicts suggest growing pains in a viral movement reshaping AI-assisted development.

A GitHub coding framework gained 1,406 stars in a single day by enforcing test-driven development: it deletes code written before tests exist.
Superpowers launched in October 2025 alongside Claude Code's new plugin marketplace, with two-command installation and no configuration files. By mid-November 2025, it had claimed the #1 spot on GitHub Trending with over 27,000 stars.
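The two-command installation runs inside Claude Code itself via the plugin marketplace. Per the project's README at launch, it reportedly looks like the following (exact syntax may have changed since; check the repository before relying on it):

```
/plugin marketplace add obra/superpowers-marketplace
/plugin install superpowers@superpowers-marketplace
```

No configuration files are involved; the framework's skills activate from context once the plugin is installed.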
From Prompt Engineering to Enforced Methodology
The problem Superpowers targets is well documented: developers using Claude Code reported agents that "jump straight to code without understanding requirements, skip testing, and produce inconsistent results".
Superpowers makes structured workflows mandatory. The framework enforces a sequence: design → plan → implement → test. Skills activate automatically based on task context.
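The enforced ordering can be pictured as a small state machine. This is an illustrative sketch only—Superpowers implements the gating through skill prompts and agent instructions, not Python—but the logic it imposes resembles this:

```python
# Sketch of the mandated phase order (not Superpowers' actual code).
PHASES = ["design", "plan", "implement", "test"]

class WorkflowGate:
    """Blocks any phase whose predecessors have not completed."""

    def __init__(self):
        self.completed = []

    def enter(self, phase: str) -> None:
        expected = PHASES[len(self.completed)]
        if phase != expected:
            raise RuntimeError(
                f"Phase '{phase}' blocked: '{expected}' must come first"
            )
        self.completed.append(phase)

gate = WorkflowGate()
gate.enter("design")
gate.enter("plan")
# gate.enter("test")  # would raise: 'implement' must come first
```

The point of the mandatory ordering is that the agent cannot skip ahead to implementation, no matter how the prompt is phrased.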
The TDD skill has teeth. Write implementation code before tests, and the framework deletes it. Developers report 2+ hour coding sessions where Claude follows test-driven development, enforces architecture, and completes refactors without supervision.
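The deletion rule can be sketched as a policy over the agent's workspace. Again a hypothetical illustration—the framework enforces this through agent instructions rather than a file scanner—assuming a conventional `test_<module>` naming scheme:

```python
# Hypothetical sketch of the "delete untested code" policy.
from pathlib import Path

def has_tests(impl_path: str, workspace: dict[str, str]) -> bool:
    """An implementation file survives only if its test file exists."""
    test_name = f"test_{Path(impl_path).name}"   # e.g. test_parser.py
    return any(Path(p).name == test_name for p in workspace)

def enforce_tdd(workspace: dict[str, str]) -> dict[str, str]:
    """Drop implementation files written before their tests."""
    return {
        path: content
        for path, content in workspace.items()
        if Path(path).name.startswith("test_") or has_tests(path, workspace)
    }

files = {
    "src/parser.py": "...",
    "src/lexer.py": "...",            # no test written first
    "tests/test_parser.py": "...",
}
surviving = enforce_tdd(files)
# src/lexer.py is deleted because tests/test_lexer.py never existed
```

Harsh as it sounds, the rule removes the agent's most common failure mode: writing plausible-looking code that nothing verifies.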
The Oracle Engineer Who Outproduced His Entire Team
One developer claimed "My personal output now exceeds what my entire teams at Oracle Cloud Infrastructure could produce" after implementing Superpowers workflows.
A developer working on a large Python codebase shared by ~200 engineers reported being "absolutely blown away by the quality increase" when using Superpowers skills.
The framework uses persuasion principles (authority, commitment, scarcity, social proof) to influence LLM behavior under stress. Subagents start fresh for each task with clear instructions and specific goals, avoiding the accumulated pressure that degrades agent performance over extended sessions.
Infrastructure Limitations
The repository currently has 15 open bugs, including skills that don't auto-trigger, Windows freezes after installation, and directory space errors. Users also reported "superpower degradation since Anthropic Skills release", suggesting conflicts between Superpowers and Anthropic's native skills system.
Some Hacker News commenters remain skeptical: "I've tried some of those 'frameworks' for claude code, but it's difficult to measure any objective improvement." Others questioned whether "this concept of skills is that much better than having custom commands+sub-agents."
Simon Willison described it as one of the "most creative uses of coding agents" he knows. One Hacker News commenter noted: "I can't recommend this post strongly enough. The way Jesse is using these tools is wildly more ambitious than most other people."
What Engineering Teams Need to Know
The superpowers-marketplace contains 20+ battle-tested skills, with community-maintained repositories housing 100+ specialized subagents. Superpowers now supports OpenCode in addition to Claude Code and Codex.
As AI coding becomes standard, competitive advantage shifts to teams who master structured agentic workflows. The question isn't whether to use AI agents—it's whether you'll enforce discipline or continue supervising every decision.
obra/superpowers
An agentic skills framework & software development methodology that works.