Why gstack Hit 60K Stars in Two Weeks
gstack went from zero to 60,000 GitHub stars in two weeks, becoming a lightning rod for questions about AI development tooling. Its role-based workflow structure resonated with developers tired of generic chatbots, but critics questioned whether the innovation was real or simply amplified by Y Combinator's CEO sharing his personal setup.

When Garry Tan shared his personal Claude Code setup in late March, it gained nearly 20,000 GitHub stars and over 2,200 forks almost immediately. The number has since climbed past 60,000. Product Hunt picked it up. Hacker News threads multiplied. For a repository consisting of 23 text files of role-based prompts, the velocity felt unprecedented, and that itself became controversial.
The Two-Week Explosion
The trajectory was steep. After Tan's initial tweet, gstack topped GitHub's trending charts, adding over 4,000 stars in a single week. The timing aligned with Claude Code's growing collection of skills, but the scale of attention stood out. Early validation came quickly: a CTO friend of Tan's reported that gstack instantly found a security flaw in his company's code, calling it "god mode."
Hacker News discussions reflected interest from developers exploring AI dev stacks. The momentum also invited scrutiny about whether the tool itself or its creator's platform drove adoption.
What gstack Does
Strip away the buzz and you're looking at workflow decomposition. gstack provides 23 specialized tools simulating roles like CEO, Designer, Eng Manager, QA, Release Manager, and Doc Engineer in Claude Code. Each skill handles a different development phase—planning, engineering review, testing, shipping—rather than throwing everything at a generic bot.
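The decomposition idea can be sketched in a few lines. This is a hypothetical illustration of the pattern, not gstack's actual code: each development phase maps to a specialized system prompt, and a small dispatcher pairs the right role with the task instead of sending everything to one generic assistant. The role names follow the article; the prompt text and function names are invented for the example.

```python
# Hypothetical sketch of role-based workflow decomposition.
# Each phase gets a specialized system prompt rather than one
# generic assistant handling every task identically.
ROLE_PROMPTS = {
    "planning": "You are the Eng Manager. Break the feature into scoped tasks.",
    "review": "You are the QA engineer. Hunt for edge cases and regressions.",
    "shipping": "You are the Release Manager. Verify the changelog and version bump.",
}

def build_messages(phase: str, task: str) -> list[dict]:
    """Select the role prompt for a phase and pair it with the user's task."""
    system = ROLE_PROMPTS.get(phase)
    if system is None:
        raise ValueError(f"unknown phase: {phase}")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# Routing a review task picks up the QA persona automatically.
msgs = build_messages("review", "Audit the auth middleware for injection flaws.")
```

The point of the pattern is that the phase, not the user, decides which persona the model adopts, which is what enforces decomposition into stages.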
The technical details: gstack includes a sub-200ms persistent Chromium daemon for browser automation, eliminating cold start penalties in agentic workflows. The appeal to developers frustrated with undifferentiated AI assistants is straightforward: role-based specialization enforces decomposition into operational stages rather than treating every task identically.
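The warm-daemon idea behind that sub-200ms figure is a standard pattern: launch the browser once, keep the process alive, and hand the same handle to every subsequent request so only the first caller pays the cold start. A minimal sketch, with a stand-in launch function rather than a real Chromium start (gstack's actual mechanism isn't documented here):

```python
import threading

class BrowserDaemon:
    """Keeps one long-lived browser process warm so each automation
    request skips the multi-second cold start. `launch` stands in for
    actually starting Chromium (e.g. via Playwright or raw CDP)."""

    def __init__(self, launch):
        self._launch = launch          # expensive startup, paid at most once
        self._browser = None
        self._lock = threading.Lock()  # guard against concurrent first calls

    def get(self):
        with self._lock:
            if self._browser is None:
                self._browser = self._launch()  # cold start happens here only
            return self._browser

# Demonstrate that the launch cost is paid exactly once.
launches = 0
def fake_launch():
    global launches
    launches += 1
    return object()  # stand-in for a browser handle

daemon = BrowserDaemon(fake_launch)
first = daemon.get()
second = daemon.get()  # reuses the warm handle, no second launch
```

Every agentic step after the first then talks to an already-running browser, which is where the cold-start penalty disappears.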
The "Just Prompts" Criticism
The backlash was swift. Critics called gstack "just a bunch of prompts in a text file," arguing its viral growth owed more to Tan's status as Y Combinator CEO than to innovation. The question has merit: what separates valuable tooling from well-marketed configuration files?
Prompt engineering's legitimacy remains debatable in developer circles. Text files containing instructions don't fit traditional definitions of "tooling," yet they change workflows when applied consistently. The criticism isn't dismissible, but neither is the adoption pattern. Both perspectives reflect uncertainty about how to evaluate contributions in AI-assisted development.
Platform Amplification Effect
Tan's position amplified reach. Would a repository with identical content from an unknown developer have charted the same trajectory? Almost certainly not. But that doesn't invalidate the tool's utility. Platform effects and value aren't mutually exclusive—they often compound.
The uncomfortable truth is that attribution becomes murky when status and substance overlap. gstack forces the question into the open rather than resolving it.
Goose and Other Approaches
gstack exists alongside alternatives like Goose AI, which focuses on autonomous code generation rather than workflow stages. Where Goose optimizes for a single autonomous, prompt-driven run, gstack enforces role-specific skills across distinct phases. Neither approach is superior; they reflect different mental models for AI-assisted development.
The diversity matters. Developers benefit from options that match their workflow preferences rather than converging on a single paradigm before the space matures.
The Cyber Psychosis Mention
Tan's disclosure that he experienced "cyber psychosis" from intense AI coding assistant use deserves attention. Heavy tool usage patterns carry cognitive effects worth examining, not sensationalizing. The comment signals something about workflow intensity that the industry hasn't processed yet.
What the Momentum Signals
gstack became a referendum on evaluation criteria for AI development tooling. The structure-versus-status debate, the prompts-versus-innovation question—these tensions reveal an industry still figuring out what makes contributions valuable. Whether gstack itself endures matters less than what its trajectory exposed about where AI-assisted development is heading: toward specialized roles, workflows, and uncomfortable questions about how we measure innovation when the substrate is language itself.
Repository: garrytan/gstack — "Use Garry Tan's exact Claude Code setup: 23 opinionated tools that serve as CEO, Designer, Eng Manager, Release Manager, Doc Engineer, and QA"