Standards-as-Code: What Vercel's 57 React Rules Teach Us About Enforcing Quality

Chad Metcalf · February 24, 2026 · 3 min read

Vercel published 57 React performance rules. Not "write efficient components." Specific: "avoid barrel imports from lucide-react," "parallelize independent awaits with Promise.all," "use dynamic imports for components over 50KB." Tiered as CRITICAL, HIGH, and MEDIUM. Written at the level where a machine could check them.
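Two of those rules are easy to see in code. A minimal sketch, with hypothetical `getUser`/`getPosts` fetchers standing in for real data loading:

```typescript
// "Parallelize independent awaits with Promise.all": sequential awaits
// cost t(user) + t(posts); Promise.all costs max(t(user), t(posts)).
async function loadDashboard(
  getUser: () => Promise<string>,
  getPosts: () => Promise<string[]>
) {
  const [user, posts] = await Promise.all([getUser(), getPosts()]);
  return { user, posts };
}

// "Use dynamic imports for components over 50KB": load heavy code on
// demand instead of in the initial bundle. The module path is illustrative.
// const { HeavyChart } = await import("./HeavyChart");
```

Note that `Promise.all` still fails fast if either promise rejects, which is usually what you want for independent fetches that are both required.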

Most teams that encounter rules like these do one of two things. They drop them in a wiki. Or they paste them into a coding agent's system prompt. Both depend on humans remembering to apply them.

The wiki fails silently. It's there. Nobody reads it. A new developer joins and ships three PRs with barrel imports; nobody catches them, because nobody's looking for them during review.

The agent prompt has a subtler failure mode. Not everyone on the team uses the same coding tool. Not everyone uses the same config. The engineer who knows about the rule applies it when they use the agent. The engineer who doesn't, doesn't. There's no signal during review because the PR looks fine at first glance.

Both approaches treat standards as documentation. Documentation is advice.

The Right Level of Specificity

What makes Vercel's rules interesting isn't that they're comprehensive. It's that they're written at the machine-checkable level.

"Write performant code" is not checkable. It's an aspiration. "Avoid barrel imports from lucide-react" is checkable. Either the import exists or it doesn't. The check doesn't need to understand intent. It reads the file.

Most teams go wrong here: they write principles when they need rules. "APIs should be well-designed" doesn't become a check. "Every new API endpoint needs rate limiting" does.

Vercel did the hard work already, at least for React performance. They enumerated the violations, made them concrete, and tiered them by severity. The natural next step is treating those rules as checks, not guidelines.

Standards-as-Code

The pattern is the same as infrastructure-as-code: move the thing that existed in someone's head or a Google Doc into the repo, version-controlled, running automatically.

Each check is a markdown file in your repo. It describes a standard at the level of specificity where a machine can evaluate it. In Continue, checks run as full AI agents: they read files, run commands, and produce suggested diffs. The output appears as a native GitHub status check on every PR. It passes silently or fails with a fix already attached.
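As a sketch of the idea (the wording and layout here are mine; see Continue's documentation for the exact conventions a check file follows), such a file might read:

```markdown
# No barrel imports from lucide-react

Flag any `import { ... } from "lucide-react"`. Suggest replacing it with a
per-icon deep import so the bundler ships only the icons the page uses.

Severity: CRITICAL
```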

The key properties: it doesn't forget. It doesn't get tired on Friday afternoon. It runs on every PR regardless of who opened it or who's reviewing it. A new developer is held to the same standard as the senior engineer who wrote the rules.

Compare that to a code review comment. It exists in one PR and has to be re-noticed and re-enforced on every subsequent one. That's not enforcement; it's advice with a short memory.

The Trust Arc

You don't encode all 57 rules on day one.

Start with the CRITICAL tier. Vercel's CRITICAL rules cover violations that cause real, measurable damage: things that spike bundle size, block rendering, or break server components in ways that hit users directly. That's 5 or 6 rules.

Write those as checks. Run them for a few sprints. Watch what they catch. You'll find PRs that would have slipped through because the reviewer was focused on architecture and didn't scan imports. That's the check doing its job.

Once the team trusts that the checks catch what they claim to catch without false positives, expand to HIGH. Then MEDIUM. If you start with all 57 rules, engineers are debugging false positives before the checks have proved their value. That's how standards-as-code dies before it ships.

What This Actually Changes

Vercel's 57 rules are published. Teams that read them and apply them manually will apply them inconsistently — humans don't maintain state the way machines do.

The hard-won knowledge in those rules was probably learned the painful way: bundle sizes that surprised teams, performance regressions that made it to production, reviewers catching the same class of mistake over and over until someone wrote it down.

Writing it down was the first step. Making it run on every PR is the second.

Start with one rule and watch it run. Fork the Vercel performance lab, create a PR, and see the checks evaluate React Server Component patterns on your code.