Maintaining small codebases with spec-driven development

When I was hit with the sudden urge to build a small CLI in Go, I of course went directly to the Agent mode input box and began typing out what I wanted. At this point, I’d built enough trust in Continue to know that it could generate me a working proof-of-concept.
A few paragraphs later, I realized that I was afraid of losing this prompt: it already contained a lot of valuable intent. So I pasted it into a markdown file titled `spec.md` and finished typing out the directions.
Then, in a habitual flurry of keystrokes, I made a git commit. In this moment, some far phrase-coining corner of my brain beckoned me with the following: 🔮 Spec-Driven Development 🔮
The rest of this post is a very straightforward description of my experience generating (and re-generating) a project from a spec.
Part I: Minimum Viable Spec
As mentioned earlier, I began by typing in Continue’s input box, but decided to move over to a markdown file. When I made this transition, I also decided to fill out the rest of the spec with a language model. It started as basically a list of example CLI commands:
```bash
# Create .rules folder
rules init

# Download
rules add vercel/nextjs

# Create new rule
rules create
```
And turned into this full spec:
# Go CLI Specification: "Rules" Tool
## Overview
A command-line tool to create, manage, and convert rule sets for code guidance across different AI code assistant platforms. The tool allows for creating rule sets in different formats and locations, managing rules, and publishing them to a central registry.
## Technologies & Dependencies
1. **Language**: Go (1.20+)
2. **CLI Framework**: [Cobra](https://github.com/spf13/cobra) for command structure
3. **Configuration Management**: [Viper](https://github.com/spf13/viper) for configuration files
4. **File Operations**: Standard Go libraries (os, io/ioutil)
5. **JSON Handling**: encoding/json for parsing and writing rule configs
6. **Interactive Prompts**: [promptui](https://github.com/manifoldco/promptui) for interactive rule creation
7. **HTTP Client**: net/http for API calls to registry
## Project Structure
```
rules-cli/
├── cmd/
│   ├── root.go        # Main command definition
│   ├── init.go        # Init command
│   ├── create.go      # Create command
│   ├── add.go         # Add command
│   ├── remove.go      # Remove command
├── internal/
│   ├── config/        # Configuration management
│   ├── formats/       # Format handling (cursor, default, etc)
│   ├── registry/      # Registry client
│   ├── ruleset/       # Rule set management
│   └── generators/    # Rule generators
├── main.go            # Application entry point
├── go.mod             # Go module definition
└── go.sum             # Go module checksums
```
## Data Structures
### rules.json
The `rules.json` file goes in the root of the project, adjacent to the rules directory.
```json
{
  "name": "ruleset-name",
  "description": "Description of the ruleset",
  "author": "Author Name",
  "license": "Apache-2.0",
  "version": "1.0.0",
  "rules": {
    "redis": "0.0.1",
    "workos/authkit-nextjs": "0.0.1"
  }
}
```
### Rule file format (.md with front matter)
```md
---
# All of these fields are optional
description: Description of the rule
tags: [tag1, tag2]
globs: *.{jsx,tsx}
alwaysApply: false
---
This is the body of the rule. It supports Markdown syntax.
```
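For illustration, here is a rough sketch of the Go types these two files might map onto (field names mirror the examples above; the YAML struct tags assume a front matter parser such as gopkg.in/yaml.v3, which is not in the dependency list):
```go
package ruleset

// RulesFile mirrors rules.json (a sketch, not a prescribed layout).
type RulesFile struct {
	Name        string            `json:"name"`
	Description string            `json:"description"`
	Author      string            `json:"author"`
	License     string            `json:"license"`
	Version     string            `json:"version"`
	Rules       map[string]string `json:"rules"` // e.g. "vercel/nextjs" -> "0.0.1"
}

// RuleFrontMatter mirrors the optional front matter fields of a rule file.
type RuleFrontMatter struct {
	Description string   `yaml:"description,omitempty"`
	Tags        []string `yaml:"tags,omitempty"`
	Globs       string   `yaml:"globs,omitempty"`
	AlwaysApply bool     `yaml:"alwaysApply,omitempty"`
}
```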
## Core Functionality
### 1. Rule Initialization (rules init)
- Creates initial rule directory structure
- Supports different formats via `--format` flag
- Default format creates `.rules/` directory
- Custom formats create `.{format}/rules/` directories
- Creates empty rules.json with basic structure
### 2. Rule Creation (rules create)
```bash
# Create new rules with interactive walkthrough that lets you choose triggers and write rules
rules create
rules create --tags frontend --globs *.{tsx,jsx} --description "Style guide for writing React components" "This is the body of the rule"
rules create --alwaysApply # Body not supplied, so will prompt for it interactively
```
- Interactive mode when parameters not supplied
- Supports flags for all rule properties (tags, globs, description, alwaysApply)
- Allows for stdin/editor input for rule body
- Creates a new rule (.md) file in the root of the rules directory
- Does not modify the rules.json file at all
### 3. Rule Importing (rules add)
```bash
rules add vercel/nextjs
```
- Adds the rule to rules.json "rules" object
- Downloads rule files from the registry to appropriate folder (e.g. `.rules/vercel/nextjs/`)
### 4. Rule Removal (rules remove)
```bash
rules remove vercel/nextjs
```
- Removes the rule from rules.json "rules" object
- Optionally deletes rule files from the local directory (with --delete flag)
- Provides confirmation prompt before deletion (can be bypassed with --force flag)
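To make the intended behavior concrete, here is a minimal Go sketch of the `rules init` flow described above (illustrative only; the function name, placeholder values, and error handling are assumptions rather than requirements):
```go
package ruleset

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// InitRuleset creates the rules directory and an empty rules.json next to it.
// The directory is ".rules" for the default format and ".{format}/rules" otherwise.
func InitRuleset(format string) error {
	dir := ".rules"
	if format != "" && format != "default" {
		dir = filepath.Join("."+format, "rules")
	}
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}

	// Basic structure only; the values here are placeholders.
	initial := map[string]interface{}{
		"name":        "",
		"description": "",
		"version":     "0.0.1",
		"rules":       map[string]string{},
	}
	data, err := json.MarshalIndent(initial, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile("rules.json", data, 0o644)
}
```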
## Command Specifications
### `rules init`
- **Flags**:
  - `--format string`: Set rule format (default, cursor, etc.)
- **Behavior**:
  - Creates directory structure
  - Initializes empty rules.json
  - Sets up format-specific configuration
### `rules create`
- **Flags**:
  - `--tags`: Comma-separated list of tags
  - `--globs`: Glob patterns to match files
  - `--description`: Short description
  - `--alwaysApply`: Flag to always apply rule
- **Args**:
  - Optional rule body as last argument
- **Behavior**:
  - Prompts for missing fields if not provided
  - Creates rule file in root of the rules directory
  - Does not modify the rules.json file
### `rules add`
- **Args**:
  - Name of ruleset to add
- **Behavior**:
  - Fetches ruleset from registry
  - Adds to rules.json "rules" object
  - Validates ruleset exists
### `rules remove`
- **Args**:
  - Name of ruleset to remove
- **Flags**:
  - `--delete`: Also delete rule files from disk
  - `--force`: Skip confirmation prompts
- **Behavior**:
  - Removes rule reference from rules.json
  - Optionally deletes rule files from disk
  - Confirms before destructive operations
Obviously not perfect, but it was a great scaffold and only required a handful of manual edits by me. I then referenced the file in Continue and asked it to generate the project: “Can you please implement the following spec in this folder? @spec.md”
Once Continue started generating code, I soon realized it was going to take a while (this initial spec included all 8-10 commands I wanted in the CLI). So I cut the spec down to two representative commands to let Continue generate a full project more quickly.
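For a sense of what came out, the command wiring followed the usual Cobra shape, roughly like this sketch (illustrative and written from memory, not the code Continue actually generated):
```go
// cmd/root.go (sketch)
package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

var rootCmd = &cobra.Command{
	Use:   "rules",
	Short: "Create, manage, and convert rule sets for AI code assistants",
}

// addCmd would normally live in cmd/add.go; it is shown inline here to keep
// the sketch self-contained.
var addCmd = &cobra.Command{
	Use:   "add [ruleset]",
	Short: "Add a ruleset to rules.json and download its files",
	Args:  cobra.ExactArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		fmt.Println("would fetch and record:", args[0])
		return nil
	},
}

// Execute runs the CLI; main.go does nothing but call cmd.Execute().
func Execute() {
	if err := rootCmd.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

func init() {
	rootCmd.AddCommand(addCmd)
}
```
Each of the other commands (init, create, remove) follows the same pattern in its own file under cmd/.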
Lessons
- Use an LLM to help generate your spec. It will type many of the obvious ideas faster than you can and include some details you hadn’t yet formalized.
- At first, include only the minimal sample of functionality needed to judge the direction of generation
Part II: Building from source
With the minimum viable spec in place, Continue was now able to generate the entire project in less than 100 seconds.
On the first try, I was happy with the folder structure, but wanted to swap out a library, so I noted this at the beginning of the spec. After the second try, I noticed that it had misunderstood the location where I wanted the CLI to place a file, so I clarified this in the spec. The third version was great—it built and ran exactly as expected.
In this stage, you could just ask Continue to update the project with a quick prompt, but it’s surprisingly cheap to regenerate. Early changes are so foundational that you probably don’t want to keep around the bias of a previous attempt. It’s better to start from scratch than to carry over legacy code and end up with a Frankenstein project.
Lessons
- Tweak until you are happy with the layout, style, tech stack, and basic behavior
- When you are making large structural changes, regenerate entirely from scratch!
Part III: Incremental Generation
Building from scratch is cheap, but not that cheap. Once the foundation was solid, it made sense to perform incremental updates. It was in this stage that I added back all of the functionality I had previously commented out.
The way I did this was:
- Make sure I have a clean working tree, nothing to commit
- Update the spec as desired
- Prompt Continue with “Please update the project based on the @diff changes to @spec.md”
- Review the new code
- Commit the spec and code changes together
Here’s an example of an early update: https://github.com/continuedev/rules/commit/46fa8c0048e9e0e21eef54c68da753b0b47f2b78
After a few iterations, I realized I was typing out a similar prompt repeatedly and decided to commit this to a prompt file as well:
I just updated @spec.md, which describes a project that lives in the current directory. Given the above @diff, please update the project so that it is up-to-date with the latest specification.
Lessons
- Don’t add functionality beyond the minimum viable spec until you are in the incremental generation stage
- Commit your spec file alongside the code changes so you can see the intent behind the changes
- Commit all of your prompts, even those used for updates. This is an important part of the “source code”.
Part IV: Challenges
The fact that I got this far without major blockers is exciting. Nothing of the sort could have been done even a year ago. I of course wonder how large a codebase this project would scale to. A primary challenge would be ensuring that Continue never misses a spot in the code. What makes me hopeful is the idea that this is a problem plaguing software engineers as well, and there are solutions: a) don’t write the kind of codebase where you have to update in two places to make a simple change (easier said than done), and b) if this happens, then document it. When possible, implicit knowledge should make it to the spec!
And some of the hurdles were really just opportunities to expand the workflow. For example, when I wanted to change code by hand, I worried about the spec getting out of sync. But this was quickly solved with a prompt:
I just made the update shown in the above @diff. Please modify @spec.md to align with the new changes such that it is still a complete description of the project.
A later struggle was keeping a clean spec. A single file only has so much carrying capacity before it becomes unwieldy, so eventually the need to refactor arose. Strangely, this felt just like programming: all of the principles of clean code applied (the Single Responsibility Principle, keeping things DRY, consistent naming conventions, etc.).
Should we all switch to spec-driven development?
Probably not full time, but I would absolutely recommend the exercise for a side project, or, if you’re adventurous, a smaller main project. It was a lot of fun. Instead of reviewing code, I got to review changes to a natural language spec, which was pretty easy. And I only needed to write code when I wanted to, or when it was faster.
I don’t expect that this will be a manageable way of maintaining massive projects anytime soon, but I do believe it will be very helpful for prototyping, or potentially even for maintaining small, focused projects.