Discipline of AI-Assisted Development using AGENTS.md
When I open a new project and don't see an AGENTS.md or CLAUDE.md file, I start wondering if the team is using AI at all. And when I see a codebase full of redundant logic, I wonder if they're using it too much without any guardrails.
Both scenarios miss the point of what AI can actually bring to development.
What These Files Are For
An AGENTS.md or CLAUDE.md file is a set of instructions that AI tools read when working on your project. Think of it as onboarding documentation, but for your AI assistant instead of a new hire.
These files typically include:
- Project architecture and conventions
- File organization patterns
- Naming standards
- Testing expectations
- What to avoid
Without this context, AI is guessing. It doesn't know your team prefers service classes over fat controllers. It doesn't know you have a shared utility for date formatting. It doesn't know the codebase already has a method that does exactly what it's about to create again.
The result is code that technically works but doesn't fit. Duplicate logic scattered across files. Inconsistent patterns that make the codebase harder to maintain over time.
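To make that failure mode concrete, here's a small sketch (all function names are hypothetical) of the kind of duplication this produces: a generated helper that re-implements a date formatter the codebase already has, with a subtly different output.

```python
from datetime import datetime

# Existing shared utility the team already uses (hypothetical example).
def format_display_date(dt: datetime) -> str:
    """Canonical date format used across the codebase."""
    return dt.strftime("%b %d, %Y")

# What an AI without project context tends to generate: a near-duplicate
# in another file, with a slightly different format string.
def prettify_date(dt: datetime) -> str:
    return dt.strftime("%d %b %Y")

d = datetime(2024, 3, 5)
print(format_display_date(d))  # Mar 05, 2024
print(prettify_date(d))        # 05 Mar 2024 -- same job, different output
```

Both functions "technically work," which is exactly why the duplication slips through review: nothing is broken until two parts of the UI start showing dates differently.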
The Real Value of AI in Development
AI is very good at patterns. That's the entire foundation of how LLMs work. This makes them useful for things humans often miss:
- Spotting inconsistencies
- Finding edge cases
- Catching hidden bugs
- Recognizing when something doesn't match established conventions
This is where the value is. Not in generating code faster, but in catching the things we overlook. AI can review a pull request and notice that a new method duplicates functionality that already exists three files away. It can flag that a variable name doesn't match the naming pattern used everywhere else.
But it only does this well if you give it the context to understand what "correct" looks like in your project.
The Problem with Vibe Coding
There's a style of development floating around where you sit on the couch with your laptop and let AI build entire features while you steer from a distance. Prompting your way through an application without ever really reading the code it produces.
I think there's a place for this. If you're building something fun as a proof of concept, exploring an idea, or learning how something works, go for it. Let the AI run. See what it produces. Break things.
But this is not how you build production software.
The moment you're working on something real that needs to last, the approach has to change. You can't blindly accept what AI gives you. You have to read it, understand it, and often reject it.
Build Better, Not More
The temptation with AI is to use it as a shortcut. Write more code faster. Ship more features in less time. But speed without direction just creates more to maintain later.
I've worked on projects where AI-generated code introduced the same helper function in three different places. Where similar validation logic got written slightly differently each time it was necessary. Where the path of least resistance was always "just generate something new" instead of finding what already exists.
This is what happens when AI is treated as a replacement for understanding your own codebase.
The better use of AI is to build better, not more. Let it suggest approaches, then actually evaluate whether they fit. Let it review your work and catch what you missed. That's where the value is.
AI Is Only as Good as the Person Using It
None of this works automatically. AI doesn't arrive with opinions about how your project should be structured. It doesn't inherently know what good looks like. It reflects what you give it.
Give it clear instructions through files like AGENTS.md, and it has a shot at producing code that fits your standards. Review its output critically, and you catch the redundancies before they ship. Treat it as a collaborator that needs guidance, not a magic code generator.
The developers getting the most out of AI right now aren't the ones prompting the fastest. They're the ones who've taken the time to set up the guardrails, define the expectations, and maintain the discipline to reject and refine until the output actually meets their standards.
A Practical Starting Point
I usually start a new project by creating an AGENTS.md file with conventions specific to that codebase. Often I'll copy one from a previous project and adjust it. It doesn't need to be exhaustive on day one, just enough to establish the patterns you want followed.
Since Claude Code doesn't support AGENTS.md yet, I created a simple CLAUDE.md file that forces it to read my conventions:
```markdown
# CLAUDE.md

**BEFORE ANY TASK:** Read and follow [AGENTS.md](./AGENTS.md) completely.
```

That's it. One line that bridges the gap until native support exists. The conventions live in AGENTS.md where other tools can use them too, and Claude gets pointed there automatically.
What a Good AGENTS.md Looks Like
You don't need to write a novel. A useful AGENTS.md covers the essentials and grows over time. Here's a structure that works well:
```markdown
# AGENTS

This document defines the standards and contribution rules for all Agents working on this repository.

## Purpose
What is this project? What rules must agents follow?

## Tech Stack
What technologies are in use? What should not be added?

## Core Principles
The non-negotiable standards for this codebase.

## Content Standards
How should files be structured? What formatting rules apply?

## Component Guidelines
When and how to create reusable pieces.

## Dependency Rules
What can be added? What requires approval?

## Required Checks
What must pass before committing changes?

## Forbidden Practices
What should never happen in this codebase?
```

Each section should be direct and specific to your project. Under Purpose, state what the project is and set expectations upfront. Under Core Principles, list the things that matter most, like "consistency over novelty" or "no server-side assumptions." Under Forbidden Practices, call out the things you've seen go wrong before, like renaming URLs without permission or introducing unnecessary dependencies.
The goal isn't to anticipate every scenario. It's to give AI enough context to make reasonable decisions and enough boundaries to avoid the mistakes you've already learned from.
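To show what "direct and specific" can look like in practice, here's a hypothetical filled-in fragment for two of those sections (the project details are invented for illustration):

```markdown
## Core Principles
- Consistency over novelty: match existing patterns before inventing new ones.
- Search before you create: check for an existing helper before writing a new one.
- Small, reviewable changes: one concern per commit.

## Forbidden Practices
- Never rename routes or public URLs without explicit approval.
- Never add a dependency for something the standard library already covers.
- Never duplicate validation logic; extend the shared validators instead.
```

Notice that each line is a rule the AI can actually check its output against, not a vague aspiration.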
A Note on Formatting
One thing I haven't fully explored yet: Claude and other AI tools can parse XML-style tags (like <rules> and </rules>) to group instructions. I've seen some developers use this for more complex rule sets. I'm planning to test whether it actually performs better than markdown. If I find anything useful, I'll write about it. For now, markdown has been enough.
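For reference, here's a rough sketch of what that tag-based grouping tends to look like (I haven't benchmarked this myself, and the tag and attribute names here are arbitrary):

```markdown
<rules>
  <rule priority="high">Run the full test suite before committing.</rule>
  <rule>Reuse existing utilities instead of writing new helpers.</rule>
</rules>
```

The idea is that explicit open/close tags make rule boundaries unambiguous in long instruction files, at the cost of readability for humans.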
Put in the Work
If you're using AI in development, and you don't have an AGENTS.md or similar file in your project, start there. Write down your conventions. Document the patterns. Give your AI the context it needs.
And when it produces something, don't just accept it. Read it. Question it. Compare it to what already exists. The goal isn't to let AI do your job. The goal is to use it as a tool that makes your work better.
Thanks for reading. If you want to stay updated on what I'm building and learning, follow me on Bluesky.
