How AGENTS.md Improves AI Coding Agent Efficiency

A practical guide to how AGENTS.md files improve AI coding agent efficiency, and how engineering teams can apply these patterns in real repositories.

Most teams using AI coding agents run into the same issue after the initial excitement: the agent is capable, but inconsistent.

Sometimes it is fast and precise. Other times it wanders, burns tokens, and needs multiple correction loops.

Recent research on AI coding workflows evaluates a practical fix for that problem: adding clear repository-level instructions through an AGENTS.md file.

What the Study Actually Tested

The authors evaluated AI agents on GitHub pull request tasks across:

  • 10 repositories,
  • 124 pull requests,
  • two setups: with and without an AGENTS.md file.

They measured operational outcomes like wall-clock runtime and token usage.

The Main Results in Plain English

With AGENTS.md present, agents showed:

  • lower median runtime,
  • lower output token consumption,
  • comparable task completion behavior.

In their setup, the median improvements were substantial: about 28.64% lower runtime and 16.58% lower output token usage.

If you run agents repeatedly in PR workflows, those gains compound quickly.

Why This Works

Agents are pattern followers. Ambiguous repos force them to infer intent from scattered files, comments, and conventions. That usually costs time and tokens.

A good AGENTS.md reduces that ambiguity by making expectations explicit:

  • coding style and architecture constraints,
  • test and validation requirements,
  • file editing boundaries,
  • commit and PR conventions,
  • and “do/do not” guardrails.

The less guessing the agent has to do, the more directly it can execute.

How Teams Can Apply This Immediately

The practical playbook is simple:

  1. Create an AGENTS.md at the repository root.
  2. Keep instructions specific and operational, not aspirational.
  3. Align the file with your real build, test, and review workflow.
  4. Update it when conventions or tooling change.

Treat it like a small but important piece of engineering infrastructure.

What to Include in AGENTS.md

Start lean, then iterate. A strong baseline usually includes:

  • project overview,
  • allowed tools and commands,
  • coding standards,
  • testing requirements,
  • forbidden actions,
  • and output format expectations.

Avoid vague guidance like “write clean code.” Instead, provide rules that are easy to execute and easy to verify.
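For illustration, a lean AGENTS.md covering that baseline might look like the sketch below. The project details, commands, and rules are hypothetical placeholders, not recommendations from the study; substitute your own build and review workflow.

```markdown
# AGENTS.md

## Project overview
Node.js service; source in `src/`, tests in `test/`.

## Allowed tools and commands
- Build: `npm run build`
- Test: `npm test`
- Lint: `npm run lint`

## Coding standards
- TypeScript strict mode; no `any` without a justifying comment.
- Keep functions focused; shared helpers live in `src/utils/`.

## Testing requirements
- Every behavior change needs a unit test in `test/`.
- Run `npm test` before proposing a diff; never leave tests failing.

## Forbidden actions
- Do not edit files under `generated/` or `vendor/`.
- Do not add new runtime dependencies without approval.

## Output format
- One focused change per pull request.
- Commit messages follow the Conventional Commits style.
```

Note that each rule is something an agent can execute or a reviewer can verify, which is what separates operational guidance from aspirational guidance.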

Important Caveats

These results are strong, but still context-dependent:

  • Results depend on task type.
  • Model and tooling versions can shift outcomes.
  • Different repositories may see different magnitudes of gain.

So the exact percentage gains will vary. But the direction of the result is intuitive and useful: better repository instructions generally make AI coding agents more efficient.

Bottom Line

If you want better behavior from autonomous coding agents, make your expectations explicit at the repository level. For most teams, an AGENTS.md file is a low-effort, high-leverage improvement.
